Question:

I am using file_get_contents to get the title from a link, but it takes about 20-30 seconds to execute.

Update: I tried this code, but it brings back everything, including the HTML:

$page = fread(fopen($url, "r"), 1000);
$titleStart = strpos($page, '<title>') + 7;
$titleLength = strpos($page, '</title>') - $titleStart;
$meta['title'] = substr($page, $titleStart, $titleLength);

Answer:

When retrieving remote pages, the speed depends on your connection and on the remote server's response time. You cannot do much about either, and switching to an alternative HTTP retrieval method will only get you so far. If the remote page is large, it makes sense to fetch it only partially.
To download only part of the page, instead of file_get_contents, try:

$page = fread(fopen($url, "r"), 2048); // read only the first 2 KB
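Since most of the 20-30 seconds is connection and server time rather than download time, bounding the wait with a stream-context timeout at least caps the worst case. A minimal sketch; the function name fetchStart and the 5-second default are our own choices, not from the original answer:

```php
<?php
// Sketch: fetch only the first $bytes of a URL, giving up after $timeout
// seconds. fetchStart and the 5.0 s default are assumptions for illustration.
function fetchStart(string $url, int $bytes = 2048, float $timeout = 5.0): string
{
    // The 'http' context options only apply to http:// URLs; plain file
    // paths still work and simply ignore them.
    $ctx = stream_context_create(['http' => ['timeout' => $timeout]]);
    $fp = @fopen($url, 'r', false, $ctx);
    if ($fp === false) {
        return ''; // could not connect, or the connection timed out
    }
    $data = (string) fread($fp, $bytes);
    fclose($fp);
    return $data;
}
```

Note that the timeout bounds how long a read may block; it does not make a slow server faster.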
Then use this instead of your substr() method:

preg_match('#<title>(.*?)</title>#', $page, $match);
$meta["title"] = $match[1];
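A caveat worth handling: for network streams, a single fread() can return fewer bytes than requested (it stops at a packet boundary, per the PHP manual), and $match[1] is undefined when the regex finds no title in the fetched bytes. A more defensive sketch of the same idea; the function name fetchTitle is our own:

```php
<?php
// Sketch: read up to $maxBytes of the page in a loop (network fread() may
// return short), then extract <title> case-insensitively. Returns null on
// connection failure or when no title appears in the fetched bytes.
function fetchTitle(string $url, int $maxBytes = 2048): ?string
{
    $fp = @fopen($url, 'r');
    if ($fp === false) {
        return null;
    }
    $page = '';
    while (strlen($page) < $maxBytes && !feof($fp)) {
        $chunk = fread($fp, $maxBytes - strlen($page));
        if ($chunk === false || $chunk === '') {
            break;
        }
        $page .= $chunk;
    }
    fclose($fp);
    // [^>]* tolerates attributes on <title>; the s flag lets the title
    // span line breaks, i makes the match case-insensitive.
    if (preg_match('#<title[^>]*>(.*?)</title>#is', $page, $match)) {
        return trim($match[1]);
    }
    return null;
}
```

If the page declares its title later than the first 2 KB, raise $maxBytes; the loop still stops early at end-of-file.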