Closed: ollietreend closed this issue 7 years ago
Hello @ollietreend, sorry for the late reply.
I think your suggestion is really useful. I'd prefer it to be implemented either as an optional parameter passed to the constructor or as a setter method, something like setCrawlerOptions(array $options).
Please feel free to open a PR for this.
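Roughly something like this, just as a sketch of the two shapes (the method name is a placeholder until it's implemented):

<?php
// Option A: optional constructor parameter
$crawler = new \Arachnid\Crawler('http://example.com', 2, ['timeout' => 30]);

// Option B: a setter method
$crawler = new \Arachnid\Crawler('http://example.com', 2);
$crawler->setCrawlerOptions(['timeout' => 30]);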
Hello @ollietreend
The capability to pass options to the Guzzle client has now been added. As you suggested, this can be done either by passing the options as the third parameter to the constructor or by using the setCrawlerOptions method:
<?php
// third parameter is the options used to configure the Guzzle client
$crawler = new \Arachnid\Crawler('http://github.com', 2,
    ['auth' => array('username', 'password')]);

// or using the separate method `setCrawlerOptions`
$options = array(
    'curl' => array(
        CURLOPT_SSL_VERIFYHOST => false,
        CURLOPT_SSL_VERIFYPEER => false,
    ),
    'timeout' => 30,
    'connect_timeout' => 30,
);
$crawler->setCrawlerOptions($options);
Does this config now look like this, as there's no setCrawlerOptions method?
// options passed as the third constructor parameter
$options = array(
    'curl' => array(
        CURLOPT_SSL_VERIFYHOST => false,
        CURLOPT_SSL_VERIFYPEER => false,
    ),
    'timeout' => 30,
    'connect_timeout' => 30,
);
$crawler = new Crawler($url, $linkDepth, $options);
Never mind, I re-read the readme instructions:
<?php
// third parameter is the options used to configure the Guzzle client
$crawler = new \Arachnid\Crawler('http://github.com', 2,
    ['auth' => array('username', 'password')]);

// or by building the scrap client separately and passing it to the crawler
$options = array(
    'curl' => array(
        CURLOPT_SSL_VERIFYHOST => false,
        CURLOPT_SSL_VERIFYPEER => false,
    ),
    'timeout' => 30,
    'connect_timeout' => 30,
);
$scrapperClient = \Arachnid\Adapters\CrawlingFactory::create(
    \Arachnid\Adapters\CrawlingFactory::TYPE_GOUTTE, $options);
$crawler->setScrapClient($scrapperClient);
I have made a major change to the package in order to allow two types of scraper: one is the usual Goutte client, and the other is a headless-browser mode based on Symfony Panther that supports JavaScript rendering. That's why setting the crawler options is now done differently for each adapter.
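For reference, a minimal sketch of how this looks per adapter, based on the snippet above (the headless-browser constant name below is an assumption; check the CrawlingFactory class for the exact identifier):

<?php
// Goutte adapter: Guzzle-style options, exactly as shown above
$goutteClient = \Arachnid\Adapters\CrawlingFactory::create(
    \Arachnid\Adapters\CrawlingFactory::TYPE_GOUTTE,
    array('timeout' => 30, 'connect_timeout' => 30)
);
$crawler->setScrapClient($goutteClient);

// Headless-browser adapter (Symfony Panther): options are adapter-specific,
// so the same Guzzle array is not guaranteed to apply.
// $pantherClient = \Arachnid\Adapters\CrawlingFactory::create(
//     \Arachnid\Adapters\CrawlingFactory::TYPE_HEADLESS_BROWSER, // assumed constant name
//     $pantherOptions
// );
// $crawler->setScrapClient($pantherClient);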
It's not currently possible to configure a timeout for the Guzzle client which is used to make HTTP requests when spidering a site. With no value set, Guzzle defaults to a timeout of 0 – i.e. it waits indefinitely until a response is received. (Which arguably isn't a sensible default anyway.)
I'm trying to spider a site which contains a link to a dead server. Requests to that URL never time out, meaning the spider process gets stuck on it and never proceeds.
The timeout is configured when constructing a new Guzzle client, which is currently done in Arachnid\Crawler::getScrapClient(). It would be really helpful if a timeout was configured there. To do that, all we need to do is change the configuration array which is passed to the Guzzle client constructor:
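Something along these lines (illustrative only; the exact config keys currently used inside getScrapClient() may differ):

<?php
// Illustrative: inside Arachnid\Crawler::getScrapClient(), the Guzzle client
// would be constructed with a timeout added to its config array.
$guzzleClient = new \GuzzleHttp\Client(array(
    'curl' => array(
        CURLOPT_SSL_VERIFYHOST => false,
        CURLOPT_SSL_VERIFYPEER => false,
    ),
    'timeout' => 30,         // proposed: give up on slow responses after 30s
    'connect_timeout' => 30, // proposed: fail fast on dead servers
));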
I think a sensible default would be a 30 second timeout, but it would be great to have that configurable. That could be an additional parameter to the constructor, or an object property which can be changed.
In fact, it might make sense to allow anything to be added to the Guzzle constructor configuration, perhaps by means of a class property or constructor parameter through which we can pass an array of configuration options. This could be useful for configuring other client options, for example HTTP authentication:
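For example (a sketch; 'auth' is a standard Guzzle request option for HTTP basic authentication):

<?php
// Hypothetical: if we could pass arbitrary Guzzle config through the crawler,
// HTTP basic auth would just be another entry in the same array.
$guzzleConfig = array(
    'timeout' => 30,
    'auth'    => array('username', 'password'),
);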
Thoughts? I'd be happy to put together a PR for this, provided we can get some agreement on how this should be configured (class constructor, public property, static property, getter/setter, etc.)