elliotgao2 / gain

Web crawling framework based on asyncio.
GNU General Public License v3.0

Limit the interval between two requests. #29

Closed elliotgao2 closed 6 years ago

elliotgao2 commented 7 years ago
class MySpider(Spider):
    interval = 5  # seconds
    headers = {'User-Agent': 'Google Spider'}
    start_url = 'https://blog.scrapinghub.com/'
    parsers = [Parser(r'https://blog.scrapinghub.com/page/\d+/'),
               Parser(r'https://blog.scrapinghub.com/\d{4}/\d{2}/\d{2}/[a-z0-9\-]+/', Post)]

Then each request should wait 5 seconds after the previous request, and the concurrency setting becomes irrelevant.

wisecsj commented 7 years ago

Add time.sleep(interval) at the end of async def fetch :smirk:

elliotgao2 commented 7 years ago

@Jie-OY Nice. But does this work well?

wisecsj commented 7 years ago

(Sorry for my late reply.) No... it doesn't work as expected.
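The likely culprit: time.sleep blocks the whole event loop rather than suspending just the current task, so every coroutine stalls while one sleeps. A minimal sketch (plain asyncio, not gain's code) showing the effect:

```python
import asyncio
import time

async def blocking_fetch(i):
    # time.sleep blocks the whole event loop: while one task sleeps,
    # no other task can run, so the "concurrent" fetches serialize.
    time.sleep(0.1)
    return i

async def main():
    start = time.monotonic()
    await asyncio.gather(*(blocking_fetch(i) for i in range(5)))
    return time.monotonic() - start

elapsed = asyncio.run(main())
print(f"{elapsed:.2f}s elapsed for 5 tasks")  # ~0.50s: fully serialized
```

Five tasks with a 0.1s blocking sleep take about 0.5s in total: the sleeps serialize everything else the loop was doing as well.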

rainfd commented 7 years ago

Why not asyncio.sleep? If that doesn't work either, what's the reason?
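For contrast, asyncio.sleep suspends only the awaiting task while the event loop keeps running the others. A minimal sketch (plain asyncio, not gain's code) of what that means for concurrent tasks:

```python
import asyncio
import time

async def polite_fetch(i):
    # asyncio.sleep suspends only this task; the event loop keeps
    # running the others, so all five delays overlap.
    await asyncio.sleep(0.1)
    return i

async def main():
    start = time.monotonic()
    await asyncio.gather(*(polite_fetch(i) for i in range(5)))
    return time.monotonic() - start

elapsed = asyncio.run(main())
print(f"{elapsed:.2f}s elapsed for 5 tasks")  # ~0.10s: the sleeps overlap
```

All five delays overlap, which also hints that a bare asyncio.sleep in fetch cannot, by itself, space requests apart.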

wisecsj commented 7 years ago

@rainfd You inspired me!

As for why, you can test with the following method, if you agree with me. :smile:

@gaojiuli @rainfd I think that before solving the problem, we should first have a way to test whether a solution is correct.

I used the wrong way to test before: adding a timestamp when writing the data stream to file, as shown below:

await f.write(str(datetime.datetime.now())+' '+self.results['title']+'\n')

And just now I found that it's obviously wrong.

Now, I think the right way to test is to print a timestamp in async def fetch. The excerpt follows (assuming the interval is 1s):

async def fetch(url, spider, session, semaphore):
    with (await semaphore):
        try:
            if callable(spider.headers):
                headers = spider.headers()
            else:
                headers = spider.headers

            time.sleep(1)          # inserted for the test: block for the 1s interval
            print(datetime.now())  # inserted for the test: log when the request starts
            async with session.get(url, headers=headers) as response:

What do you think of this way of testing? The results are as expected (but there is still a problem: the program will not exit in the end).

elliotgao2 commented 7 years ago

@rainfd @Jie-OY asyncio.sleep() can limit the interval between the start times of two requests, but not the interval between the end of the first request and the start of the second, which is not our purpose. A request shouldn't be sent before the previous request has finished if we set the interval.
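That policy — no new request until `interval` seconds after the previous one finished — can be sketched with an asyncio.Lock held across both the request and the sleep. IntervalLimiter and fake_request are hypothetical names for illustration, not part of gain:

```python
import asyncio
import time

class IntervalLimiter:
    """Hypothetical helper (not gain's API): the lock is held across
    the request *and* the sleep, so the next request cannot start
    until `interval` seconds after the previous one finished."""

    def __init__(self, interval):
        self.interval = interval
        self._lock = asyncio.Lock()

    async def run(self, coro):
        async with self._lock:
            result = await coro                  # the request itself
            await asyncio.sleep(self.interval)   # gap after it ends
            return result

async def fake_request(i):
    await asyncio.sleep(0.05)  # stand-in for session.get(...)
    return i

async def main():
    limiter = IntervalLimiter(0.1)
    start = time.monotonic()
    results = await asyncio.gather(*(limiter.run(fake_request(i)) for i in range(3)))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(sorted(results), f"{elapsed:.2f}s")  # about 3 × (0.05 + 0.1) = 0.45s
```

Because the lock is held while sleeping, requests fully serialize: concurrency is traded away for the guaranteed gap after each request ends.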

wisecsj commented 7 years ago

@gaojiuli

I have a different opinion. Whether you put asyncio.sleep() before or after async with session.get(url, headers=headers) as response:, neither can limit the interval between the start times of two neighboring requests. And the two results are much the same: several requests are sent at almost the same time, one batch per interval.

As for

"A request shouldn't be sent before the previous request finished if we set the interval" and "limit the interval between the end time of the first request and the start time of the second request"

I know that my solution can only limit the interval between the start times of two requests, but why not that?

I thought setting the interval was just meant to limit the interval between the start times of two requests... :stuck_out_tongue:
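That reading — spacing out only the start times while letting the requests themselves overlap — could be sketched with a shared "next allowed start" timestamp guarded by a lock. StartIntervalLimiter and fake_request are hypothetical names for illustration, not part of gain:

```python
import asyncio
import time

class StartIntervalLimiter:
    """Hypothetical sketch (not gain's API): space out only the *start*
    times by `interval` seconds; the requests themselves may overlap."""

    def __init__(self, interval):
        self.interval = interval
        self._lock = asyncio.Lock()
        self._next_start = 0.0

    async def wait(self):
        async with self._lock:
            now = time.monotonic()
            delay = max(0.0, self._next_start - now)
            self._next_start = max(now, self._next_start) + self.interval
        await asyncio.sleep(delay)  # sleep outside the lock, so waiters overlap

async def fake_request(limiter, i, starts):
    await limiter.wait()
    starts.append(time.monotonic())  # record when this request starts
    await asyncio.sleep(0.3)         # stand-in for session.get(...)
    return i

async def main():
    limiter = StartIntervalLimiter(0.1)
    starts = []
    t0 = time.monotonic()
    await asyncio.gather(*(fake_request(limiter, i, starts) for i in range(3)))
    elapsed = time.monotonic() - t0
    gaps = [b - a for a, b in zip(starts, starts[1:])]
    return gaps, elapsed

gaps, elapsed = asyncio.run(main())
print([round(g, 2) for g in gaps], f"{elapsed:.2f}s")  # starts ~0.1s apart; total well under 3 × 0.3s
```

Starts land about `interval` seconds apart while the simulated requests run concurrently, so the concurrency setting stays meaningful under this interpretation.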