tasos-py / Search-Engines-Scraper

Search google, bing, yahoo, and other search engines with python
MIT License
538 stars 145 forks

ERROR HTTP 429 #10

Open noman00910 opened 4 years ago

noman00910 commented 4 years ago

Hi, your code is working great, however, I am running into a Too Many Requests (HTTP 429) error after 3 or 4 queries. I changed proxies as well, but it still gives the same error. Is it about user agents?

tasos-py commented 4 years ago

Thanks for bringing this to my attention. I don't know if changing the User-Agent will be enough, but you could give it a try. Which search engine produces this error? If I remember correctly, Google is the most "sensitive" one, and usually increasing the delay (with engine._delay) helps. Clearing cookies may help too. This library uses a requests.Session() object for HTTP requests, which persists cookies and other HTTP parameters. You could try setting a new client with engine._http_client = http_client.HttpClient(proxy=proxy). If that doesn't help, then it's possible the search engine is detecting us through URL or POST data parameters; unless it requires JS code execution, we may be able to solve that by adjusting the engine._first_page() and engine._next_page() methods.
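
For reference, a minimal sketch of those two suggestions (a longer delay and a fresh HttpClient between queries) might look like the following. The import paths, the Google engine name, and the assumption that _delay takes a (min, max) range of seconds are guesses about this library's internals and may differ between versions:

```python
# Sketch of the suggestions above: a longer delay between page fetches
# and a fresh HttpClient (new cookies) before the next query.
# Import paths and the _delay format are assumptions about this library's
# internals and may vary between versions.
from search_engines import Google
from search_engines import http_client

proxy = None  # or e.g. 'http://user:pass@host:port'

engine = Google(proxy=proxy)
engine._delay = (5, 10)  # assumed (min, max) seconds waited between page fetches

results = engine.search('first query', pages=2)

# Start the next query with a clean session so no cookies carry over
engine._http_client = http_client.HttpClient(proxy=proxy)
results = engine.search('second query', pages=2)
```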

noman00910 commented 4 years ago

Yes, I am using it with Google. Thanks for the guidance, I will try to make changes according to your instructions. Let's see where it goes.

nershman commented 3 years ago

I've noticed the ban usually occurs after 4 or 5 searches in a row which returned no results.

ljhOfGithub commented 2 years ago

I'm running into this problem too. How should I deal with it? I tried time.sleep(1) after search() and it works briefly. Maybe an exception could be raised in engine.py so the search can be restarted? I wish such an exception were added.
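
A rough sketch of that sleep-and-retry idea is below: back off and retry with a fresh engine when a search comes back empty, which (per the comments above) often indicates a ban. The import path and the "empty results means blocked" heuristic are assumptions, not part of this library's documented API:

```python
# Back off and retry with a fresh engine when a search returns nothing.
# The import path and the "empty results == blocked" heuristic are
# assumptions, not guarantees of this library's behavior.
import time
from search_engines import Google

def search_with_backoff(query, pages=1, retries=3, wait=60):
    results = None
    for attempt in range(retries):
        engine = Google()                 # new session (and cookies) per attempt
        results = engine.search(query, pages=pages)
        if results.links():               # got links back, assume we are not blocked
            return results
        time.sleep(wait * (attempt + 1))  # wait longer after each failure
    return results
```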

nershman commented 2 years ago

What happens, generally, is that they detect unusual activity and redirect you to a captcha completion page. If you want to scrape, you will need to write something to detect that and then either complete the captcha manually or forward it to a solving service.
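
A crude illustration of that detection step, using plain requests rather than this library, might look like the sketch below. The marker strings are heuristics based on what Google's block page usually contains, not anything guaranteed:

```python
# Heuristic check for Google's block/captcha ("sorry") page before parsing.
# The marker strings are assumptions about what that page contains.
import requests

BLOCK_MARKERS = ('unusual traffic', 'recaptcha', '/sorry/')

def looks_blocked(html):
    lowered = html.lower()
    return any(marker in lowered for marker in BLOCK_MARKERS)

resp = requests.get(
    'https://www.google.com/search',
    params={'q': 'test query'},
    headers={'User-Agent': 'Mozilla/5.0'},
)
if looks_blocked(resp.text):
    print('Captcha page detected - stop and solve it, or hand off to a solving service')
```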

When I ran into this problem, I had to manually load pages from my script into my browser and save them, then extract the data I needed from the saved HTML. Despite the large number of requests I made, I was able to access the pages. I don't know what information Google tracks that kept it from detecting me when I did the same thing manually...

Depending on the amount of data you are collecting, it might be better to just use a service that scrapes the data for you. There are services out there with more sophisticated algorithms for avoiding detection, some of which even use captcha-solving services.