Closed aston668334 closed 1 year ago
Twitter has probably blocked you temporarily because you were sending too many requests. Try again after 15 minutes. (This also happened to me today, including on twitter.com itself.)
Now, with the new version of twitter-scraper, I can only scrape a few times successfully before I get:
panic: response status 403 Forbidden: {"code":353,"message":"This request requires a matching csrf cookie and header."}
After that error I then get the following, and I have to wait at least 15 minutes for it to go away:
panic: response status 429 Too Many Requests: {"errors":[{"message":"Rate limit exceeded","code":88}]}
One likely problem is that I'm logging in before each search; a way to maintain the login state even when the code is not running might help. I also wondered whether the login function itself sends too many requests while logging in, though I don't think that's the case.
do not reinitialize the scraper object between requests - it stores all received cookies after authorization
For my use case I have to perform a search, analyze the results, and then run another search with different parameters that I change manually in the code. I don't see a way to keep the scraper object's state in this scenario.
I agree that if the code performed many requests automatically in a single execution I wouldn't have to reinitialize the object, but that's not my case.
I haven't seen your code, but within a single program run it's always possible to keep the state around, in a global scraper variable for example.
If I call scraper.Login, scraper.GetTweets then returns a 429 error. I tried lowering the 50-tweet limit to 1 and adding scraper.WithDelay(5), but the problem is still the same.