im-n1 opened 5 years ago
1.2.0 gives me tons of issues also, thanks for the tip to downgrade. Everything seems to work perfectly fine on 1.1.0
I'm using version 1.1.0 but I still get an error on the created URL. Is this problem related to a new Twitter update?
No, I have some code running perfectly right now. What does your tweet query look like?
The query I used is '#BTS OR #방탄' and here is the code:
```python
import datetime
from twitterscraper import query_tweets

if __name__ == '__main__':
    begin = datetime.datetime(2019, 7, 2)
    end = datetime.datetime(2019, 7, 3)
    list_of_tweets = query_tweets("#BTS OR #방탄", begindate=begin, enddate=end)

    # print the retrieved tweets to the screen:
    for tweet in list_of_tweets:
        print(tweet)
```
I printed out the URL, but when I click the link it doesn't find anything.
Oh, I was mistaken: the URL is correct, but it still can't scrape the tweets.
You have to do `print(tweet.text)` to see the actual tweet.
Hmm... I expected it to just print the tweet objects, but it can't, like this:
It's not getting any tweets for some reason. Maybe try a different query or something. I literally have a script running right now which is collecting tweets, so I'm not really sure what's causing the issue.
I changed the query to 'Trump OR Clinton' as in the README of this repo, but I only scraped 10 tweets.
I think that since Twitter updated its webpage, we may have to add some action to scroll down in order to scrape tweets.
It should already be scrolling down
Yes, you're right, it was my mistake. After changing my IP using a VPN it works well. Thank you for your attention to this problem.
Yeah, I'm guessing that Twitter blocks your IP if you make too many requests, because I'm not getting any tweets anymore. I'll try the VPN.
storm, I have the same error. I think it is the IP.
Try a VPN and slow down your requests.
Do you know how many requests are safe to avoid an IP block, or a way to check whether a response means the IP has been blocked? My goal is to collect all tweets from 2006 to now.
All tweets? Or all tweets matching a particular search? I usually hit it with `poolsize=2` and a 2-second delay between each request if I'm collecting all tweets from 2006 matching a particular search. That run took around 40 hours to complete, so I ran it on an AWS EC2 instance instead of my own computer. Also, you should probably email me instead of commenting in this thread.
> 2 second delay between each request
How do I add throttling to twitterscraper? From what I know, it creates and spawns all the requests automatically.
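As far as I know, twitterscraper doesn't expose a delay option, so one workaround is to split the date range into windows yourself and sleep between calls. A minimal sketch under that assumption; the `date_chunks` helper is plain stdlib, and the `query_tweets` loop at the bottom is hypothetical usage based on the examples in this thread:

```python
import datetime
import time

def date_chunks(begin, end, days):
    """Split [begin, end) into consecutive (start, stop) windows of `days` days."""
    chunks = []
    start = begin
    while start < end:
        stop = min(start + datetime.timedelta(days=days), end)
        chunks.append((start, stop))
        start = stop
    return chunks

# Hypothetical usage with twitterscraper (not run here):
# from twitterscraper import query_tweets
# all_tweets = []
# for start, stop in date_chunks(datetime.date(2019, 7, 1), datetime.date(2019, 8, 1), 7):
#     all_tweets += query_tweets("#BTS OR #방탄", begindate=start, enddate=stop, poolsize=2)
#     time.sleep(2)  # throttle between batches to reduce the chance of an IP block
```

Smaller windows plus a sleep between batches effectively caps the request rate, even though twitterscraper still parallelizes within each window.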
Thank you all for the feedback. Version 1.2.0 has indeed been a bad release, with a typo that was breaking twitterscraper; in addition, fake_useragent was also causing some problems.
Both of these problems have been solved in 1.3.0, and this version also uses proxy servers while making requests.
Please let me know if version 1.3.0 solves your problems.
I just uninstalled 1.2.0 and installed 1.1.0 (with Python 3.7) yesterday, and so far it is working fine for me for searching for specific words. I'm wondering about advanced search, though. I know it used to be possible to simply copy and paste part of the link from a Twitter advanced search into the terminal to get twitterscraper to run that search, but it seems this no longer works?
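Twitter's advanced-search form ultimately just builds a plain query string out of search operators (`from:`, `since:`, `until:`, quoted phrases), so, assuming twitterscraper still passes the query string through to Twitter unchanged, you can hand that string to `query_tweets` directly instead of copying URL fragments. A sketch; `advanced_query` is a helper I made up for illustration:

```python
def advanced_query(all_words="", exact_phrase="", from_user="", since="", until=""):
    """Assemble a Twitter search query string from advanced-search-style fields."""
    parts = []
    if all_words:
        parts.append(all_words)
    if exact_phrase:
        parts.append('"%s"' % exact_phrase)   # exact phrases are quoted
    if from_user:
        parts.append("from:" + from_user)     # tweets by a specific account
    if since:
        parts.append("since:" + since)        # YYYY-MM-DD lower bound
    if until:
        parts.append("until:" + until)        # YYYY-MM-DD upper bound
    return " ".join(parts)

q = advanced_query(all_words="bitcoin", from_user="jack",
                   since="2019-01-01", until="2019-02-01")
print(q)  # bitcoin from:jack since:2019-01-01 until:2019-02-01

# Hypothetical usage (not run here):
# from twitterscraper import query_tweets
# tweets = query_tweets(q, limit=100)
```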
I tried every version of twitterscraper but the issue remains the same... I even used a VPN, but zero tweets are scraped and it crashes. It was working properly two days ago. Any solution?
```python
from twitterscraper import query_tweets
import datetime as dt
import pandas as pd

begin_date = dt.date(2019, 3, 4)
end_date = dt.date(2019, 8, 16)
limit = 500
lang = 'english'

tweets = query_tweets('@HamidMirPAK', begindate=begin_date, enddate=end_date,
                      limit=limit, lang=lang)
df = pd.DataFrame(t.__dict__ for t in tweets)
df1 = df[['text']]
export_csv = df1.to_csv(r'C:/Users/usama/Desktop/123.csv', index=None, header=True)
```
I call twitter scraping like this:

where:

- `kw_list` is just a list of one or more keywords (I use "#dash")
- `begindate` is a `datetime.date` instance
- `lang` is `en`
- `limit` is `None`

I'm getting this massive exception: https://paste.ofcode.org/YzhKbecE6hWVaSyBaaXpwu When I downgrade to 1.1.0 everything is OK.
P.S. This case is from the https://github.com/im-n1/karpet library tests. Feel free to clone the repo and run `pytest -v`.