DiameterEffect closed this issue 3 years ago
1) install the snscrape and tweepy libraries:
pip3 install snscrape
pip install tweepy
2) download this repository and run main.py
This worked! But when I put the keyword I want in "keyword_elections" and run main, it just gives me "[]" and an Excel file to open. Any way to fix this? Or am I doing something wrong?
Do you have those files in folder 20191126_20191130 (as long as you are using same keywords and time interval as me)?
Yes, I see them
So if the tweetsids... file contains all the ids, then snscrape is working correctly. You should do some debugging on the twitter_api_caller method... can you copy your output here?
twitter_api_caller
Where do I find this twitter_api_caller? (I googled it and found nothing)
You can find that method in the snsscrape_tweepy.py file. You should check whether it returns an error or whether api.statuses_lookup(batch, tweet_mode="extended") correctly returns a list of tweets. If you don't give more detail, I can't tell you what's wrong.
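The debugging suggested above could look roughly like this. This is only a sketch, not the repo's actual code: the names `api`, `tweet_ids`, `chunk` and `debug_lookup` are illustrative, and it assumes `api` is an already-authenticated tweepy.API instance. The batch size of 100 is Twitter's documented limit for the statuses/lookup endpoint.

```python
# Illustrative sketch of debugging the lookup step inside twitter_api_caller.
# `api` (an authenticated tweepy.API) and `tweet_ids` (ids read from the
# tweets ids file) are assumptions, not names from the repository.

def chunk(ids, size=100):
    """Split a list of tweet ids into batches (statuses/lookup accepts at most 100)."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

def debug_lookup(api, tweet_ids):
    for batch in chunk(tweet_ids):
        try:
            tweets = api.statuses_lookup(batch, tweet_mode="extended")
            print(f"batch of {len(batch)} ids -> {len(tweets)} tweets returned")
        except Exception as err:  # e.g. tweepy auth or rate-limit errors
            print("lookup failed:", err)
```

If `debug_lookup` prints an authentication error, the OAuth keys are the problem; if it prints fewer tweets than ids, some tweets were deleted or are protected.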
Not sure if im suppose to run(but I did here in this picutre) the snsscrape_tweepy.py, but i did anyway and it didnt give me no error. The problem is when i delete the keywords,Trump,biden,four_seasons_total_landscaping, and kamala_harris, and enter something like this, it gives me this error.It never did this before, but now it gives me empty text files like these here as shown in this picture, but when i keep the Original keywords(Trump,biden,four_seasons_total_landscaping, and kamala_harris,)the text files are full.Is this snscrape program suppose to acrhive a persons twitter profile(with everything) and give you a csv file? Or does it just scrape comments from those keywords you enter?
you have to follow these steps (you got that error because you skipped step 2):
1) install snscrape and tweepy libraries:
pip3 install snscrape
pip install tweepy
2) create a twitter app, get OAuth consumer and token and copy them into "twitter_auth_data.json" (https://iag.me/socialmedia/how-to-create-a-twitter-app-in-8-easy-steps/):
"consumer_key": "",
"consumer_secret": "",
"access_token":"",
"access_token_secret":""
3) Change values as you like in main.py and then run it.
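Step 2 above boils down to reading the four OAuth values from "twitter_auth_data.json" and handing them to tweepy. A minimal sketch, assuming the json file has exactly the four keys quoted above (the helper names `load_auth` and `make_api` are illustrative, not from the repo):

```python
import json

def load_auth(path="twitter_auth_data.json"):
    """Read the four OAuth values from the json file described in step 2."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def make_api(creds):
    """Build an authenticated tweepy.API from the loaded credentials."""
    import tweepy  # imported lazily so load_auth works without tweepy installed
    auth = tweepy.OAuthHandler(creds["consumer_key"], creds["consumer_secret"])
    auth.set_access_token(creds["access_token"], creds["access_token_secret"])
    return tweepy.API(auth, wait_on_rate_limit=True)
```

Until the developer application is approved, `api.statuses_lookup` will fail with an authentication error even if the rest of the code is correct.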
Alright, I created a Twitter developer account and now I am waiting for someone to review my application. With these steps you gave me, will I now be able to archive a Twitter user's page?
Yes, you have to use the following commands in main.py and put the users you want to scrape in user_elections.txt:
# load txt file containing a list of users
# keyword_user_search_param = 'user' to scrape an user profile
users_list = open("keyword_lists/user_elections.txt", mode='r', encoding='utf-8').read().splitlines()
fetch_tweets('user', users_list, since, until, lang, batch_size, save_dir, csv_name)
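Under the hood, the 'user' mode relies on snscrape's Python module to walk a profile's timeline. A sketch of that idea, assuming snscrape's TwitterUserScraper API (the helpers `interval_dirname` and `scrape_user_ids` are hypothetical, not functions from this repo; `interval_dirname` just mirrors the folder-naming scheme seen earlier, e.g. 20191126_20191130):

```python
def interval_dirname(since, until):
    """Build a save-folder name like the repo's, e.g. '20191126_20191130'."""
    return since.replace("-", "") + "_" + until.replace("-", "")

def scrape_user_ids(username, limit=100):
    """Yield up to `limit` tweet ids from one user's profile (hypothetical helper)."""
    import snscrape.modules.twitter as sntwitter  # lazy import: only needed when scraping
    for i, tweet in enumerate(sntwitter.TwitterUserScraper(username).get_items()):
        if i >= limit:
            break
        yield tweet.id
```

So to answer the earlier question: keyword mode scrapes tweets matching the keywords, while user mode archives one profile's own tweets; neither downloads "everything" about an account.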
I tried typing
pip3 install snscrape_twitter
but it gives me an error. How do we install this?
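That error is expected: the package on PyPI is named snscrape, not snscrape_twitter, so the install commands from the steps earlier in the thread should work as written:

```shell
pip3 install snscrape
pip install tweepy
```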