cedoard / snscrape_twitter

Using the snscrape and tweepy libraries to scrape an unlimited number of tweets
26 stars 16 forks

No guide on how to install this? #2

Closed DiameterEffect closed 3 years ago

DiameterEffect commented 3 years ago

I tried typing pip3 install snscrape_twitter but it gives me an error. How do we install this?

cedoard commented 3 years ago

1) install the snscrape and tweepy libraries:

pip3 install snscrape
pip3 install tweepy

2) download this repository and run main

DiameterEffect commented 3 years ago
  1. install the snscrape and tweepy libraries: pip3 install snscrape; pip3 install tweepy
  2. download this repository and run main

This worked! But when I put the keyword I want in "keyword_elections" and run main, it just gives me "[]" and an Excel file. Is there any way to fix this, or am I doing something wrong?

cedoard commented 3 years ago

[screenshot] Do you have those files in the folder 20191126_20191130 (as long as you are using the same keywords and time interval as me)?

DiameterEffect commented 3 years ago

  [screenshot] Do you have those files in the folder 20191126_20191130 (as long as you are using the same keywords and time interval as me)?

Yes, I see them.

cedoard commented 3 years ago

So if the tweetsids... file contains all the ids, then snscrape is working correctly. You should do some debugging on the twitter_api_caller method... can you copy your output here?

DiameterEffect commented 3 years ago

twitter_api_caller

Where do I find this twitter_api_caller? (I googled it and found nothing)

cedoard commented 3 years ago

You can find that method in the snsscrape_tweepy.py file. You should check whether it returns an error, or whether api.statuses_lookup(batch, tweet_mode="extended") correctly returns a list of tweets. If you don't give more detail, I can't tell you what's wrong.
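One way to check that step in isolation: Twitter's statuses/lookup endpoint accepts at most 100 ids per call, so the scraped ids are typically split into batches before being passed to api.statuses_lookup. A minimal sketch of that batching, with the tweepy call left commented out since it needs authenticated credentials (the function and variable names here are illustrative, not taken from the repo):

```python
# Sketch: split tweet ids into batches of 100, the documented
# per-call limit of Twitter's statuses/lookup endpoint.
def make_batches(tweet_ids, batch_size=100):
    return [tweet_ids[i:i + batch_size]
            for i in range(0, len(tweet_ids), batch_size)]

ids = list(range(250))  # stand-in for real tweet ids
batches = make_batches(ids)
print(len(batches), [len(b) for b in batches])  # 3 [100, 100, 50]

# With an authenticated tweepy.API object you would then do, per batch:
# tweets = api.statuses_lookup(batch, tweet_mode="extended")
# and check that each call returns a non-empty list for valid, public ids.
```

If a batch of valid ids comes back empty here, the problem is on the API side (credentials, rate limits), not in snscrape.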

DiameterEffect commented 3 years ago

Not sure if I'm supposed to run snsscrape_tweepy.py (but I did, as shown in this picture), and it didn't give me any error. The problem is that when I delete the keywords Trump, biden, four_seasons_total_landscaping, and kamala_harris and enter something else, it gives me this error. It never did this before, but now it gives me empty text files like the ones shown in this picture, whereas when I keep the original keywords (Trump, biden, four_seasons_total_landscaping, kamala_harris) the text files are full. Is this snscrape program supposed to archive a person's Twitter profile (with everything) and give you a CSV file? Or does it just scrape tweets matching the keywords you enter?

cedoard commented 3 years ago

You have to follow these steps (you got that error because you skipped step 2):

1) install the snscrape and tweepy libraries:

pip3 install snscrape
pip3 install tweepy

2) create a Twitter app, get the OAuth consumer and token values and copy them into "twitter_auth_data.json" (https://iag.me/socialmedia/how-to-create-a-twitter-app-in-8-easy-steps/):

{
    "consumer_key": "",
    "consumer_secret": "",
    "access_token": "",
    "access_token_secret": ""
}

3) Change values as you like in main.py and then run it.

DiameterEffect commented 3 years ago

you have to follow those steps (you got that error because you skipped steps 2):

  1. install the snscrape and tweepy libraries:
     pip3 install snscrape
     pip3 install tweepy
  2. create a twitter app, get OAuth consumer and token and copy them into "twitter_auth_data.json" (https://iag.me/socialmedia/how-to-create-a-twitter-app-in-8-easy-steps/):
     "consumer_key": "",
     "consumer_secret": "",
     "access_token": "",
     "access_token_secret": ""
  3. Change values as you like in main.py and then run it.

Alright, I created a Twitter developer account and now I am waiting for someone to review my application. With these steps you gave me, will I then be able to archive a Twitter user's page?

cedoard commented 3 years ago

Yes, you have to use the following commands in main.py and put the users you want to scrape in user_elections.txt:

# load txt file containing a list of users
# keyword_user_search_param = 'user' to scrape an user profile
users_list = open("keyword_lists/user_elections.txt", mode='r', encoding='utf-8').read().splitlines()
fetch_tweets('user', users_list, since, until, lang, batch_size, save_dir, csv_name)