atomicthumbs opened 4 years ago
Thank you for the kind words! I do plan on eventually supporting interactivity via "Log in with Twitter", so users can let the server use the API on their behalf. That's a ton of work, however, and probably won't happen for many months. With that said, a Nitter account system is coming soon-ish that will let you log in and follow Twitter profiles, giving you a timeline equivalent to the one Twitter serves, minus all the sponsored tweets.
Another way to do this already is to create a Twitter list of all the accounts you follow, then access it like this: https://nitter.net/username/lists/list (eg. https://nitter.net/NASA/lists/astronauts). Then enable infinite scrolling in the preferences menu, and it should be a decent experience.
Yes, you can add all the accounts your Twitter account follows to a list and then browse that list through Nitter. But what do you do when you follow over 1000 accounts and don't want to add them all to your list by hand? It would take forever. Here's a working solution!
```
ID 118735934893
ID 2758947594
ID 8943573498
ID 98848584
ID 992382
```

Then find and replace `ID ` (including the trailing space) with nothing, leaving only the numbers:

```
118735934893
2758947594
8943573498
98848584
992382
```

Then, in your editor, find `\n` and replace it with `,`. Now your list should be:

```
118735934893,2758947594,8943573498,98848584,992382
```
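If the editor gymnastics get tedious with 1000+ accounts, the same transformation can be scripted. Here's a minimal sketch, assuming each copied line is `ID ` followed by the numeric account ID (the `ids.txt` filename is hypothetical; use whatever file you saved):

```python
# Sketch: turn lines like "ID 118735934893" into the account_ids line below.
# Assumes each non-blank input line is "ID " followed by the account ID.

def ids_to_array(lines):
    """Strip the 'ID ' prefix, drop blank lines, and join the numbers with commas."""
    # str.removeprefix requires Python 3.9+
    numbers = [line.strip().removeprefix("ID ") for line in lines if line.strip()]
    return "account_ids = [" + ",".join(numbers) + "]"

# e.g. print(ids_to_array(open("ids.txt"))) -- "ids.txt" is a hypothetical filename
print(ids_to_array(["ID 118735934893", "ID 2758947594", "ID 8943573498"]))
# -> account_ids = [118735934893,2758947594,8943573498]
```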
```python
account_ids = [118735934893,2758947594,8943573498,98848584,992382]
```

Make sure the format of the array is correct. Now copy this to the end of the file:
```python
import time

# replace the userId key's value with this: '+ str(account_ids[i]) +'
c = 1
for i in range(len(account_ids)):
    if c % 20 == 0:
        time.sleep(120)
    # data
    # response
    print("progress: " + str(c) + "/" + str(len(account_ids)))
    c = c + 1
```
14. Move the **data** and **response** variables inside the _for loop_, then replace the data variable's userId string with `'+ str(account_ids[i]) +'`. Here's how I did it:
https://upload.vaa.red/tQW29#3247aac1b7ca4d836348867b4964cdd3
15. Now your final _main.py_ should be something like this:
```python
import requests
import time

account_ids = [88226611137,11111118583,133333337,2222222222284,858585858]

cookies = { 'gt': '349782432436573894658734', 'dnt': '1', 'kdt': 'f3Jygj234234JsJjsJJ', 'remember_checked_on': '1', 'eu_cn': '1', 'personalization_id': 'v1_fj3jsjhi3nnvMdksh==', 'guest_id': 'v1%A834u589zFxDA4eq4', 'ct0': '0ce73454589579384', '_twitter_sess': 'xZv8y3PLGe3VrnqJ8ANzFxDA4eqzFxDA4eqhy53%252FtyAToMY3NyZl9p%xZv8y3PLGzFxDA4eqyqBAh%3fw3fr3wfr--f75f6a68bee4184fc599128763239c2a348ac717', 'ads_prefs': 'HrJFCJNDAA=', 'twid': 'u%cYpChGreQ6bxxLbT6jsHs3AS', 'auth_token': '8QYzS94rzc8k29aL3mnvUmBd8QYzS94rzc8k29aL3mnvUmBd', 'lang': 'en', 'external_referer': 'Kpdm854xr7EzFxDA4eqdQ%3D|0|6Q9uFetuzFxDA4eqYh%3D', }

headers = { 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0', 'Accept': '*/*', 'Accept-Language': 'en-US,en;q=0.5', 'content-type': 'application/json', 'authorization': 'Bearer AAAAAAAAAAAAAAAAAAAX6e3m3sNKb43QzqFqtMDzFxDA4eqzFxDA4eqbvZ59gZuzPgdkj', 'x-twitter-auth-type': 'OAuth2Session', 'x-twitter-client-language': 'en', 'x-twitter-active-user': 'yes', 'x-csrf-token': '0ce38rXHQLSt53z7wd7znBZB8XU', 'Origin': 'https://twitter.com', 'DNT': '1', 'Connection': 'keep-alive', 'Referer': 'https://twitter.com/', 'TE': 'Trailers', }

c = 1
for i in range(len(account_ids)):
    if c % 20 == 0:
        time.sleep(120)
    # the "variables" field is itself a JSON string, hence the escaped quotes
    data = '{"variables":"{\\"listId\\":\\"8555552837429234\\",\\"userId\\":\\"' + str(account_ids[i]) + '\\"}","queryId":"kdEK3tmg-zFxDA4eq"}'
    response = requests.post('https://api.twitter.com/graphql/kdEK3tmg-zFxDA4eq/ListAddMember', headers=headers, cookies=cookies, data=data)
    print("progress: " + str(c) + "/" + str(len(account_ids)))
    c = c + 1
```
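The trickiest part of that `data` line is the escaping: the GraphQL `variables` field is a JSON string nested inside JSON. If hand-escaping proves fragile, the same payload can be built with `json.dumps` — a sketch reusing the `listId` and `queryId` values from the script above:

```python
import json

def build_payload(list_id, user_id, query_id):
    """Build the ListAddMember body; 'variables' is itself a JSON-encoded string."""
    variables = json.dumps({"listId": list_id, "userId": str(user_id)})
    return json.dumps({"variables": variables, "queryId": query_id})

body = build_payload("8555552837429234", 118735934893, "kdEK3tmg-zFxDA4eq")
# The nested string round-trips cleanly:
print(json.loads(json.loads(body)["variables"])["userId"])
# -> 118735934893
```

You would then pass `data=build_payload(...)` to `requests.post` in place of the hand-built string.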
16. Install pip and the requests module (if you don't already have them):
`$ pip install requests`
17. Run the python script:
`$ python main.py`
Voilà! Now sit back and let the script add all the accounts you follow to your list. Note that after every 20th request, the script sleeps for 120 seconds, because Twitter will block your requests if you go faster than that. So if you follow 100 accounts, it will take 10 minutes. I follow over 1000 accounts and it successfully added all of them to my list. It's a bit hacky I guess, but hey, it works! Now I can just go to Nitter to browse my Twitter feed, no need to log in or use my Twitter account! :-)
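That timing estimate follows directly from the sleep schedule: one 120-second pause per 20 requests. A quick sketch of the arithmetic (function and parameter names are mine, not from the script):

```python
def estimated_sleep_minutes(n_accounts, batch_size=20, pause_seconds=120):
    """Total time spent sleeping: one pause after each full batch of requests."""
    return (n_accounts // batch_size) * pause_seconds / 60

print(estimated_sleep_minutes(100))   # -> 10.0 (the "100 accounts, 10 minutes" case)
print(estimated_sleep_minutes(1000))  # -> 100.0 minutes of sleeping for 1000 accounts
```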
Is there any update on this? It's been a year.
Thanks for bumping. I recently made an RSS bot using their API; perhaps something similar could be done with this? Nitter would have to integrate with Twitter as an app in order to get the user's permissions.
+1 for this.
I'm also interested in the status of a "login via Twitter" option, to be able to make posts and interact from the alternative frontend. Given what's happening with Twitter right now and its liminal state, this could be a really interesting option for people (unless Twitter breaks first).
month-ish is becoming year-ish my friend! very interested in the login feature
Any updates on this?
> month-ish is becoming year-ish my friend! very interested in the login feature
same
Someone just introduced me to Nitter, after I complained about Twitter sunsetting their legacy web interface and chastising me for using an extension to trick them into serving it anyway.
I used it for a little while, and it appears to be a perfectly designed tool. The UI represents my ideal of a website interface. It's lovely.
Unfortunately, I'm a heavy Twitter user, and privacy is not my first priority (they've already got their hooks in me and my data). I'm completely addicted to the algorithmic timeline. I'm trapped in their Skinner box, and the lever I have to push for my food pellet is getting more and more obnoxious.
Are there any long-term plans to allow me to run my own Nitter instance as a fully functional interposer between Twitter and my web browser? I could see it working through screen scraping; pretend to be my web browser, logged in as me; peel the tweets out of their garbage interface and serve them to me cleanly. They'd never know.
Not an essential thing, by any means, especially if adding the option to do it would be incompatible with Nitter's core tenets. Just something that would improve my online life immensely. I use an old Thinkpad X220, browse with lots of tabs, and Twitter makes my computer get hot.