jinnerbichler / crypto_news

Bot for scraping crypto-related news.
1 star, 1 fork

tweepy error 400 #1

Closed pigslayer12 closed 6 years ago

pigslayer12 commented 6 years ago

Hello, I can't get rid of this error. I have defined the variables properly in the env file (I think) and have played around with a few things, but I can't solve it. I have installed all requirements.

Searching Google for the error code hasn't been very useful, but from what I can find, it's a verification error.

Thanks for any help

jinnerbichler commented 6 years ago

Hi,

can you provide an additional error log?

The following environment variables must be set:

TWITTER_API_KEY=XXXXXXXXXXXXXXXXXX
TWITTER_API_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
TWITTER_ACCESS_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
TWITTER_ACCESS_TOKEN_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

You can obtain them at https://apps.twitter.com/
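Before touching tweepy at all, it can help to confirm that the four variables above are actually visible to the process and free of copy-paste artifacts. A minimal stdlib-only sketch (the `check_twitter_env` helper is illustrative, not part of this repo; the variable names match the list above):

```python
import os

REQUIRED = [
    "TWITTER_API_KEY",
    "TWITTER_API_SECRET",
    "TWITTER_ACCESS_TOKEN",
    "TWITTER_ACCESS_TOKEN_SECRET",
]


def check_twitter_env(env=os.environ):
    """Return a list of problems found in the Twitter credential variables."""
    problems = []
    for name in REQUIRED:
        value = env.get(name)
        if value is None:
            problems.append("%s is not set" % name)
        elif value != value.strip():
            # Trailing whitespace copied from the developer portal is a
            # common cause of signature/verification failures.
            problems.append("%s has leading/trailing whitespace" % name)
        elif value.startswith(("'", '"')):
            problems.append("%s looks quoted; remove the quotes" % name)
    return problems


if __name__ == "__main__":
    for problem in check_twitter_env():
        print(problem)
```

If this prints nothing inside the same environment the scraper runs in, the variables themselves are at least set and clean.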

pigslayer12 commented 6 years ago

Thanks for the help. I have quadruple-checked my environment variables and they are correct. I even regenerated new keys. Just to make sure: do I include the owner ID and the "-" in the access token? I have tried it both ways, but it didn't change anything.

I have not customized the Twitter accounts to scrape, but I'm pretty confident that doesn't matter, as I always end up with a tweepy error 400 or 401. I'm getting 401 again; both should be verification problems with the API keys. Still, I think something else is triggering the error, as I'm basically certain that my API keys/tokens are correct.

Anyway, here is the log from my most recent attempt:

Building scraper
Step 1/4 : FROM python:3.5-onbuild
Executing 3 build triggers...
Step 1/1 : COPY requirements.txt /usr/src/app/
 ---> Using cache
Step 1/1 : RUN pip install --no-cache-dir -r requirements.txt
 ---> Using cache
Step 1/1 : COPY . /usr/src/app
 ---> Using cache
 ---> a90600f678c2
Step 2/4 : MAINTAINER Johannes Innerbichler j.innerbichler@gmail.com
 ---> Using cache
 ---> c418ca0f15e1
Step 3/4 : ENV PYTHONPATH .
 ---> Using cache
 ---> 47054c52bbf9
Step 4/4 : ENTRYPOINT python manage.py
 ---> Using cache
 ---> 5fbc0c981cd8
Successfully built 5fbc0c981cd8
Successfully tagged cryptonewsmaster_scraper:latest
cryptonewsmaster_db_1 is up-to-date
cryptonewsmaster_scraper_1 is up-to-date
Attaching to cryptonewsmaster_db_1, cryptonewsmaster_scraper_1
db_1      | LOG: database system was shut down at 2017-09-01 20:24:38 UTC
db_1      | LOG: MultiXact member wraparound protections are now enabled
db_1      | LOG: database system is ready to accept connections
db_1      | LOG: autovacuum launcher started
db_1      | LOG: received smart shutdown request
db_1      | LOG: autovacuum launcher shutting down
db_1      | LOG: shutting down
db_1      | LOG: database system is shut down
db_1      | LOG: database system was shut down at 2017-09-02 07:35:05 UTC
db_1      | LOG: MultiXact member wraparound protections are now enabled
db_1      | LOG: database system is ready to accept connections
db_1      | LOG: autovacuum launcher started
scraper_1 | Apply database migrations
scraper_1 | Operations to perform:
scraper_1 |   Apply all migrations: admin, auth, contenttypes, sessions
scraper_1 | Running migrations:
scraper_1 |   No migrations to apply.
scraper_1 |   Your models have changes that are not yet reflected in a migration, and so won't be applied.
scraper_1 |   Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
scraper_1 | Starting Django
scraper_1 | INFO 2017-09-02 21:23:08,813 news_scraper.management.commands.run_scraper Loading scraper news_scraper.scraper.twitter
scraper_1 | INFO 2017-09-02 21:23:08,912 news_scraper.scraper.twitter initialised
[... the "Loading scraper" / "initialised" pair repeats five times in total ...]
scraper_1 | INFO 2017-09-02 21:23:08,920 news_scraper.scraper.twitter start scraping
scraper_1 | INFO 2017-09-02 21:23:08,922 news_scraper.scraper.twitter scraping @iotatoken
scraper_1 | ERROR 2017-09-02 21:23:09,157 news_scraper.management.commands.run_scraper Error while scraping
scraper_1 | Traceback (most recent call last):
scraper_1 |   File "/usr/src/app/news_scraper/management/commands/run_scraper.py", line 75, in perform_scraping
scraper_1 |     scraper.scrape()
scraper_1 |   File "/usr/src/app/news_scraper/scraper/twitter.py", line 40, in scrape
scraper_1 |     for tweet in cursor.items(limit=50):
scraper_1 |   File "/usr/local/lib/python3.5/site-packages/tweepy/cursor.py", line 49, in next
scraper_1 |     return self.next()
scraper_1 |   File "/usr/local/lib/python3.5/site-packages/tweepy/cursor.py", line 197, in next
scraper_1 |     self.current_page = self.page_iterator.next()
scraper_1 |   File "/usr/local/lib/python3.5/site-packages/tweepy/cursor.py", line 108, in next
scraper_1 |     data = self.method(max_id=self.max_id, parser=RawParser(), *self.args, **self.kargs)
scraper_1 |   File "/usr/local/lib/python3.5/site-packages/tweepy/binder.py", line 245, in _call
scraper_1 |     return method.execute()
scraper_1 |   File "/usr/local/lib/python3.5/site-packages/tweepy/binder.py", line 229, in execute
scraper_1 |     raise TweepError(error_msg, resp, api_code=api_error_code)
scraper_1 | tweepy.error.TweepError: Twitter error response: status code = 401
[... the same traceback and 401 repeat for @bitcoincash, @eth_classic, @modum_io and @LitecoinProject ...]
scraper_1 | INFO 2017-09-02 21:23:10,304 news_scraper.management.commands.run_scraper Scheduling next scraping at 2017-09-02 21:53:10.304440 (UTC)
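For what it's worth, a 401 from the Twitter API under OAuth 1.0a generally means the request signature was rejected, which can happen for reasons other than wrong keys: stray whitespace or quote characters picked up from the env file, or a badly skewed clock inside the container. Here is a small stdlib-only sketch that lints an env-file's contents for those pitfalls (the `lint_env_file` helper is hypothetical, not part of this repo; the quoting rule assumes docker-compose `env_file` semantics, where quotes are kept as part of the value):

```python
def lint_env_file(text):
    """Flag lines in a docker-compose style env file that commonly break OAuth.

    Returns a list of (line_number, message) tuples.
    """
    issues = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        # Skip blank lines and comments.
        if not line.strip() or line.lstrip().startswith("#"):
            continue
        if "=" not in line:
            issues.append((lineno, "no '=' found"))
            continue
        key, _, value = line.partition("=")
        if key != key.strip():
            issues.append((lineno, "whitespace around variable name"))
        if value != value.strip():
            issues.append((lineno, "trailing/leading whitespace in value"))
        if value.strip().startswith(("'", '"')):
            # docker-compose env_file values may keep quotes literally,
            # which corrupts the key and breaks the OAuth signature.
            issues.append((lineno, "quoted value; quotes may become part of the value"))
    return issues
```

Running this over the env file (e.g. `print(lint_env_file(open(".env").read()))`) would surface the invisible problems that survive a visual "quadruple check".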

jinnerbichler commented 6 years ago

Working on my machine