my8100 / scrapydweb

Web app for Scrapyd cluster management, Scrapy log analysis & visualization, Auto packaging, Timer tasks, Monitor & Alert, and Mobile UI. DEMO :point_right: https://github.com/my8100/files

[BUG] Unable to fire crawling via redis on Heroku #135

Open · nittolese opened this issue 4 years ago

nittolese commented 4 years ago

Describe the bug I am struggling to make it work as described at https://github.com/my8100/scrapyd-cluster-on-heroku#deploy-and-run-distributed-spiders. Whenever I run r.lpush('mycrawler:start_urls', 'http://books.toscrape.com', 'http://quotes.toscrape.com'), no job is started.
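For reference, a minimal redis-py sketch of the push described above (the host, port, and password are placeholders, not the actual credentials):

```python
import redis

# Placeholder connection details; the real values are the same REDIS_HOST,
# REDIS_PORT, and REDIS_PASSWORD configured for the Heroku apps.
r = redis.Redis(
    host='your-redis-host.example.com',
    port=6379,
    password='your-redis-password',
)

# Push the seed URLs onto the key the demo spider is supposed to listen on.
r.lpush('mycrawler:start_urls',
        'http://books.toscrape.com',
        'http://quotes.toscrape.com')
```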

To Reproduce Steps to reproduce the behavior:

  1. Followed the steps in https://github.com/my8100/scrapyd-cluster-on-heroku#deploy-and-run-distributed-spiders on a new Heroku account (Deploy Heroku apps in the browser)
  2. Tried to fire the mycrawler_redis spider from scrapy_redis_demo_project.zip as described in the "Deploy and run distributed spiders" paragraph

Expected behavior I expect that when I use the lpush method on 'mycrawler:start_urls', a crawl job is fired on Heroku.
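This expectation matches how scrapy-redis spiders are typically wired to a Redis key; a minimal sketch of such a spider follows (the actual mycrawler_redis spider shipped in scrapy_redis_demo_project.zip may differ in details):

```python
from scrapy_redis.spiders import RedisSpider


class MyCrawlerRedisSpider(RedisSpider):
    """Sketch of a scrapy-redis spider that waits on a Redis list."""
    name = 'mycrawler_redis'
    # The spider blocks on this list; every URL pushed with LPUSH/RPUSH
    # becomes a start request.
    redis_key = 'mycrawler:start_urls'

    def parse(self, response):
        yield {'url': response.url, 'title': response.css('title::text').get()}
```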

Screenshots [screenshot attached]

Environment (please complete the following information):

Additional context I have sent you an email with the same title as this ticket, with a video attached.

my8100 commented 4 years ago
  1. Make sure you have correctly set the REDIS_HOST, REDIS_PASSWORD, and REDIS_PORT environment variables when deploying the Scrapyd app on Heroku (see the connectivity-check sketch after this list).
  2. Execute pip install -U redis on the PC where you execute the redis commands.
  3. Try the redis commands again.
  4. If it still fails: (1) visit https://dashboard.heroku.com/apps/{your-scrapyd-app}/settings, click the "More" button at the top-right corner, select "Run console", and post the result of the pip list command; (2) post the result of pip list on the PC where you execute the redis commands; (3) post the full log file of your Scrapy job.
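A quick way to rule out connection problems (item 1 above) is to ping Redis from the same PC with the same credentials. A minimal sketch, assuming the REDIS_* values are exported as local environment variables; adjust to however you store them:

```python
import os

import redis

# Assumes REDIS_HOST / REDIS_PORT / REDIS_PASSWORD are exported locally
# with the same values used as Heroku config vars.
r = redis.Redis(
    host=os.environ['REDIS_HOST'],
    port=int(os.environ.get('REDIS_PORT', 6379)),
    password=os.environ.get('REDIS_PASSWORD'),
)

print(r.ping())                        # True if host/port/password are correct
print(r.llen('mycrawler:start_urls'))  # number of seed URLs still queued
```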