**Open** · nittolese opened this issue 4 years ago

(1) Run `pip install -U redis` on the PC where you execute the redis commands.
(2) Post the result of the command `pip list` on the PC where you execute the redis commands.
(3) Post the full log file of your Scrapy job.
**Describe the bug**
I am struggling to make it work as described here: https://github.com/my8100/scrapyd-cluster-on-heroku#deploy-and-run-distributed-spiders. Whenever I try this:
`r.lpush('mycrawler:start_urls', 'http://books.toscrape.com', 'http://quotes.toscrape.com')`
no job is started.

**To Reproduce**
Steps to reproduce the behavior: run the `mycrawler_redis` spider from `scrapy_redis_demo_project.zip` as described in the "Deploy and run distributed spiders" paragraph.

**Expected behavior**
I expect that when I use the `lpush` method on `'mycrawler:start_urls'`, a crawl job is fired on Heroku.

**Screenshots**
**Environment (please complete the following information):**

**Additional context**
I have sent you a mail with the same title as this ticket, with a video attached.
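For reference, the push from the bug description can be sketched end to end with the redis-py client. This is a minimal sketch, not the project's own code: the `REDIS_URL` environment variable and the `push_start_urls` helper are illustrative assumptions, and the key name must match the spider's `redis_key` setting for scrapy-redis to pick the URLs up.

```python
# Minimal sketch of seeding a scrapy-redis crawl via LPUSH.
# Assumptions (not confirmed by this ticket): redis-py is installed,
# and REDIS_URL points at the same Redis instance the cluster reads from.
import os

START_URLS_KEY = 'mycrawler:start_urls'  # must equal the spider's redis_key

def push_start_urls(client, *urls):
    """Push seed URLs onto the list that scrapy-redis polls for start requests.

    Returns the list length after the push, mirroring Redis LPUSH semantics.
    """
    return client.lpush(START_URLS_KEY, *urls)

if __name__ == '__main__':
    import redis  # pip install -U redis
    r = redis.from_url(os.environ.get('REDIS_URL', 'redis://localhost:6379'))
    length = push_start_urls(
        r,
        'http://books.toscrape.com',
        'http://quotes.toscrape.com',
    )
    print('queue length after push:', length)
```

If the push succeeds but no job starts, comparing `LLEN mycrawler:start_urls` before and after can show whether a spider is actually consuming the list, which narrows the problem to the Heroku side.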