When I run `SCRAPY_PROJECT=sitesAirtable pipenv run scrapy crawl updateSites`, I get the error below. Am I just running the wrong command?
```
2020-06-24 18:41:26 [scrapy.utils.log] INFO: Scrapy 2.0.1 started (bot: scrapybot)
2020-06-24 18:41:26 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 20.3.0, Python 3.7.5 (default, Nov 7 2019, 10:50:52) - [GCC 8.3.0], pyOpenSSL 19.1.0 (OpenSSL 1.1.1f 31 Mar 2020), cryptography 2.9, Platform Linux-4.15.0-1065-aws-x86_64-with-Ubuntu-18.04-bionic
2020-06-24 18:41:26 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.epollreactor.EPollReactor
Traceback (most recent call last):
  File "/root/.local/share/virtualenvs/FbScraper-il7tl8-d/lib/python3.7/site-packages/scrapy/spiderloader.py", line 68, in load
    return self._spiders[spider_name]
KeyError: 'updateSites'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/.local/share/virtualenvs/FbScraper-il7tl8-d/bin/scrapy", line 8, in <module>
    sys.exit(execute())
  File "/root/.local/share/virtualenvs/FbScraper-il7tl8-d/lib/python3.7/site-packages/scrapy/cmdline.py", line 145, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/root/.local/share/virtualenvs/FbScraper-il7tl8-d/lib/python3.7/site-packages/scrapy/cmdline.py", line 99, in _run_print_help
    func(*a, **kw)
  File "/root/.local/share/virtualenvs/FbScraper-il7tl8-d/lib/python3.7/site-packages/scrapy/cmdline.py", line 153, in _run_command
    cmd.run(args, opts)
  File "/root/.local/share/virtualenvs/FbScraper-il7tl8-d/lib/python3.7/site-packages/scrapy/commands/crawl.py", line 57, in run
    crawl_defer = self.crawler_process.crawl(spname, **opts.spargs)
  File "/root/.local/share/virtualenvs/FbScraper-il7tl8-d/lib/python3.7/site-packages/scrapy/crawler.py", line 176, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "/root/.local/share/virtualenvs/FbScraper-il7tl8-d/lib/python3.7/site-packages/scrapy/crawler.py", line 209, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "/root/.local/share/virtualenvs/FbScraper-il7tl8-d/lib/python3.7/site-packages/scrapy/crawler.py", line 213, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
  File "/root/.local/share/virtualenvs/FbScraper-il7tl8-d/lib/python3.7/site-packages/scrapy/spiderloader.py", line 70, in load
    raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: updateSites'
```