wrzto / scrapy-wangiyun

Crawls music information from the entire NetEase Cloud Music site (including the comment-count API)
21 stars 11 forks

Error: KeyError: 'Spider not found: wangyiyun' #1

Open zmarvin opened 7 years ago

zmarvin commented 7 years ago

Hey, after modifying your code I'm getting this "Spider not found" error. I've been stuck on it for a long time without solving it; could you take a look? This is the URL of my fork of your code: https://github.com/zmarvin/scrapy-wangiyun

wrzto commented 7 years ago

Run `scrapy crawl wangyiyun` from the directory that contains the `scrapy.cfg` file.

zmarvin commented 7 years ago

That didn't work. Here's my log:

```
ubuntu@VM-17-78-ubuntu:~/spiderDir/wangyiyun$ scrapy crawl wangyiyun
/home/ubuntu/.local/lib/python2.7/site-packages/scrapy/spiderloader.py:37: RuntimeWarning:
Traceback (most recent call last):
  File "/home/ubuntu/.local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 31, in _load_all_spiders
    for module in walk_modules(name):
  File "/home/ubuntu/.local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 71, in walk_modules
    submod = import_module(fullpath)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/home/ubuntu/spiderDir/wangyiyun/wangyiyun/spiders/music_spider.py", line 10, in <module>
    import requests
ImportError: No module named requests
Could not load spiders from module 'wangyiyun.spiders'. Check SPIDER_MODULES setting
  warnings.warn(msg, RuntimeWarning)
2017-01-17 22:02:22 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: wangyiyun)
2017-01-17 22:02:22 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'wangyiyun.spiders', 'SPIDER_MODULES': ['wangyiyun.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'wangyiyun'}
Traceback (most recent call last):
  File "/home/ubuntu/.local/bin/scrapy", line 11, in <module>
    sys.exit(execute())
  File "/home/ubuntu/.local/lib/python2.7/site-packages/scrapy/cmdline.py", line 142, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/scrapy/cmdline.py", line 88, in _run_print_help
    func(*a, **kw)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/scrapy/cmdline.py", line 149, in _run_command
    cmd.run(args, opts)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/scrapy/crawler.py", line 162, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/scrapy/crawler.py", line 190, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/scrapy/crawler.py", line 194, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
  File "/home/ubuntu/.local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 51, in load
    raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: wangyiyun'
```
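Note for readers hitting the same error: the `KeyError` at the bottom of this log is a symptom, not the cause. The `RuntimeWarning` near the top shows that `wangyiyun/spiders/music_spider.py` failed at import time with `ImportError: No module named requests`, so none of its spiders were ever registered. A loose sketch (not Scrapy's actual implementation) of why an import failure leads to "Spider not found":

```python
import importlib

def load_spider_modules(module_names):
    """Loosely mimic how Scrapy walks SPIDER_MODULES: each listed module is
    imported, and any module that raises ImportError is skipped with a warning,
    so spiders defined inside it are never registered by name."""
    loaded, skipped = [], []
    for name in module_names:
        try:
            importlib.import_module(name)
            loaded.append(name)
        except ImportError:
            # Scrapy warns here: "Could not load spiders from module ..."
            # and carries on; a later `scrapy crawl <name>` then fails with
            # KeyError: 'Spider not found: <name>'.
            skipped.append(name)
    return loaded, skipped
```

The fix is to install the missing dependency into the interpreter Scrapy runs under, e.g. `pip install requests` (or `pip install --user requests` for the `~/.local` setup shown in the log), then rerun `scrapy crawl wangyiyun`.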

Caimocor commented 6 years ago

Hey zmarvin, what did you do to solve this problem?