dataabc / weibo-search

Fetches Weibo search results; the search can be either a Weibo keyword search or a Weibo topic search.

KeyError: 'search' #384

Open · DedDol opened 1 year ago

DedDol commented 1 year ago

Sorry to bother you. I previously read https://github.com/dataabc/weibo-search/issues/2 and tried the method described there, but switching to the Tsinghua mirror midway failed outright, and running pip install weibo afterward still produced an error. I then tried a different approach and swapped in your entire weibo folder, which gave me KeyError: 'search'. Could you tell me how to resolve this?

C:\Users\18681\weibo>scrapy crawl search -s JOBDIR=crawls/search
2023-06-14 04:33:50 [scrapy.utils.log] INFO: Scrapy 2.9.0 started (bot: weibo)
2023-06-14 04:33:50 [scrapy.utils.log] INFO: Versions: lxml 4.9.2.0, libxml2 2.9.12, cssselect 1.2.0, parsel 1.8.1, w3lib 2.1.1, Twisted 22.10.0, Python 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)], pyOpenSSL 23.2.0 (OpenSSL 3.1.1 30 May 2023), cryptography 41.0.1, Platform Windows-10-10.0.22000-SP0
Traceback (most recent call last):
  File "O:\python\Lib\site-packages\scrapy\spiderloader.py", line 77, in load
    return self._spiders[spider_name]


KeyError: 'search'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "O:\python\Scripts\scrapy.exe\__main__.py", line 7, in <module>
  File "O:\python\Lib\site-packages\scrapy\cmdline.py", line 158, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "O:\python\Lib\site-packages\scrapy\cmdline.py", line 111, in _run_print_help
    func(*a, **kw)
  File "O:\python\Lib\site-packages\scrapy\cmdline.py", line 166, in _run_command
    cmd.run(args, opts)
  File "O:\python\Lib\site-packages\scrapy\commands\crawl.py", line 23, in run
    crawl_defer = self.crawler_process.crawl(spname, **opts.spargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "O:\python\Lib\site-packages\scrapy\crawler.py", line 239, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "O:\python\Lib\site-packages\scrapy\crawler.py", line 273, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "O:\python\Lib\site-packages\scrapy\crawler.py", line 353, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "O:\python\Lib\site-packages\scrapy\spiderloader.py", line 79, in load
    raise KeyError(f"Spider not found: {spider_name}")
KeyError: 'Spider not found: search'
dataabc commented 1 year ago

pip is for installing third-party packages; this program has not been published as a pip package, so running pip install weibo is incorrect. You also cannot use the weibo folder directly. You should first install scrapy and the required dependencies, then download this program and run the command line from the weibo-search directory.
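For anyone hitting the same error, a minimal sketch of the workflow the maintainer describes might look like the following; the requirements.txt step assumes the repository ships one, so install the dependencies however the project's README specifies:

git clone https://github.com/dataabc/weibo-search.git
cd weibo-search
pip install scrapy
pip install -r requirements.txt
scrapy crawl search -s JOBDIR=crawls/search

The key point is the last step: the crawl command is issued from the weibo-search project directory, not from an arbitrary folder.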

DedDol commented 1 year ago

Thanks for the reply. It turns out the problem was that I hadn't cd'd into the folder containing search.py before running the scrapy crawl search command. I've resolved it myself, and I'm leaving this note as a heads-up for anyone else with the same problem. Thanks again for the reply.
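A note for anyone else tripped up by this: scrapy crawl only finds spiders when run from inside the Scrapy project, i.e. the directory tree containing scrapy.cfg, and scrapy list prints the spiders Scrapy can see, which makes it a quick sanity check before crawling. A hypothetical Windows session (the project path is illustrative):

cd C:\Users\you\weibo-search
scrapy list
scrapy crawl search -s JOBDIR=crawls/search

If scrapy list does not print search, Scrapy cannot see the spider from the current directory, and scrapy crawl search will fail with the KeyError shown above.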