dataabc / weibo-search

Fetches Weibo search results; the search can be either a keyword search or a topic search.

Error: ModuleNotFoundError: No module named 'weibo.settings'; 'weibo' is not a package #2

Open xm634634 opened 4 years ago

xm634634 commented 4 years ago

Environment: OS: Manjaro 19.0.2 Kyria Kernel: x86_64 Linux 5.4.30-1-MANJARO WM: i3 CPU: AMD A8-4500M APU with Radeon HD Graphics @ 4x 1.9GHz RAM: 2587MiB / 7401MiB

Python: Python 3.8.2 (default, Feb 26 2020, 22:21:03) [GCC 9.2.1 20200130] on linux

Already installed:

pip install pillow
pip install scrapy
pip install weibo

Error details:

Traceback (most recent call last):
  File "/home/cream/.local/bin/scrapy", line 8, in <module>
    sys.exit(execute())
  File "/home/cream/.local/lib/python3.8/site-packages/scrapy/cmdline.py", line 113, in execute
    settings = get_project_settings()
  File "/home/cream/.local/lib/python3.8/site-packages/scrapy/utils/project.py", line 69, in get_project_settings
    settings.setmodule(settings_module_path, priority='project')
  File "/home/cream/.local/lib/python3.8/site-packages/scrapy/settings/__init__.py", line 287, in setmodule
    module = import_module(module)
  File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 970, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'weibo.settings'; 'weibo' is not a package
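A likely root cause, given the `pip install weibo` step above: the PyPI weibo distribution installs a flat module named weibo, which can shadow the project's weibo package and produce exactly this "'weibo' is not a package" message (the fix later in this thread, uninstalling weibo, is consistent with that). A minimal diagnostic sketch, using only the standard library:

```python
# Show which 'weibo' Python will actually import. If it resolves to a single
# file in site-packages instead of the project package, the PyPI 'weibo'
# module is shadowing the Scrapy project of the same name.
import importlib.util

spec = importlib.util.find_spec("weibo")
print(spec.origin)                      # file path of the winning 'weibo'
print(spec.submodule_search_locations)  # None => flat module ("not a package")
```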

xm634634 commented 4 years ago

A correction: in init.py, 'init' has an underscore '_' on each side (i.e. __init__.py), and the empty File "" entries actually contain '<frozen importlib._bootstrap>'.

dataabc commented 4 years ago

You can create a new project, then replace the newly generated files with the current code.

scrapy startproject weibo
cd weibo
scrapy genspider search weibo.com

Replace the newly generated files with the existing settings.py, pipelines.py, items.py, utils, and search.py.
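Concretely, the replacement step might look like this (a sketch; it assumes the original weibo-search checkout sits next to the newly generated project):

```bash
# Copy the repo's files over the freshly generated ones (paths are assumptions).
cp ../weibo-search/weibo/settings.py  weibo/settings.py
cp ../weibo-search/weibo/pipelines.py weibo/pipelines.py
cp ../weibo-search/weibo/items.py     weibo/items.py
cp -r ../weibo-search/weibo/utils     weibo/
cp ../weibo-search/weibo/spiders/search.py weibo/spiders/search.py
```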

xm634634 commented 4 years ago

> You can create a new project, then replace the newly generated files with the current code.
>
> scrapy startproject weibo
> cd weibo
> scrapy genspider search weibo.com
>
> Replace the newly generated files with the existing settings.py, pipelines.py, items.py, utils, and search.py.

Got it, thanks! My laptop is at the office; I'll try this tomorrow and report back.

Also, I'd like to learn web scraping. I've read Web Scraping with Python (《Python网络爬虫权威指南》) but still feel I don't know where to start. Is there a learning path or method you'd recommend?

A couple of suggestions:

1. While weibo-search is collecting data, start a thread that calls weibo-crawler to fetch related data.
2. When a crawl hits an error, handle the exception and restart the task (see the sketch below).
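For the second suggestion, a minimal retry sketch (hypothetical; the function name and parameters are made up for illustration, this is not code from the project):

```python
# Hypothetical wrapper: rerun a crawl task after an exception, with a delay.
import time

def run_with_retry(task, max_retries=3, delay=5):
    """Call task(); on an exception, wait `delay` seconds and retry."""
    for attempt in range(1, max_retries + 1):
        try:
            return task()
        except Exception as exc:
            print(f"attempt {attempt} failed: {exc}")
            if attempt == max_retries:
                raise
            time.sleep(delay)
```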

dataabc commented 4 years ago

I'm not a scraping expert myself; the project only keeps getting bug fixes and new features because users keep raising requests, so I'm not sure which skills to recommend. That said, you could practice by writing some projects of your own, so you learn where and how the book's material applies; that also makes it stick. It's also worth learning the popular frameworks in this space: using a framework saves a lot of work compared with starting from scratch, and performance may be better too. That's all that comes to mind for now.

On the first suggestion: do you mean using weibo-crawler to fetch the corresponding users' posts after the data is retrieved? The purpose of this program is to fetch posts containing the keyword, and a user's other posts may have nothing to do with it, so that feature would have to be optional. Alternatively, the users' nicknames and user_ids could be saved to a file, so each user of the tool can decide whether to fetch those posts; if so, the file can be passed to weibo-crawler (a sketch of this idea follows below). The second is a good suggestion; the program already does exception handling in many places, but I haven't encountered or reproduced the error above, so I don't yet know how to handle it, and I'm not sure whether it's system-related.
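The hand-off described above could be as simple as appending each matched post's author to a file (a sketch; the file name and layout here are assumptions, so check the input format weibo-crawler actually expects):

```python
# Sketch: record each matched post's author for an optional weibo-crawler run.
import csv

def save_author(user_id, nickname, path="users_to_crawl.csv"):
    """Append one (user_id, nickname) row; the format here is illustrative."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([user_id, nickname])
```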

Thanks again; both suggestions are very good. If you hit any more problems, feel free to keep reporting them.

xm634634 commented 4 years ago

> You can create a new project, then replace the newly generated files with the current code.
>
> scrapy startproject weibo
> cd weibo
> scrapy genspider search weibo.com
>
> Replace the newly generated files with the existing settings.py, pipelines.py, items.py, utils, and search.py.

Error: Module 'weibo' already exists. As a workaround I renamed weibo to weibo1 and then replaced the .py files.

Traceback (most recent call last):
  File "/home/cream/.local/bin/scrapy", line 8, in <module>
    sys.exit(execute())
  File "/home/cream/.local/lib/python3.8/site-packages/scrapy/cmdline.py", line 144, in execute
    cmd.crawler_process = CrawlerProcess(settings)
  File "/home/cream/.local/lib/python3.8/site-packages/scrapy/crawler.py", line 265, in __init__
    super(CrawlerProcess, self).__init__(settings)
  File "/home/cream/.local/lib/python3.8/site-packages/scrapy/crawler.py", line 137, in __init__
    self.spider_loader = _get_spider_loader(settings)
  File "/home/cream/.local/lib/python3.8/site-packages/scrapy/crawler.py", line 345, in _get_spider_loader
    return loader_cls.from_settings(settings.frozencopy())
  File "/home/cream/.local/lib/python3.8/site-packages/scrapy/spiderloader.py", line 60, in from_settings
    return cls(settings)
  File "/home/cream/.local/lib/python3.8/site-packages/scrapy/spiderloader.py", line 24, in __init__
    self._load_all_spiders()
  File "/home/cream/.local/lib/python3.8/site-packages/scrapy/spiderloader.py", line 46, in _load_all_spiders
    for module in walk_modules(name):
  File "/home/cream/.local/lib/python3.8/site-packages/scrapy/utils/misc.py", line 69, in walk_modules
    mod = import_module(path)
  File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 970, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'weibo.spiders'; 'weibo' is not a package
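The new error is consistent with the rename: the repo's settings.py presumably still declares SPIDER_MODULES = ['weibo.spiders'], so Scrapy tries to import weibo.spiders, finds only the flat PyPI weibo module, and fails the same way. If the package really is renamed to weibo1, the copied settings would have to follow, roughly (a sketch of the affected settings):

```python
# weibo1/settings.py (sketch): after a rename these must use the new package name
BOT_NAME = 'weibo1'
SPIDER_MODULES = ['weibo1.spiders']
NEWSPIDER_MODULE = 'weibo1.spiders'
```

scrapy.cfg's `default = weibo1.settings` line is generated correctly by `scrapy startproject weibo1`, so only the copied-in settings.py would need editing.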

xm634634 commented 4 years ago

This feels like it's caused by an unclean environment? Could I delete the existing environment entirely, rebuild it, copy only the data across, and keep running the project?

dataabc commented 4 years ago

Does it still fail if you only create the project and don't replace any files?

xm634634 commented 4 years ago

> Does it still fail if you only create the project and don't replace any files?

Creating a project named weibo fails: Error: Module 'weibo' already exists

xm634634 commented 4 years ago

After cd'ing to a different directory, scrapy startproject weibo fails the same way.

dataabc commented 4 years ago

You were probably running it in a subdirectory of the original project directory, which does fail like this. Delete the original project, or create the new one in a completely unrelated directory (not a subdirectory of the original project); that should work.

xm634634 commented 4 years ago

> You were probably running it in a subdirectory of the original project directory, which does fail like this. Delete the original project, or create the new one in a completely unrelated directory (not a subdirectory of the original project); that should work.

After reinstalling the OS it still fails; I don't know whether it's because all three tools live under /home/cream/DataABC.

Traceback (most recent call last):
  File "/usr/bin/scrapy", line 8, in <module>
    sys.exit(execute())
  File "/usr/lib/python3.8/site-packages/scrapy/cmdline.py", line 113, in execute
    settings = get_project_settings()
  File "/usr/lib/python3.8/site-packages/scrapy/utils/project.py", line 69, in get_project_settings
    settings.setmodule(settings_module_path, priority='project')
  File "/usr/lib/python3.8/site-packages/scrapy/settings/__init__.py", line 287, in setmodule
    module = import_module(module)
  File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 970, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'weibo.settings'; 'weibo' is not a package

Running under /home/cream/ still fails:

scrapy crawl search -s JOBDIR=crawls/search
Scrapy 2.0.1 - no active project

Unknown command: crawl

xm634634 commented 4 years ago

scrapy startproject weibo

Error: Module 'weibo' already exists

dataabc commented 4 years ago

Unknown command: crawl is probably because you didn't run it from the project directory, e.g. the weibo-search directory; run scrapy crawl search -s JOBDIR=crawls/search from inside the project directory. If scrapy startproject weibo also fails in an unrelated directory (not under this project), the system may be unable to create projects at all, which would suggest that Scrapy itself is broken in the current environment.
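Scrapy decides whether there is an "active project" by looking for a scrapy.cfg in the current directory or one of its parents, so project commands like crawl only exist inside the project tree:

```bash
cd weibo-search   # the directory that contains scrapy.cfg
scrapy crawl search -s JOBDIR=crawls/search
```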

xm634634 commented 4 years ago

> Unknown command: crawl is probably because you didn't run it from the project directory, e.g. the weibo-search directory; run scrapy crawl search -s JOBDIR=crawls/search from inside the project directory. If scrapy startproject weibo also fails in an unrelated directory (not under this project), the system may be unable to create projects at all, which would suggest that Scrapy itself is broken in the current environment.

It's not a directory problem; I've tried every directory. It feels like a Scrapy issue. Which version of Scrapy are you using?

dataabc commented 4 years ago

Scrapy 2.0.

xm634634 commented 4 years ago

My laptop is still at the office, so I dug out a 2008 laptop and rebuilt the environment successfully:

sudo pip install scrapy
sudo pip install pillow
sudo pip install pymysql

Installing in this order works. Next time you could try: sudo pip install scrapy pillow pymysql tqdm (weibo-crawler needs tqdm).

Earlier, for convenience, I had done it all in one command: sudo pip install lxml pillow pymysql scrapy tqdm weibo. The extra packages are needed by weibo-crawler and weibo-follow, and out of habit I sorted the package names alphabetically.

Then, on a clean system, after git clone https://github.com/dataabc/weibo-search.git, I noticed while installing scrapy that it already pulls in lxml; the problem most likely started there.
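If the one-line install (which included the PyPI weibo package) is the culprit, the shadowing is easy to confirm on an affected machine (a hedged check; both commands are standard):

```bash
pip show weibo                                   # lists the installed PyPI 'weibo' distribution, if any
python -c "import weibo; print(weibo.__file__)"  # a single site-packages file => it shadows the project
```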

dataabc commented 4 years ago

Thanks for the feedback; if you hit any other problems, feel free to keep the discussion going.

xm634634 commented 4 years ago

1. Remove the pip dependencies: sudo pip install scrapy pillow pymysql tqdm lxml weibo
2. Set the pip index: sudo pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple (installing scrapy from the Huawei mirror fails, so switch to the Tsinghua mirror)
3. Install the required dependencies: sudo pip install scrapy pillow pymysql tqdm

Done, calling it a day!

wyycommu commented 1 year ago

Thanks, this method works. One note: the final commands have a small mistake, though the approach is right.

1. Remove the pip dependencies (this should be uninstall, not install; also, sudo performs the operation for every user on the system, which isn't necessary; I've verified it works without sudo): pip uninstall scrapy pillow pymysql tqdm lxml weibo
2. Set the pip index: pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
3. Reinstall the required dependencies: pip install scrapy pillow pymysql tqdm
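After step 3, a quick sanity check run from the weibo-search directory (a sketch, assuming `python -c` puts the current directory on sys.path so the project package wins once the PyPI module is gone):

```bash
python -c "import weibo; print(weibo.__path__)"  # packages have __path__; the old flat module did not
```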