zero0mum / cartoon_crawler

A crawler for three pirated comic sites (百年, 漫画呗, and 古风). It scrapes the image URLs, lets you download or import comics, and generates a local web page for reading them online or offline. Comic-crawler reader page preview: https://mumu_zero.gitee.io/
Apache License 2.0

exe #3

Open xxxdbb110 opened 1 month ago

xxxdbb110 commented 1 month ago

Hello! This is my first time running the exe file, and it reports the following error (WinError 10054: the remote host forcibly closed an existing connection):

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "requests\adapters.py", line 667, in send File "urllib3\connectionpool.py", line 799, in urlopen File "urllib3\util\retry.py", line 550, in increment File "urllib3\packages\six.py", line 769, in reraise File "urllib3\connectionpool.py", line 715, in urlopen File "urllib3\connectionpool.py", line 404, in _make_request File "urllib3\connectionpool.py", line 1058, in _validateconn File "urllib3\connection.py", line 419, in connect File "urllib3\util\ssl.py", line 449, in ssl_wrapsocket File "urllib3\util\ssl.py", line 493, in _ssl_wrap_socket_impl File "ssl.py", line 500, in wrap_socket File "ssl.py", line 1040, in _create File "ssl.py", line 1309, in do_handshake urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(10054, '远程主机强迫关闭了一个现有的连接 。', None, 10054, None))

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "漫画爬虫.py", line 102, in File "requests\sessions.py", line 602, in get File "requests\sessions.py", line 589, in request File "requests\sessions.py", line 703, in send File "requests\adapters.py", line 682, in send requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, '远程主机强迫关闭了一个现有的连接。', None, 10054, None)) 报错信息如上,请按下任意键退出...

zero0mum commented 4 weeks ago

Are you running a VPN? Try turning the VPN off and running it again.

xxxdbb110 commented 3 weeks ago

I'm not using a VPN.

zero0mum commented 2 weeks ago

try:
    res_url = r.get('https://zero0mum.github.io/web/others/sites.json')  # 获取网站列表 (fetch the site list)
except:
    traceback.print_exc()
    input('报错信息如上,请按下任意键退出...')  # "Error message is above, press any key to exit..."
else:
    print('拉取成功!')  # "Fetched successfully!"

The error comes from line 102 of the script, the line that fetches the site list. Try opening this GitHub Pages static link directly in your browser and see whether it loads: https://zero0mum.github.io/web/others/sites.json
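If the browser check is inconvenient, here is a minimal sketch of the same check done from Python. It only assumes the requests and urllib3 packages the crawler already depends on; the timeout value, retry policy, and browser-style User-Agent header are illustrative choices, not part of the original 漫画爬虫.py.

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    URL = 'https://zero0mum.github.io/web/others/sites.json'

    # Retry a few times so a single transient connection reset does not abort the check.
    session = requests.Session()
    retry = Retry(total=3, backoff_factor=1, status_forcelist=[500, 502, 503, 504])
    session.mount('https://', HTTPAdapter(max_retries=retry))

    # A browser-like User-Agent, in case the default python-requests one is being dropped.
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

    try:
        res = session.get(URL, headers=headers, timeout=10)
        print('HTTP status:', res.status_code)
        print(res.text[:200])  # first part of sites.json, just to confirm the content
    except requests.exceptions.ConnectionError as exc:
        # 10054 here means the TCP connection was reset before the TLS handshake finished,
        # i.e. this machine cannot reach github.io cleanly from Python.
        print('Still reset:', exc)

If this script fails with the same 10054 reset while the browser loads the URL fine, the difference most likely lies in the local network path (proxy settings or security software interfering with the TLS handshake) rather than in the crawler code itself.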