crawlab-team / crawlab

Distributed web crawler admin platform for spider management, regardless of language or framework.
https://www.crawlab.cn
BSD 3-Clause "New" or "Revised" License

0.6.1: Scrapy crawl stalls mid-run and stays blocked; after clicking Cancel, crawling resumes in the background #1259

Closed ykiller2012 closed 1 year ago

ykiller2012 commented 1 year ago

Bug description: When running Scrapy, crawling stops midway and remains blocked; after the Cancel button is clicked, crawling resumes in the background.

Steps to reproduce the bug:

  1. On the Crawlab 0.6.1 page, start Scrapy with the custom command python run.py
  2. Partway through, Scrapy stops crawling and remains blocked
  3. After clicking the Cancel button, the result count on the page starts increasing again, and inspecting the background process shows the spider has resumed crawling

Expected behavior: xxx works.

Screenshots: screenshot 1
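For context, one classic cause of this exact symptom in process-supervising tools (an assumption here, not a confirmed diagnosis of Crawlab): if the supervisor captures the spider process's stdout through a pipe but stops reading from it, the child blocks as soon as the OS pipe buffer fills (typically 64 KiB on Linux), and resumes the moment the pipe is drained or detached, e.g. when the task is cancelled. A minimal stdlib-only sketch of the effect:

```python
import subprocess
import sys
import time

# Child writes ~1 MiB to stdout. Since a typical OS pipe buffer is far
# smaller, the child blocks on write() once the buffer fills if the
# parent never reads from the pipe.
child_code = (
    "import sys\n"
    "chunk = b'x' * 1024\n"
    "for _ in range(1024):\n"
    "    sys.stdout.buffer.write(chunk)\n"
    "    sys.stdout.buffer.flush()\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdout=subprocess.PIPE,
)

time.sleep(1.0)
blocked = proc.poll() is None  # still running => stuck on the full pipe

out, _ = proc.communicate()    # draining stdout unblocks the child
print(blocked, len(out), proc.returncode)
```

If the supervisor only resumes reading (or closes its end of the pipe) when the user clicks Cancel, the crawl would appear frozen until exactly that moment, which matches the behavior described above.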

tikazyq commented 1 year ago

Duplicate of https://github.com/crawlab-team/crawlab/issues/1248

tikazyq commented 1 year ago

Fixed in https://github.com/crawlab-team/crawlab/pull/1334