-
Stopping jobs mostly works, but there are a number of cases to test (see the sketch after this list).
1. Just created, but not running yet -> remove job/container without stopping it (not tested)
2. Running -> send signal (tested in…
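A minimal sketch of the two cases above, assuming a hypothetical job record that only has a PID once it is actually running; the names here are illustrative, not the project's real API:
```python
import os
import signal

class Job:
    """Hypothetical job record; the real scheduler stores more state."""
    def __init__(self, pid=None):
        self.pid = pid  # None until the job's process has actually started

def stop_job(job, pending_jobs):
    if job.pid is None:
        # Case 1: just created, not running yet -> remove it without stopping anything.
        pending_jobs.remove(job)
    else:
        # Case 2: running -> send a signal and let the job shut down on its own.
        os.kill(job.pid, signal.SIGTERM)
```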
-
**Describe the bug**
I am struggling to make this work as described at https://github.com/my8100/scrapyd-cluster-on-heroku#deploy-and-run-distributed-spiders .
Whenever I try to do this:
```r.lpush…
-
https://github.com/open-contracting/kingfisher-collect/issues/917#issuecomment-1103941041
Since this version of Process stores the Scrapyd job ID, it's easy to use [scrapy-log-analyzer](https://scr…
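For reference, a minimal sketch of fetching that log from Scrapyd before feeding it to an analyzer, assuming Scrapyd's usual /logs/<project>/<spider>/<job_id>.log layout; the host, project and spider names are placeholders:
```python
import requests

def fetch_scrapyd_log(job_id, project="myproject", spider="myspider",
                      base_url="http://localhost:6800"):
    # Scrapyd serves job logs under /logs/<project>/<spider>/<job_id>.log
    url = f"{base_url}/logs/{project}/{spider}/{job_id}.log"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.text
```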
-
Hi all,
The scrapyd server has added a new API, daemonstatus.json.
Here is the code:
https://github.com/scrapy/scrapyd/blob/master/scrapyd/webservice.py
Here is the doc:
https://scrapyd.re…
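For example, a quick check against a local Scrapyd instance on the default port:
```python
import requests

# daemonstatus.json reports the node name and how many jobs are pending,
# running and finished.
response = requests.get("http://localhost:6800/daemonstatus.json", timeout=10)
response.raise_for_status()
print(response.json())
# e.g. {"node_name": "...", "status": "ok", "pending": 0, "running": 1, "finished": 5}
```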
-
With auth enabled, my timer tasks stop working.
The response visible in the result is:
![image](https://user-images.githubusercontent.com/49819839/114036358-4cffcb00-9880-11eb-907e-1847cc133aea.p…
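For comparison, a sketch of a scheduling request that supplies HTTP basic auth explicitly; the URL and credentials are placeholders:
```python
import requests
from requests.auth import HTTPBasicAuth

response = requests.post(
    "http://localhost:6800/schedule.json",
    data={"project": "myproject", "spider": "myspider"},
    auth=HTTPBasicAuth("admin", "secret"),  # same credentials as in the auth config
    timeout=10,
)
print(response.status_code, response.json())
```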
-
Only a small number of error cases are handled (i.e. an error API response is returned).
Add a default error handler that returns what scrapyd would return on an error,
and handle more cases with helpful …
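A minimal sketch of what such a default handler could return, mirroring the JSON shape scrapyd itself uses for error responses; the node name is a placeholder:
```python
import json

def default_error_response(message, node_name="scrapyd-node"):
    # Mirror scrapyd's own error shape so clients can handle both uniformly.
    return json.dumps({
        "node_name": node_name,
        "status": "error",
        "message": message,
    })
```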
-
**Describe the bug**
When the asynchronous TWISTED_REACTOR is enabled, deployment fails with an error.
**Traceback**
Traceback (most recent call last):
File "D:\anaconda\envs\scrapy\lib\site-packages\twisted\web\http.py", line 2369, in …
-
Hello,
It would be nice to have a button to download a log file; viewing the log in the browser can cause problems for big log files.
The log can be downloaded from scrapyd, but it also …
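As an interim workaround, the log can be streamed to disk straight from scrapyd; a minimal sketch, with placeholder host, project, spider and job names:
```python
import requests

url = "http://localhost:6800/logs/myproject/myspider/JOB_ID.log"  # placeholder names
with requests.get(url, stream=True, timeout=30) as response:
    response.raise_for_status()
    with open("JOB_ID.log", "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)
```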
-
This line https://github.com/holgerd77/django-dynamic-scraper/blob/master/dynamic_scraper/utils/task_utils.py#L32