-
Hello, I have a question about log files. After some time, jobs disappear from the scrapyd server and I can't download the log from 127.0.0.1:6800/logs/spiders/mhn/79e8c970575511e7a52b742f68d0cfee.log, but…
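If Scrapyd itself is purging old jobs, the setting to check is probably `jobs_to_keep` in scrapyd.conf, which limits how many finished jobs' log and item files are kept per spider; `finished_to_keep` similarly limits how many finished jobs stay visible in the web UI and listjobs.json. A minimal sketch of a config raising both (the values are illustrative, not recommendations):
```
[scrapyd]
# Keep more finished jobs' log/item files per spider before Scrapyd deletes them.
jobs_to_keep = 100
# Keep more finished jobs listed in the web UI and listjobs.json.
finished_to_keep = 500
```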
-
I'm trying to test the Celery scheduler with the example project (from this repo).
```
$ python manage.py celeryd -l info -B --settings=example_project.settings
$ scrapyd
```
and then:
```
INFO/MainProcess] R…
```
-
Hi,
I'm trying to export Scrapyd's output to a JSON file.
```
marco@pc:~/crawlscrape/urls_listing$ curl http://localhost:6800/listversions.json?project="urls_listing"
{"status": "ok", "versions"…
-
This implementation is completely dependent on that site working properly. There should at least be a redundant data source so that if the site goes down the middleware keeps working.
In my part…
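The report is cut off here, but the fallback idea can be sketched independently of the project's code: try the live site first and fall back to a cached copy if it is down. Everything below (load_data, CACHE_PATH) is a hypothetical illustration, not the project's actual middleware:
```
# Fallback pattern: prefer the live site, fall back to the last cached copy.
import json
from urllib.request import urlopen

CACHE_PATH = "data_cache.json"   # hypothetical local cache file

def load_data(url):
    try:
        with urlopen(url, timeout=10) as resp:
            data = json.load(resp)
        with open(CACHE_PATH, "w") as f:
            json.dump(data, f)   # refresh the cache while the site is healthy
        return data
    except OSError:
        # Site unreachable: serve the last known-good copy instead of failing.
        with open(CACHE_PATH) as f:
            return json.load(f)
```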
-
Hi, guys. I have a problem. When I call the function
> self.api.schedule('exa', job.spider.sp_name, query='id=21')
I get empty kwargs, and as a result I don't have the argument in my spider. What did I do wro…
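The question is cut off, but for reference: when a job is scheduled through Scrapyd's schedule.json, extra parameters are passed to the spider as spider arguments, so query='id=21' should show up as an attribute (or __init__ keyword) on the spider. It isn't clear which wrapper self.api is here, so this is only a sketch of the receiving side, with an illustrative spider name:
```
# Spider arguments sent via schedule.json arrive as constructor kwargs.
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"   # illustrative; not the spider from the question

    def __init__(self, query=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.query = query   # "id=21" if scheduled with query='id=21'

    def start_requests(self):
        self.logger.info("query argument: %r", self.query)
        yield scrapy.Request("http://example.com", callback=self.parse)

    def parse(self, response):
        pass
```
If self.query is still None here, the wrapper is probably dropping the keyword arguments before they reach schedule.json.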
-
[Scrapyd](https://github.com/scrapy/scrapyd) is an application for deploying and running Scrapy spiders. It enables one to deploy scrapers and control their spiders using a JSON API.
It typically r…
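As an example of that JSON API, scheduling a run is a single POST to schedule.json. A minimal sketch, assuming Scrapyd on localhost:6800 and illustrative project/spider names:
```
# Schedule a spider run through Scrapyd's JSON API.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

payload = urlencode({"project": "myproject", "spider": "myspider"}).encode()
with urlopen("http://localhost:6800/schedule.json", data=payload) as resp:
    print(json.load(resp))   # e.g. {"status": "ok", "jobid": "..."}
```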
-
I've installed scrapyd with pip. Unexpectedly, I'm getting an error when I run the "scrapyd" command in the terminal.
```
Alican:~ alicanyilmaz$ scrapyd
Traceback (most recent call last):
File "/Library/Fr…
-
```
zeddeMacBook-Air:SpiderKeeper zed$ scrapyd
Traceback (most recent call last):
File "/Users/zed/.pyenv/versions/3.5.1/bin/scrapyd", line 9, in
load_entry_point('scrapyd==1.1.1', 'console…
```
-
Right now it assumes that scrapyd-client is always sending the egg to the same URL defined in the scrapy.cfg.
But if you have many scrapyd clients and you want to specify the URL of your scrapyd …
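If this refers to the standard scrapyd-client workflow, scrapy.cfg can already hold several deploy targets, each with its own url, and scrapyd-deploy takes the target name. A sketch with illustrative target and project names:
```
[deploy:local]
url = http://localhost:6800/
project = myproject

[deploy:production]
url = http://scrapyd.example.com:6800/
project = myproject
```
Deploying to a specific server is then `scrapyd-deploy production -p myproject`.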
-
```
^CzeddeMacBook-Air:spider zed$ spiderkeeper --server=http://localhost:6800
/Users/zed/.pyenv/versions/3.5.1/lib/python3.5/site-packages/SpiderKeeper/app/__init__.py:9: ExtDeprecationWarning: Imp…
```