-
Currently, eggs are built and uploaded to scrapyd without their dependencies. This means dependencies have to be installed on the scrapyd server for the spiders to run. If the scrapyd server runs…
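For context, scrapyd only unpacks the uploaded egg; to my knowledge it never installs the egg's `install_requires`, so those packages have to be pip-installed on the scrapyd host separately. A minimal sketch of the kind of `setup.py` that scrapyd-deploy builds the egg from, with a made-up project name and dependency list:

```python
# Sketch of a setup.py used by scrapyd-deploy to build the egg.
# Note: scrapyd does not install install_requires when the egg is uploaded,
# so these packages must still be pip-installed on the scrapyd host.
from setuptools import setup, find_packages

setup(
    name="myproject",                     # hypothetical project name
    version="1.0",
    packages=find_packages(),
    entry_points={"scrapy": ["settings = myproject.settings"]},
    install_requires=[
        "requests",        # example runtime dependency
        "beautifulsoup4",  # example runtime dependency
    ],
)
```

Until dependency bundling is supported, the same list effectively has to be mirrored in a `pip install` step on every scrapyd host.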
-
I run it with `docker run -v ~/portia_projects:/app/data/projects:rw -p 9001:9001 scrapinghub/portia` (exactly the same command as in the README).
When I choose Deploy a project, I get this error:
![image](h…
-
Is there a way I can run some code automatically when a job goes from running to finished in scrapyd? I would like to call an API with the finished job info and then store it in a database without pol…
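One option that avoids polling entirely: since scrapyd runs each job as a regular Scrapy crawl, a small Scrapy extension can fire on `spider_closed` and call your API with the job info. A rough sketch, assuming scrapyd exposes the job id via the `SCRAPY_JOB` environment variable and with `NOTIFY_URL` as a placeholder for your own endpoint:

```python
# Sketch of a Scrapy extension that notifies an external API when a job ends.
# NOTIFY_URL is a placeholder; SCRAPY_JOB/SCRAPY_PROJECT are the env vars
# scrapyd is believed to set for each launched job.
import os
import requests
from scrapy import signals

NOTIFY_URL = "http://localhost:8000/job-finished"  # your own API endpoint

class JobFinishedNotifier:
    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_closed(self, spider, reason):
        payload = {
            "job_id": os.environ.get("SCRAPY_JOB"),
            "project": os.environ.get("SCRAPY_PROJECT"),
            "spider": spider.name,
            "reason": reason,
        }
        requests.post(NOTIFY_URL, json=payload, timeout=10)
```

Enable it with something like `EXTENSIONS = {"myproject.extensions.JobFinishedNotifier": 0}` in the project settings; your API can then write the record to the database.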
-
When running scrapyd-deploy on the server as well as on localhost, I get this error:
Packing version 1583288151
Deploying to project "scrapy_app" in https://68.xxx.xx.60:6800/addversion.json
Deploy fai…
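To narrow down whether the failure is on the scrapyd-deploy side or the server side, it can help to post the egg to `addversion.json` directly and look at the raw response. A sketch, assuming the egg was built first with `scrapyd-deploy --build-egg myproject.egg` (file name and project are placeholders):

```python
# Upload a pre-built egg straight to scrapyd's addversion.json endpoint
# and print the raw response, bypassing scrapyd-deploy.
import requests

url = "https://68.xxx.xx.60:6800/addversion.json"  # target from the log above
with open("myproject.egg", "rb") as egg:
    resp = requests.post(
        url,
        data={"project": "scrapy_app", "version": "1583288151"},
        files={"egg": egg},
        timeout=30,
        # verify=False,  # only if the server uses a self-signed certificate
    )
print(resp.status_code)
print(resp.text)
```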
-
Sentry Issue: [KINGFISHER-COLLECT-M](https://sentry.io/organizations/open-contracting-partnership/issues/2219638425/?referrer=github_integration)
```
OSError: [Errno 24] Too many open files: 'data/me…
-
Hi.
I'm trying to deploy my project to a scrapyd service.
When I deploy to the local API everything is OK, but for the remote one I get 0 spiders. The project is still created successfully, though.
> scrapyd-…
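A quick way to see what the remote scrapyd actually registered is to query `listprojects.json` and `listspiders.json` directly; if the spider list comes back empty on the remote host only, the server usually failed to import the spiders there (for example because a dependency is missing). A sketch with a placeholder host:

```python
# List the projects the remote scrapyd knows about and the spiders it can
# load for each one. The host/port are placeholders.
import requests

SCRAPYD = "http://remote-host:6800"  # your remote scrapyd URL

projects = requests.get(f"{SCRAPYD}/listprojects.json", timeout=10).json()
print("projects:", projects.get("projects", []))

for project in projects.get("projects", []):
    spiders = requests.get(f"{SCRAPYD}/listspiders.json",
                           params={"project": project}, timeout=10).json()
    print(project, "->", spiders)
```

The remote scrapyd log should show the underlying import error if that is what is happening.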
-
Are there plans to support all of scrapyd's API endpoints with complementary CLI commands?
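In the meantime, a thin wrapper is easy to sketch; the command names and defaults below are made up, covering just `daemonstatus.json` and `schedule.json` as examples of what such CLI commands could look like:

```python
# Rough sketch of a CLI wrapper around two scrapyd endpoints.
import argparse
import requests

def main():
    parser = argparse.ArgumentParser(prog="scrapyd-cli")
    parser.add_argument("--url", default="http://localhost:6800")
    sub = parser.add_subparsers(dest="command", required=True)

    sub.add_parser("status")                # wraps daemonstatus.json
    schedule = sub.add_parser("schedule")   # wraps schedule.json
    schedule.add_argument("project")
    schedule.add_argument("spider")

    args = parser.parse_args()
    if args.command == "status":
        resp = requests.get(f"{args.url}/daemonstatus.json", timeout=10)
    else:
        resp = requests.post(f"{args.url}/schedule.json",
                             data={"project": args.project, "spider": args.spider},
                             timeout=10)
    print(resp.json())

if __name__ == "__main__":
    main()
```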
-
When I try to override a simple setting (for example IMAGE_STORE), everything works fine, but when I try to pass an 'ITEM_PIPELINES' setting it throws an error.
```
2016-12-13T12:30:57+0200…
-
It always happens: when I start the scrapyd service, after some time, if I make a request through the API it raises requests.exceptions.ReadTimeout. However, if I restart the service or just pr…
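While the root cause is being tracked down, it helps to make the failure visible with an explicit timeout and a couple of retries around a cheap endpoint such as `daemonstatus.json`; the URL and retry numbers below are arbitrary:

```python
# Probe scrapyd with an explicit timeout and a few retries so a stuck
# service shows up clearly instead of hanging. Values are placeholders.
import time
import requests

SCRAPYD = "http://localhost:6800"

def daemon_status(retries=3, timeout=10):
    for attempt in range(1, retries + 1):
        try:
            return requests.get(f"{SCRAPYD}/daemonstatus.json", timeout=timeout).json()
        except requests.exceptions.ReadTimeout:
            print(f"attempt {attempt}: no answer within {timeout}s")
            time.sleep(2 * attempt)
    raise RuntimeError("scrapyd is not responding; it may need a restart")

print(daemon_status())
```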
-
Thanks for building this great library! I'm using Scrapyd to deploy a host of scrapers that crawl partner websites each day to email out links to their content.
This question has been asked a coupl…