-
**Describe the bug**
If the Scrapyd server(s) are running on a remote host (on the same VPN) and ScrapydWeb is running on a separate node, then the links to Logs and Items become broken by design.
…
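For context, Scrapyd itself serves job logs and items over HTTP from its own host, so the links only work when they point at the Scrapyd node's address rather than the ScrapydWeb node's. A minimal sketch of correctly targeted links (helper names are hypothetical, not ScrapydWeb's actual code; the `/items` tree is only served when Scrapyd's `items_dir` is configured):

```python
# Hypothetical helpers: build Logs/Items links that point at the remote
# Scrapyd node instead of the ScrapydWeb host. Illustrative only.

def scrapyd_log_url(host: str, port: int, project: str, spider: str, job: str) -> str:
    """URL of a job's log file as served by Scrapyd's built-in /logs tree."""
    return f"http://{host}:{port}/logs/{project}/{spider}/{job}.log"

def scrapyd_items_url(host: str, port: int, project: str, spider: str, job: str) -> str:
    """URL of a job's scraped items as served by Scrapyd's /items tree."""
    return f"http://{host}:{port}/items/{project}/{spider}/{job}.jl"
```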
-
linux:HTTPConnectionPool(host='192.168.0.24', port=6801): Max retries exceeded with url: /listprojects.json (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connectio…
-
**Problem**
When someone uses the `Cancel` button(s) available in the `/jobs` page:
![image](https://user-images.githubusercontent.com/42189572/159861556-737921d5-aa31-46fe-8f6e-d5c5ff3e5de4.png…
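For reference, that button maps onto Scrapyd's documented `cancel.json` endpoint (a POST with `project` and `job` form fields). A stdlib-only sketch of issuing the same cancellation directly (helper names are my own):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

def build_cancel_request(base_url: str, project: str, job: str):
    """Return (url, form-encoded body) for Scrapyd's cancel.json endpoint."""
    url = f"{base_url.rstrip('/')}/cancel.json"
    body = urlencode({"project": project, "job": job}).encode()
    return url, body

def cancel_job(base_url: str, project: str, job: str, timeout: float = 10.0) -> bytes:
    """POST the cancellation to Scrapyd and return the raw JSON response."""
    url, body = build_cancel_request(base_url, project, job)
    with urlopen(url, data=body, timeout=timeout) as resp:  # data= makes this a POST
        return resp.read()
```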
-
1. Add scrapyd hosts directly in the web management interface, including scrapyd hosts served over https
2. Add Chinese language support
3. Make the timeout and retry count configurable for each kind of request
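Point 3 could be a small wrapper applied to every outgoing request. A stdlib-only sketch, with hypothetical names and a simple exponential backoff, of what configurable timeout/retry behaviour might look like:

```python
import time
from urllib.error import URLError
from urllib.request import urlopen

def fetch_with_retries(url, timeout=5.0, retries=3, backoff=0.5, _open=urlopen):
    """GET `url`, retrying on connection errors up to `retries` extra times.

    `_open` is injectable so the retry logic can be tested without a network.
    """
    last_err = None
    for attempt in range(retries + 1):
        try:
            with _open(url, timeout=timeout) as resp:
                return resp.read()
        except (URLError, OSError) as err:
            last_err = err
            time.sleep(backoff * (2 ** attempt))  # exponential backoff between tries
    raise last_err
```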
-
Hi, first, thanks for building SpiderKeeper - it's really easy to use.
We have some scrapers that utilize SK and we want to automate our deployments with a continuous deployment script. The only ma…
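For a CD script, the deploy step amounts to building the project egg and uploading it to SpiderKeeper over HTTP. The sketch below only builds the multipart body; the upload path (`/project/<id>/spider/upload`) and the `file` field name are assumptions taken from the web UI's deploy form, so verify them against your SpiderKeeper version:

```python
# Assumption: SpiderKeeper accepts the egg as a multipart/form-data POST,
# as its web UI's deploy form does. Endpoint and field name are unverified.
import mimetypes
import uuid

def build_multipart(field_name: str, filename: str, payload: bytes):
    """Return (content_type, body) for a single-file multipart/form-data upload."""
    boundary = uuid.uuid4().hex
    ctype = mimetypes.guess_type(filename)[0] or "application/octet-stream"
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        f"Content-Type: {ctype}\r\n\r\n"
    ).encode() + payload + f"\r\n--{boundary}--\r\n".encode()
    return f"multipart/form-data; boundary={boundary}", body
```

A CD job would then `python setup.py bdist_egg`, read the egg bytes, and POST the body (with the returned `Content-Type` header) to the assumed upload URL.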
-
Feel free to discuss anything here; I will reply promptly.
You are also welcome to join the QQ group for discussion: 389688974
-
When attempting to run a spider, a notification pops up telling me to notify the dev team of an unexpected error.
Console:
[28/Jan/2018 23:20:25] "PATCH /api/projects/MayWes/spiders/www.maywes.com HTTP/1…
-
When preparing a crawl with either Words, Titles or Authors the server returns the following error:
```
--- ---
File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 134, in mayb…
-
We need a small stand-alone web UI that ties in with the REST components in #24 to visualize the data generated by the cluster. You should also be able to submit API requests to the cluster.
Preferab…
-
## 廖祥森
#### Work this week
- xposed: cleaned up the SD card following the strategy we discussed earlier; it works, and the cleanup is quite fast. The earlier slowness was probably due largely to that account having too much data. Already deployed on the machine.
- Coursework: finished the paper for Introduction to Software Engineering Research, the first assignment for Natural Language Processing, and the reading report for Distributed Algorithms.
#### Work next week
- Not sure yet whether development on the 龚亿通信 side needs to start; in any case, look into sinosteel first and consider…