-
On heroku.com, I set myscrapydweb's deployment method to "GitHub" (I forked https://github.com/my8100/scrapydweb to https://github.com/fightingpjh/scrapyWeb) and connected the app to GitHub (https://githu…
-
# Why does my Heroku app 302-redirect to http when I access it over https? Is this caused by Heroku or by scrapydweb?
![捕获](https://user-images.githubusercontent.com/49055524/56635306-ba1bf500-6697-11e9-9bbc-475e14a45459.PNG)
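For context (an assumption about the cause, not a confirmed diagnosis): Heroku terminates TLS at its router and forwards the request to the dyno over plain HTTP, reporting the original scheme in the `X-Forwarded-Proto` header. A WSGI app that only looks at `wsgi.url_scheme` therefore sees every request as `http` and may redirect accordingly. The hypothetical helper below sketches the check a proxy-aware app performs:

```python
def effective_scheme(environ):
    """Return the scheme the client actually used.

    Behind Heroku's router the app receives plain HTTP, so
    wsgi.url_scheme is always 'http'; the original scheme is
    carried in the X-Forwarded-Proto header instead.
    """
    return environ.get("HTTP_X_FORWARDED_PROTO",
                       environ.get("wsgi.url_scheme", "http"))

# What an https request looks like to the app behind Heroku's router:
environ = {"wsgi.url_scheme": "http", "HTTP_X_FORWARDED_PROTO": "https"}
print(effective_scheme(environ))  # https
```

If the app ignores this header, any redirect it builds from the request URL will use `http://`, which matches the 302 behaviour in the screenshot.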
-
If I click the Stop button to suspend a spider, the spider is suspended on the server, but the button does not change to "Start"; it always shows "Stop", as the pictures below show:
!…
-
The barebones web interface of scrapyd can display compressed (eg, .gz) log and item files, while scrapydweb seems not to.
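As a stopgap until such support exists, a compressed log can be read locally with the standard-library `gzip` module; `read_log` below is a hypothetical helper, not part of scrapydweb:

```python
import gzip

def read_log(path):
    """Read a Scrapyd log or items file, transparently
    decompressing it when the filename ends in .gz."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt", encoding="utf-8") as f:
        return f.read()

# Round-trip demo: write a compressed log line, then read it back.
with gzip.open("demo.log.gz", "wt", encoding="utf-8") as f:
    f.write("2019-04-24 12:00:00 [scrapy.core.engine] INFO: Spider opened\n")
print(read_log("demo.log.gz"))
```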
-
Hello sir.
I have two servers: ScrapydWeb and Scrapyd are running on 166, while 110 only has Scrapyd.
But on 110, Scrapyd always stops after running for some time. Why? Thanks, sir.
![image](https://user-…
-
I have a crawl running; 87000 seconds after the last refresh, the following error occurs when trying to refresh:
`Traceback (most recent call last):
File "/root/anaconda3/envs/scrapyd_e…
-
Not a bug, just a question. I want to run a plain Python script as a job; it seems like this should be possible (you can do this on scrapinghub).
Thanks for the great package!
-
I have many spiders in one project. When I modify the common functions, I have to update the timer tasks one by one so they use the latest version. It would be very helpful if timer tasks could use the latest versi…
-
I added the following code in my Spider class to be able to pass the URL as an argument:
```python
def __init__(self, *args, **kwargs):
    super(MySpider, self).__init__(*args, **kwargs)
    self.sta…
```
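The snippet above follows Scrapy's usual pattern of forwarding scheduler-supplied spider arguments through `__init__`. A dependency-free sketch of that pattern, where `BaseSpider` stands in for `scrapy.Spider` and the `url` argument name is just an example:

```python
class BaseSpider:
    """Stand-in for scrapy.Spider, so the sketch runs without Scrapy."""
    def __init__(self, *args, **kwargs):
        pass

class MySpider(BaseSpider):
    def __init__(self, *args, **kwargs):
        # Scrapyd passes spider arguments as keyword arguments; pop
        # our custom one before forwarding the rest to the base class.
        self.start_url = kwargs.pop("url", None)
        super(MySpider, self).__init__(*args, **kwargs)

spider = MySpider(url="https://example.com")
print(spider.start_url)  # https://example.com
```

Popping the custom keyword before calling `super()` keeps the base class from seeing an argument it does not expect.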