Closed. KhoaTheBest closed this issue 7 years ago.
@dangra @eliasdorneles @kmike @breno @zvin
I believe the question is more for @chekunkov and @pawelmhm
Have you tried using a Docker container, @kis25791? Is Docker a possible option for you? There is a Docker image here: https://github.com/scrapinghub/scrapyrt/blob/master/Dockerfile. A comment in the Dockerfile describes how to build the image and run it as a daemon.
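The build-and-run flow described in that Dockerfile comment can be sketched roughly as follows. The image tag, the mount point inside the container, and the project path are my assumptions, not from the repository; scrapyrt's default port is 9080:

```shell
# Build the image from the scrapyrt repository root (tag name is an assumption)
docker build -t scrapyrt .

# Run it detached as a daemon, mounting a Scrapy project and publishing
# the default scrapyrt port (mount target and project path are placeholders)
docker run -d --name scrapyrt \
    -v /path/to/my_scrapy_project:/scrapyrt/project \
    -p 9080:9080 \
    scrapyrt

# Inspect the container's log output to confirm it started
docker logs scrapyrt
```

Running the container with `-d` gives you daemon-like behavior for free, and `docker logs` replaces the need to redirect output yourself.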
Thanks @pawelmhm and @redapple, I will try Docker. One more question: I ran scrapyrt on Ubuntu 16.04 with the command `scrapyrt -S ***.settings -p 0.0.0.0 &`, so it runs in the background, but after some time the scrapyrt process was killed. Do you have any idea what causes this? Should I configure clusters for the spider?
> it runs in the background, but after some time the scrapyrt process was killed. Do you have any idea what causes this? Should I configure clusters for the spider?
@kis25791, can you try logging scrapyrt output to a logfile?
> nohup scrapyrt -S ***.settings -p 0.0.0.0 &> log_file.log &
Post the logfile here; it should tell us what's wrong.
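If the background process keeps dying, a process supervisor is usually a more robust way to run scrapyrt as a daemon than `nohup`, since it restarts the service and captures its logs. A minimal systemd unit sketch (the paths, user, bind address, and port here are placeholders, not from this thread):

```ini
# /etc/systemd/system/scrapyrt.service -- all paths and the user are assumptions
[Unit]
Description=ScrapyRT HTTP API for Scrapy spiders
After=network.target

[Service]
# Run from the Scrapy project directory so scrapyrt can find scrapy.cfg
WorkingDirectory=/path/to/project
ExecStart=/usr/local/bin/scrapyrt -i 0.0.0.0 -p 9080
Restart=on-failure
User=scrapy

[Install]
WantedBy=multi-user.target
```

With this in place, `systemctl start scrapyrt` launches the daemon, `Restart=on-failure` revives it if it is killed, and `journalctl -u scrapyrt` shows the output you would otherwise redirect to a logfile.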
@pawelmhm Sorry for the very long delay in replying. That issue is resolved now. I have another problem: my app uses a background job with Sidekiq to send requests to Scrapyrt (about 20 concurrent requests), and my Scrapy code also uses scrapinghub/splash to crawl client-side rendered pages. But the items I get from the response are sometimes empty. Do you know why? Is it a Scrapyrt limitation on concurrent requests, or something in the Scrapy code with Splash?
Thanks,
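When items come back empty intermittently, a first step is to distinguish "Scrapyrt returned an error" from "the spider ran but yielded nothing" (the latter often points at Splash returning the page before client-side JavaScript finished rendering). A small sketch that classifies a Scrapyrt `/crawl.json` response body; the helper name and the sample payloads below are mine, though the `status`/`items`/`message` fields follow the Scrapyrt JSON API:

```python
import json


def classify_scrapyrt_response(raw: str) -> str:
    """Classify a Scrapyrt /crawl.json response body.

    Scrapyrt returns JSON with a "status" field and, on success,
    an "items" list; on error it includes a "message" field.
    """
    data = json.loads(raw)
    if data.get("status") != "ok":
        return "error: %s" % data.get("message", "unknown")
    items = data.get("items") or []
    if not items:
        # Status "ok" with empty items usually means the spider ran but
        # yielded nothing, e.g. Splash handed back the page before the
        # client-side JavaScript rendered the content being scraped.
        return "ok-but-empty"
    return "ok: %d item(s)" % len(items)


# Hand-written sample payloads, not real Scrapyrt output
print(classify_scrapyrt_response('{"status": "ok", "items": [{"title": "x"}]}'))
print(classify_scrapyrt_response('{"status": "ok", "items": []}'))
print(classify_scrapyrt_response('{"status": "error", "message": "timeout"}'))
```

Logging the classification per Sidekiq job would show whether the empty responses correlate with load (suggesting a concurrency limit) or with specific URLs (suggesting a Splash rendering/timeout issue).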
Hey @kis25791, I added docs for Docker (https://github.com/scrapinghub/scrapyrt/pull/64) and we added a Docker image to the Scrapinghub Docker repository, so I'll close the original issue here. If there is still some other problem, please create a new issue. Thanks!
Hi, how can I run scrapyrt as a daemon? Thanks!