Hello, I'm using scrapyrt to provide an HTTP interface to a big Scrapy project. I'm running a single scrapyrt instance in a Docker container; some spiders require ~60-120 seconds to complete, and I've noticed that requests are handled sequentially, causing substantial delays.
Is that the expected behavior? I know scrapyrt is not suitable for long-running spiders, but I'm wondering if there exists a quick fix, for example running multiple workers/threads. Asking here because I'm not really familiar with the Twisted framework.
Another solution would be running multiple scrapyrt instances behind a load balancer, but I'd rather not go down that path.
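For reference, the load-balancer setup I'm trying to avoid would look roughly like this (a minimal nginx sketch; the container names, ports, and timeout value are assumptions, not something I've deployed):

```nginx
# Sketch: round-robin across several scrapyrt containers.
upstream scrapyrt_pool {
    server scrapyrt-1:9080;  # hypothetical container names and ports
    server scrapyrt-2:9080;
}

server {
    listen 80;

    location / {
        proxy_pass http://scrapyrt_pool;
        # Spiders can take 60-120s, so raise nginx's default 60s read timeout.
        proxy_read_timeout 180s;
    }
}
```

Each instance still handles its requests one at a time, so this only helps up to the number of instances, which is why I'd prefer a fix inside scrapyrt itself.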