This affects the JSON /search/:search_term endpoint.
Right now we crawl the first page of results from each crawler in parallel.
But if one crawler returns 50 torrents, we crawl those detail pages sequentially. What needs to happen: grab those 50 URLs from search page 1, chunk them, and async stream/process them 10 at a time.
I'll figure this out tomorrow.
The end result should be a much faster on-demand search. We need this; otherwise the product sucks.
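A minimal sketch of the chunked concurrent crawl, assuming a Node/TypeScript stack. `crawlTorrentPage` is a hypothetical stand-in for the real per-torrent fetch-and-parse; the chunk size of 10 matches the plan above.

```typescript
// Split a list into fixed-size chunks.
function chunk<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Hypothetical placeholder for the real HTTP fetch + parse of one torrent page.
async function crawlTorrentPage(url: string): Promise<string> {
  return `parsed:${url}`;
}

// Crawl all URLs: each chunk's requests run in parallel via Promise.all,
// while chunks themselves run one after another, capping concurrency.
async function crawlAll(urls: string[], concurrency = 10): Promise<string[]> {
  const results: string[] = [];
  for (const group of chunk(urls, concurrency)) {
    const parsed = await Promise.all(group.map(crawlTorrentPage));
    results.push(...parsed);
  }
  return results;
}
```

With 50 URLs and a concurrency of 10, this issues 5 sequential waves of 10 parallel requests each, instead of 50 sequential requests.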