While spawning a few new requests every so often is rather lightweight, spawning loads of them in a constant loop makes this a rather heavy task.
Spawning loads of requests all the time also increases the chances of your ISP rate limiting you, since that traffic is likely to be flagged as suspicious.
I'd like to suggest implementing a simple system that adds a delay between spawning new requests once X requests in a row have failed for a given site.
For example:
we have two sites: site1 and site2
the rate limit threshold (X) is set to, let's say, 15 requests
the rate limit timeout (Y) is set to 5 seconds
we spawn 50 requests: 25 against site1, 25 against site2
requests for site1 succeed and we keep spawning new ones
the first 10 requests for site2 fail, but the 11th succeeds, so we continue spawning new ones
then requests for site2 start failing again, and we reach X (15) consecutive failures
we mark site2 as rate limited
we cancel all in-flight requests for site2
from then on, only one request is made every Y (5) seconds, and site2 stays marked as rate limited until one succeeds
This would let a single client put more load on other, still-reachable sites, and remove the unnecessary load it spends on unreachable ones.
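The logic above could be sketched roughly like this (a minimal, illustrative Python sketch, not the project's actual code; the class and parameter names X/fail_threshold and Y/probe_interval are made up for this example):

```python
import time

class SiteRateLimiter:
    """Per-site consecutive-failure tracker with a probe throttle.

    After `fail_threshold` (X) failures in a row, the site is marked
    rate limited and only one probe request is allowed every
    `probe_interval` (Y) seconds until a request succeeds again.
    """

    def __init__(self, fail_threshold=15, probe_interval=5.0):
        self.fail_threshold = fail_threshold
        self.probe_interval = probe_interval
        self.consecutive_fails = {}  # site -> failures in a row
        self.limited_since = {}      # site -> time of last allowed probe

    def allow_request(self, site, now=None):
        """Return True if a new request to `site` may be spawned."""
        now = time.monotonic() if now is None else now
        if site not in self.limited_since:
            return True  # not rate limited: spawn freely
        # Rate limited: allow only one probe per probe_interval seconds.
        if now - self.limited_since[site] >= self.probe_interval:
            self.limited_since[site] = now
            return True
        return False

    def record_result(self, site, ok):
        """Update counters after a request to `site` finishes."""
        if ok:
            # A single success clears the rate-limited state entirely.
            self.consecutive_fails[site] = 0
            self.limited_since.pop(site, None)
        else:
            fails = self.consecutive_fails.get(site, 0) + 1
            self.consecutive_fails[site] = fails
            if fails >= self.fail_threshold and site not in self.limited_since:
                # Mark as rate limited; the caller would also cancel
                # any in-flight requests for this site here.
                self.limited_since[site] = time.monotonic()
```

The request loop would call `allow_request` before spawning and `record_result` after each response; a success resets the failure counter, so mixed results (like the 10-fails-then-success case above) never trip the threshold.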