Is your feature request related to a problem? Please describe.
Services often rate-limit requests, so if RSS-Bridge makes too many requests too quickly, all subsequent requests from that bridge are dropped until the rate limit lifts. The CACHE_TIMEOUT option partially mitigates this by specifying how long a feed keeps previously fetched results before making a new request, but it does nothing to prevent individual feeds of the same bridge from making requests in quick succession.
Describe the solution you'd like
There should be an option available for bridges that specifies the minimum amount of time between requests from any feed. This can be tuned to the rate limit of any service.
It would work as follows: when a request goes out (i.e., the respective feed's cache timeout has expired), a separate timer is started that must expire before a second outgoing request can be made. If a second outgoing request arrives before the timer expires, it waits in a queue until the timer runs out before being sent. If many requests arrive at once, they are buffered and sent out one by one as the timer cycles.
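As a rough illustration (in Python rather than RSS-Bridge's PHP, and with hypothetical names — `MinIntervalLimiter`, `fetch_feed`, and the 2-second interval are all assumptions, not anything from the RSS-Bridge codebase), the timer-and-queue behavior described above could be sketched like this:

```python
import threading
import time


class MinIntervalLimiter:
    """Enforce a minimum delay between outgoing requests from one bridge.

    Each caller reserves the next available time slot; callers that
    arrive before their slot simply sleep until it opens, so bursts of
    requests are spread out one interval apart.
    """

    def __init__(self, min_interval_seconds):
        self.min_interval = min_interval_seconds
        self.lock = threading.Lock()
        self.next_allowed = 0.0  # monotonic timestamp of the next free slot

    def acquire(self):
        """Block until the minimum interval since the last request has passed."""
        with self.lock:
            now = time.monotonic()
            wait = self.next_allowed - now
            # Reserve the next slot; later callers queue up behind it.
            self.next_allowed = max(self.next_allowed, now) + self.min_interval
        if wait > 0:
            time.sleep(wait)


# Hypothetical usage: one limiter shared by all feeds of a bridge,
# tuned to the service's rate limit (2 seconds is an arbitrary example).
limiter = MinIntervalLimiter(min_interval_seconds=2.0)


def fetch_feed(url):
    limiter.acquire()  # waits here if another request just went out
    # ... perform the actual HTTP request for this feed here ...
```

The cache-timeout check would stay as it is today; the limiter only gates the requests that actually go out to the service.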
Describe alternatives you've considered
None.
Additional context
I've encountered this, for example, with the Spotify bridge. If one has many artists to be fetched, they will start to be constantly rate limited.