Closed: Menduist closed this 2 years ago
With the current implementation of timers inside chronos,
this implementation becomes very inefficient, because it is going to create a lot of timers while under pressure. So it becomes not a solution for effective rate limiting, but an issue in itself.
Ok, for reference, a similar implementation is already used in Nimbus: https://github.com/status-im/nimbus-eth2/blob/f124f22f108f64416f890297560cae8e6c136d08/beacon_chain/networking/eth2_network.nim#L428
I'll switch here to a queue of waiting futures, which will allow a single timer per bucket (and allow replenish to work properly).
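Roughly the shape I have in mind, as a minimal sketch only: one refill timer per bucket, draining a FIFO of waiting futures. All names (`pendingRequests`, `refillLoop`, `drainPending`, ...) are illustrative, not the final code.

```nim
import std/deques
import chronos

type
  PendingRequest = object
    fut: Future[void]   # completed once enough tokens are available
    tokens: int         # tokens this waiter asked for

  TokenBucket = ref object
    budget: int                 # tokens currently available
    budgetCap: int              # maximum tokens the bucket can hold
    fillDuration: Duration      # how often the bucket is refilled
    pendingRequests: Deque[PendingRequest]
    refillRunning: bool         # true while the single refill timer is armed

proc new(T: type TokenBucket, budget: int, fillDuration = 1.seconds): TokenBucket =
  TokenBucket(budget: budget, budgetCap: budget, fillDuration: fillDuration,
              pendingRequests: initDeque[PendingRequest]())

proc drainPending(bucket: TokenBucket) =
  # Serve waiters in FIFO order for as long as tokens are available
  while bucket.pendingRequests.len > 0 and
        bucket.pendingRequests[0].tokens <= bucket.budget:
    let req = bucket.pendingRequests.popFirst()
    bucket.budget -= req.tokens
    req.fut.complete()

proc refillLoop(bucket: TokenBucket) {.async.} =
  # Single timer per bucket: sleep, refill, wake waiters, repeat while needed
  while bucket.pendingRequests.len > 0:
    await sleepAsync(bucket.fillDuration)
    bucket.budget = min(bucket.budget + bucket.budgetCap, bucket.budgetCap)
    bucket.drainPending()
  bucket.refillRunning = false

proc consume(bucket: TokenBucket, tokens: int): Future[void] =
  result = newFuture[void]("tokenBucket.consume")
  if bucket.pendingRequests.len == 0 and tokens <= bucket.budget:
    # Fast path: enough tokens and nobody waiting, no timer needed
    bucket.budget -= tokens
    result.complete()
  else:
    # Queue the waiter and make sure exactly one refill timer is running
    bucket.pendingRequests.addLast(PendingRequest(fut: result, tokens: tokens))
    if not bucket.refillRunning:
      bucket.refillRunning = true
      asyncSpawn bucket.refillLoop()
```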
It's possible, but in chronos we are trying to make a generic solution, which can be used not only for request rate limiting, but also for network traffic shaping.
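For example, the same bucket could shape outgoing traffic by charging one token per byte before each write. A sketch only: `shapedSend` is an illustrative helper, and it assumes the TokenBucket API from this PR (the `chronos/ratelimit` module path is also an assumption):

```nim
import chronos
import chronos/ratelimit  # module added by this PR (path assumed)

proc shapedSend(bucket: TokenBucket, transp: StreamTransport,
                data: seq[byte]) {.async.} =
  # Wait until `data.len` tokens (bytes) are available, then write
  await bucket.consume(data.len)
  discard await transp.write(data)

# A bucket with a 1_000_000-byte budget per second caps throughput at ~1 MB/s
let bandwidth = TokenBucket.new(1_000_000, 1.seconds)
```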
Pushed new version.
Should we cap the number of pendingRequests, and throw an exception on consume if that's too big?
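Building on the sketch earlier in the thread, the guard could look something like this (`maxPending`, `consumeCapped` and `TooManyConsumersError` are hypothetical names, not anything in the PR):

```nim
type
  TooManyConsumersError = object of CatchableError

const maxPending = 10_000  # hypothetical cap on queued waiters

proc consumeCapped(bucket: TokenBucket, tokens: int): Future[void] =
  # Fail fast instead of queueing an unbounded number of futures
  if bucket.pendingRequests.len >= maxPending:
    raise newException(TooManyConsumersError, "token bucket backlog is full")
  bucket.consume(tokens)
```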
Did a simple benchmark to see if we have any low-hanging fruit:
```nim
import std/random
import chronos
import chronos/ratelimit  # TokenBucket lives in the module added by this PR

proc zzz {.async.} =
  # Each task gets its own bucket and consumes a random amount in a loop
  let tokenBucket = TokenBucket.new(1000)
  while true:
    let toConsume = rand(1000)
    await tokenBucket.consume(toConsume)

for _ in 0..10000:
  asyncSpawn zzz()
waitFor(sleepAsync(20.seconds))
```
callgrind:
The vast majority of the time is in the GC, with sleepAsync
(and thus the timer sorting) taking only 1.27% of the time. That seems really low (i.e. the GC seems damn slow), maybe I did something wrong.
A regular token bucket
I added a manual replenish, as @arnetheduck suggested.
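For reference, manual replenish makes feedback-driven shaping possible. A minimal sketch, assuming the `replenish(bucket, tokens)` call added here and the `chronos/ratelimit` module path:

```nim
import chronos
import chronos/ratelimit  # module added by this PR (path assumed)

proc worker(bucket: TokenBucket) {.async.} =
  while true:
    # Wait for permission to perform one unit of work
    await bucket.consume(1)
    # ... perform the unit of work here ...

proc onAck(bucket: TokenBucket, units: int) =
  # Hand tokens back only when the remote side acknowledges `units` of work,
  # so throughput follows the consumer instead of a fixed timer
  bucket.replenish(units)
```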