Closed cwolters closed 3 years ago
This is covered in the README, which recommends placing a queue in front of the token bucket if you have multiple competing sources or different request sizes. I'm tempted to add an internal queue, but it would increase the scope of this library quite a bit by having to consider max queue length, how to handle queue overruns, etc.
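A minimal sketch of that recommendation, assuming nothing about this library's API (the `TokenBucket`, `QueuedBucket`, `tick`, and `tryRemove` names below are invented for illustration, and the bucket uses simulated time for determinism): a FIFO queue in front of the bucket serves requests strictly in arrival order, so a large request at the head is allowed to wait for tokens to accumulate instead of being overtaken by smaller ones.

```javascript
// Hypothetical simulated-time token bucket (not this library's API).
class TokenBucket {
  constructor(capacity, refillPerTick) {
    this.capacity = capacity;
    this.refillPerTick = refillPerTick;
    this.tokens = 0; // start empty so the refill behaviour is visible
  }
  tick() { // advance simulated time by one refill interval
    this.tokens = Math.min(this.capacity, this.tokens + this.refillPerTick);
  }
  tryRemove(count) {
    if (count > this.tokens) return false;
    this.tokens -= count;
    return true;
  }
}

// FIFO queue in front of the bucket: only the head may consume tokens,
// so later requests cannot drain the refill out from under it.
class QueuedBucket {
  constructor(bucket) {
    this.bucket = bucket;
    this.queue = [];
  }
  request(count, callback) {
    this.queue.push({ count, callback });
  }
  tick() {
    this.bucket.tick();
    while (this.queue.length > 0 && this.bucket.tryRemove(this.queue[0].count)) {
      this.queue.shift().callback();
    }
  }
}

const qb = new QueuedBucket(new TokenBucket(10, 2)); // refills 2 tokens/tick
const served = [];
qb.request(8, () => served.push('big'));   // arrives first, needs 8 tokens
qb.request(1, () => served.push('small')); // arrives second, needs only 1
for (let i = 0; i < 5; i++) qb.tick();
// served is ['big', 'small']: the small request waits its turn instead of
// eating each refill and starving the big one.
```

This keeps the queue policy (length limits, overrun handling) out of the bucket itself, which is the scope concern raised above.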
Hi all.
I had a few problems with the TokenBucket and ended up with a fix for the waiting-time calculation. Most annoying was that the TokenBucket did not let calls with different token-size requests pass.
The problem: if different operations work on the same TokenBucket with different numbers of tokens, the operations requesting the higher number will be rescheduled indefinitely.
orig:
What we can see here is that overlapping intervals are not scheduled one after the other, and the faster timeouts always eat the smaller token requests from the internal counter first. Under full load, self.removeTokens(count, callback) will never accumulate the token count required for the larger requests.
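To illustrate the starvation: below is a simplified simulated-time model, not the library's actual scheduling (the `TokenBucket` class and all names are invented for this sketch). Greedy 1-token consumers fire every interval and drain each refill before a pending 8-token request ever sees enough tokens, so the large request retries forever.

```javascript
// Hypothetical simulated-time bucket: capacity 10, refills 2 tokens per tick.
class TokenBucket {
  constructor(capacity, refillPerTick) {
    this.capacity = capacity;
    this.refillPerTick = refillPerTick;
    this.tokens = 0;
  }
  tick() { // advance simulated time by one refill interval
    this.tokens = Math.min(this.capacity, this.tokens + this.refillPerTick);
  }
  tryRemove(count) {
    if (count > this.tokens) return false;
    this.tokens -= count;
    return true;
  }
}

const bucket = new TokenBucket(10, 2);
let bigServed = false;
let smallServed = 0;

for (let t = 0; t < 100; t++) {
  bucket.tick();
  // The fast 1-token consumers run first every interval and eat the refill...
  while (bucket.tryRemove(1)) smallServed++;
  // ...so the 8-token request, retrying after them, never finds enough tokens.
  if (!bigServed && bucket.tryRemove(8)) bigServed = true;
}
// After 100 ticks: smallServed === 200, bigServed === false.
```

The counter never climbs past the refill amount because the small requests consume it immediately, which is exactly the infinite-reschedule behaviour described above.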
I added a third result value to the callback function (scheduledTokens) to give the user feedback about how many tokens are already scheduled, so that we can react if the number increases.
I ended up with this calculation:
It may break the timing of the tests (on my machine they pass).