Hi!

I'm trying to rate limit calls to an API - for example, kick off one request every second. Using this library works fine until I add in retries: the retry ignores the reservoir and minTime options, meaning I could potentially hit the API more than once per second. I understand that retries don't go through the minTime option, which is why I'm returning the same minTime delay from the failed listener.
Consider the following, very oversimplified, code:
// Silence unhandled rejection warnings from schedule() promises whose retries also fail.
process.on('unhandledRejection', () => {});

const Bottleneck = require('bottleneck');

const bottleneck = new Bottleneck({
  minTime: 1000,
  reservoir: 1,
  reservoirRefreshAmount: 1,
  reservoirRefreshInterval: 1000
});

// Retry each failed job once, 1000 ms after it fails.
bottleneck.on('failed', (error, jobInfo) => {
  if (jobInfo.retryCount === 0) { return 1000; }
});

// Every even-numbered job throws, so it fails on every attempt.
const job = async i => {
  console.log('running', i, new Date());
  if (i % 2 === 0) { throw new Error('bang'); }
};

for (let i = 0; i < 10; i++) { bottleneck.schedule(() => job(i)); }
I would expect the retry to happen one second after the job first failed, and the next job in the queue to start one second after that. Instead, I'm seeing the retry and the next job run at around the same time.
I understand from the docs that if I were to use maxConcurrent, the next job wouldn't be kicked off while too many are still running, but in my case I don't care how many are actually running at once, only that they are kicked off one second apart.
Is this expected behaviour? If so, is there any way of making the library work the way I need it to?
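The only workaround I can think of so far is to bypass the built-in retry and re-submit failed jobs through schedule() myself, so that every attempt (including retries) goes back through the reservoir/minTime gate. A rough sketch of what I mean - scheduleWithRetry is just a helper of my own, not part of Bottleneck, and it assumes the failed listener above no longer returns a delay:

const scheduleWithRetry = (fn, retriesLeft = 1) =>
  bottleneck.schedule(fn).catch(error => {
    if (retriesLeft > 0) {
      // Re-queue the retry as a brand new job so it is rate limited too.
      return scheduleWithRetry(fn, retriesLeft - 1);
    }
    throw error;
  });

for (let i = 0; i < 10; i++) { scheduleWithRetry(() => job(i)); }

That would presumably space the retries out correctly, but it means giving up Bottleneck's built-in retry handling, so I'd rather not do it if the library can already behave this way.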
Thanks!