I'm currently working on a VS Code extension that had Bottleneck implemented before I picked it up, in order to prevent many concurrent commands from overloading the server. The job it runs uses child_process.exec() to execute a shell command.
However, I'm finding that Bottleneck is introducing a large delay to jobs compared with running the function directly - on the order of 300-800ms. The queue is empty when the first call to schedule takes place, and has been for some time.
Is this expected? Is there something wrong with the way the API is being used by the extension?
I have tried various Bottleneck settings. My assumption is that when initialised with the default settings, i.e. unlimited maxConcurrent and zero minTime, Bottleneck should be doing virtually nothing and just executing jobs immediately.
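To make that assumption concrete: with no concurrency cap and zero minTime, I'd expect a limiter to reduce to little more than a pass-through. This sketch is not Bottleneck's implementation - it just shows what "virtually nothing" would mean, i.e. the only added latency should be a promise tick:

```javascript
// Minimal sketch of what an "unlimited" limiter should reduce to:
// with no maxConcurrent cap and minTime = 0, schedule() should run
// the job immediately, adding only a microtask of latency.
// This is NOT Bottleneck's code, just the expected behaviour.
function passThroughSchedule(job) {
  return Promise.resolve().then(job);
}

const start = Date.now();
passThroughSchedule(() => "done").then((result) => {
  const elapsed = Date.now() - start;
  console.log(result, elapsed < 50 ? "fast" : "slow"); // prints "done fast"
});
```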
I have pasted the log output for the full lifetime of a single job, which was started with an empty queue. I have added console.time calls for each stage of the life cycle, all starting from the point that limiter.schedule is called. You can see job #19 just starting towards the end, but otherwise nothing else interfering with it.
I have times for three stages:

- preExec - the time between calling limiter.schedule and the start of the action function
- inExec - the time between starting the actual function call and the callback from the async function it uses (i.e. when the done callback is called by my function)
- the plain id by itself - the total time from calling to completion
(Note that there is a weird issue with the way vscode logs times so you get duplicates in there, but I don't think it is significant here)
If I call my job function directly, or call it with setTimeout(..., 0) instead of using limiter.schedule, the overhead is about 1-2ms instead of several hundred ms. If I use setTimeout(..., 100) instead of the limiter, I get the callback with very little overhead beyond that 100ms, so I don't think there's anything else in the event queue taking up the time.
If there's nothing obvious I will probably try an experiment with a barebones project to see if something else in the codebase could be having an effect.
If it's relevant, the extension is bundled with webpack (though I don't think it was any faster before webpack) and the OS is Windows 10 (I don't think it's any faster on Linux).