SGrondin / bottleneck

Job scheduler and rate limiter, supports Clustering
MIT License

Bottleneck introduces ~0.5 second latency to jobs even when the queue is empty #144

Open mjcrouch opened 4 years ago

mjcrouch commented 4 years ago

I'm currently working on a VS Code extension that already had Bottleneck implemented (before I picked it up) in order to prevent many concurrent commands from overloading the server. The job it runs uses child_process.exec() to execute a shell command.
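For context, the scheduled job presumably looks something like this (a minimal sketch; the wrapper and command are illustrative, not the extension's actual code):

```ts
import { exec } from "child_process";
import { promisify } from "util";

const execAsync = promisify(exec);

// Illustrative job: run a shell command and resolve with its output.
// "someCommand" is a placeholder, not the extension's real command.
const job = () => execAsync("someCommand");
```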

However, I'm finding that Bottleneck introduces a large delay to jobs compared to running the function directly, on the order of 300–800 ms. The queue is empty when the first call to schedule takes place, and has been for some time.

Is this expected? Is there something wrong with the way the API is being used by the extension?

I have tried various Bottleneck settings. My assumption is that, when initialised with the default settings, i.e. unlimited maxConcurrent and zero minTime, Bottleneck should be doing virtually nothing and just executing jobs immediately.
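Concretely, the expectation is that a default-constructed limiter (a sketch, assuming the extension passes no options) adds no meaningful latency:

```ts
import Bottleneck from "bottleneck";

// Defaults: maxConcurrent is null (unlimited) and minTime is 0,
// so no job should ever have to wait on an empty queue.
const limiter = new Bottleneck();

// A trivial stand-in job; the real one wraps child_process.exec().
const job = async () => "done";

// Expectation: schedule() resolves with roughly the same latency as
// calling job() directly, since no limit can be hit.
limiter.schedule(job).then((result) => console.log(result));
```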

I have pasted the log output for the full lifetime of a single job that was started with an empty queue. I have added console.time calls for each stage of the life cycle, all of them starting from the point that limiter.schedule is called. You can see job #19 just starting toward the end, but otherwise nothing else is interfering with it.

This is the code used (look for `limiter`).

Additionally, I have times for the exec call itself (preExec and inExec below):

```
: received: 2.848876953125ms
before submit counts: Object RECEIVED 0 QUEUED 0 RUNNING 0 EXECUTING 0
Bottleneck Debug: Event triggered: received
: received: **3.392ms**
: queued: 12.35498046875ms
Bottleneck Debug: Event triggered: queued
: queued: **12.705ms**
Bottleneck Debug: Draining
: scheduled: 791.416015625ms
Bottleneck Debug: Drained
Bottleneck Debug: Event triggered: empty
Bottleneck Debug: Event triggered: scheduled
: scheduled: **791.681ms**
: executing: 794.2939453125ms
: preExec: 795.30712890625ms
Bottleneck Debug: Event triggered: executing
: executing: **794.672ms**
in exec counts: Object RECEIVED 0 QUEUED 0 RUNNING 0 EXECUTING 1
: preExec: **795.472ms**
: inExec: **46.279052734375ms**
18:login:-s: 842.366943359375ms
: received: 1.0830078125ms
: done: 846.258056640625ms
: inExec: 46.395ms
18:login:-s: **842.493ms**
```
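Roughly how the per-stage timers could be wired up against Bottleneck's job lifecycle events (an assumed sketch, not the extension's actual instrumentation; listening with once() only attributes events to the right job while a single job is in flight):

```ts
import Bottleneck from "bottleneck";

const limiter = new Bottleneck();

function scheduleWithTimers<T>(id: string, fn: () => Promise<T>): Promise<T> {
  // Bottleneck emits these events as a job moves through its lifecycle.
  const stages = ["received", "queued", "scheduled", "executing", "done"] as const;
  for (const stage of stages) {
    console.time(`${id}: ${stage}`);
    // once() fires on the next event of this type; with an empty queue
    // that is necessarily this job.
    limiter.once(stage, () => console.timeEnd(`${id}: ${stage}`));
  }
  return limiter.schedule({ id }, fn);
}
```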

(Note that there is a weird issue with the way vscode logs times, so you get duplicates in there, but I don't think it is significant here.) If I call my job function directly, or call it with setTimeout(..., 0) instead of using limiter.schedule, the overhead is about 1–2 ms instead of several hundred ms. If I use setTimeout(..., 100) instead of the limiter, I get the callback with very little overhead beyond that 100 ms, so I don't think there's anything else in the event queue that's taking all the time.

If there's nothing obvious, I will probably try an experiment with a bare-bones project to see if something else in the codebase could be having an effect.
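Such a bare-bones experiment might look like this (a sketch; the trivial job and labels are illustrative):

```ts
import Bottleneck from "bottleneck";

const limiter = new Bottleneck(); // default settings, as above

// Trivial async job: any measured delay is pure scheduling overhead.
const job = async () => {};

async function timed(label: string, run: () => Promise<unknown>) {
  const start = process.hrtime.bigint();
  await run();
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(3)}ms`);
}

(async () => {
  await timed("direct call", () => job());
  await timed("setTimeout 0", () => new Promise<void>((r) => setTimeout(r, 0)));
  await timed("limiter.schedule", () => limiter.schedule(job));
})();
```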

If it's relevant, the extension is bundled with webpack (though I don't think it was any faster before webpack) and the OS is Windows 10 (I don't think it's any faster on Linux).