fatcerberus opened this issue 8 years ago
Fairness is a very difficult issue with job queues in general. As an initial implementation I'd probably want to go with a single FIFO, which is at least easy to understand. It wouldn't be optimal for cases where jobs have concrete priorities, but a multi-queue model seems unnecessarily difficult as a first implementation.
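For illustration, here is a minimal sketch of what a single-FIFO job queue could look like on the C side. The names and structures are hypothetical, not Duktape internals, and the job payload is elided:

```c
/* Hypothetical single-FIFO job queue sketch (not Duktape internals):
 * jobs sit in a singly linked list and are always dequeued in insertion
 * order, so there are no priorities to reason about. */
#include <stddef.h>

typedef struct job {
    struct job *next;   /* next job in FIFO order, NULL at the tail */
    /* references to the job's function and arguments would live here */
} job_t;

typedef struct {
    job_t *head;        /* dequeue end (oldest job) */
    job_t *tail;        /* enqueue end (newest job) */
} job_queue_t;

static void job_enqueue(job_queue_t *q, job_t *j) {
    j->next = NULL;
    if (q->tail != NULL) {
        q->tail->next = j;
    } else {
        q->head = j;
    }
    q->tail = j;
}

static job_t *job_dequeue(job_queue_t *q) {
    job_t *j = q->head;
    if (j != NULL) {
        q->head = j->next;
        if (q->head == NULL) {
            q->tail = NULL;
        }
    }
    return j;
}
```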
For the concrete case of debugger commands, I'm not sure that's a practical concern? For the job queue to starve, a debug client would need to issue a never-ending stream of debug commands while no actual code (i.e. no job entries) gets executed. I'm not saying that's impossible, just that I'm not sure it's a practical concern.
But if it does turn out to be a practical issue, maybe it would suffice to use a simple rate limiting mechanism to cap the rate of messages processed in the non-paused state. The executor already does something like this, checking for debug messages only once every 200 ms or so (very roughly, and maybe I remember the exact millisecond figure wrong :-), so it wouldn't be a major change to how the debugger works in a running state.
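As an aside, the rate-limiting idea can be sketched as follows, assuming the caller supplies a monotonic timestamp in milliseconds; process_debug_messages() is a stand-in, not an actual Duktape function:

```c
/* Sketch of rate-limited debug message processing: messages are only
 * processed when the previous check is at least the interval in the past. */
#define DEBUG_CHECK_INTERVAL_MS 200.0   /* rough figure mentioned above */

static double last_check_ms = -1e9;     /* force an immediate first check */

static void process_debug_messages(void) {
    /* placeholder for whatever the executor actually does internally */
}

static void maybe_check_debug_messages(double now_ms) {
    if (now_ms - last_check_ms >= DEBUG_CHECK_INTERVAL_MS) {
        last_check_ms = now_ms;
        process_debug_messages();
    }
}
```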
As discussed above, I'm not entirely happy with the basic queue-job/process-job design because it needs application co-operation. In particular, if an application uses Promises but doesn't call duk_process_job() (or whatever the call ends up being named), Promises won't work and the job queue will just grow without limit. It also won't be immediately obvious why things don't work.
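To make the co-operation requirement concrete, here is a sketch of what an embedder loop might have to look like under that design; duk_has_pending_jobs() and duk_process_job() are the hypothetical API names discussed above, not existing Duktape functions:

```c
#include <stdio.h>
#include "duktape.h"

/* Hypothetical embedder loop for the queue-job/process-job design. */
static void run_script_and_jobs(duk_context *ctx, const char *src) {
    if (duk_peval_string(ctx, src) != 0) {
        fprintf(stderr, "script error: %s\n", duk_safe_to_string(ctx, -1));
    }
    duk_pop(ctx);  /* pop the eval result (or error) */

    /* Without this loop, queued Promise reactions never run and the
     * job queue grows without bound. */
    while (duk_has_pending_jobs(ctx)) {   /* hypothetical API */
        duk_process_job(ctx);             /* hypothetical API */
    }
}
```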
So overall I can see four options:
DUK_HEAP_SUPPORT_JOBS
Does ES6 (or later) have anything to say about the engine's responsibility w.r.t. timing of job processing?
I haven't read through it all in detail but I'd be surprised if it did, other than saying they should execute from an "empty call stack" (and maybe "without delay", though a requirement like that would necessarily be very vague). But I'll check that when I actually start work on the job queue.
Hi all, I've been using Duktape for a while in a home automation project. It provides a scripting environment for the system's plugins.
I currently have Promise support implemented via the Babel polyfill: all that was necessary was an implementation of setTimeout/setImmediate. The Duktape embedder needs to provide these in the global scope, and they effectively become the job queue that has been discussed.
Babel also provides support for async/await and generators via transforms; in fact most (all?) of the recent ECMAScript features work after transpiling.
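Here is a minimal sketch (not koush's actual code) of how an embedder might expose setImmediate through the Duktape C API, parking callbacks in an array in the global stash and draining them from the host loop. A real setTimeout would additionally need timer bookkeeping:

```c
#include "duktape.h"

/* Native setImmediate: append the callback to a "jobs" array kept in the
 * global stash; the host drains the queue after script execution. */
static duk_ret_t native_set_immediate(duk_context *ctx) {
    duk_require_function(ctx, 0);

    duk_push_global_stash(ctx);
    duk_get_prop_string(ctx, -1, "jobs");
    duk_dup(ctx, 0);  /* the callback */
    duk_put_prop_index(ctx, -2, (duk_uarridx_t) duk_get_length(ctx, -2));
    return 0;
}

/* Called by the host event loop; callbacks queued while draining go into a
 * fresh array and run on the next drain. */
static void drain_jobs(duk_context *ctx) {
    duk_size_t i, n;

    duk_push_global_stash(ctx);
    duk_get_prop_string(ctx, -1, "jobs");  /* old queue */
    duk_push_array(ctx);
    duk_put_prop_string(ctx, -3, "jobs");  /* swap in an empty queue */

    n = duk_get_length(ctx, -1);
    for (i = 0; i < n; i++) {
        duk_get_prop_index(ctx, -1, (duk_uarridx_t) i);
        duk_pcall(ctx, 0);  /* errors are ignored in this sketch */
        duk_pop(ctx);
    }
    duk_pop_2(ctx);  /* old queue and stash */
}

/* Set up the queue and register the global binding. */
static void register_set_immediate(duk_context *ctx) {
    duk_push_global_stash(ctx);
    duk_push_array(ctx);
    duk_put_prop_string(ctx, -2, "jobs");
    duk_pop(ctx);

    duk_push_c_function(ctx, native_set_immediate, 1 /*nargs*/);
    duk_put_global_string(ctx, "setImmediate");
}
```

With something along these lines, a Promise polyfill schedules its reactions through the setImmediate global, so the "job queue" effectively lives in the stash and is emptied whenever the host calls drain_jobs().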
@koush I'm trying to achieve the same thing, can you share your Babel config? I can't get the Promise and async/await polyfills working properly.
You can try bluebird, but you need to implement your own setTimeout or process.nextTick.

```js
// Use bluebird's Promise and hook its scheduler to the embedder-provided
// process.nextTick (or fall back to a zero-delay setTimeout).
Promise = require('bluebird');
Promise.setScheduler(function (fn) {
    process.nextTick(fn);
    // setTimeout(fn, 0);
});
```
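Promise.setScheduler() is bluebird's documented hook for overriding how it queues its internal async callbacks; whatever function is passed to the scheduler must eventually be invoked by the host, otherwise resolved promises never run their .then() handlers.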
In low.js (a complete Node.js implementation based on Duktape, aimed especially at microcontrollers with limited resources) I implemented a complete Promise implementation in C with the Duktape API, see
https://github.com/neonious/lowjs/blob/master/src/low_promise.cpp
It has almost no additional dependencies; notably it uses low_call_next_tick, our native C implementation of process.nextTick.
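This is not the low.js code (see the link above for that), just a rough illustration of the core bookkeeping a native Promise needs: a state field plus a FIFO of reactions that is flushed from a deferred next-tick callback rather than synchronously. All names here are hypothetical:

```c
#include <stddef.h>

typedef enum { PROMISE_PENDING, PROMISE_FULFILLED, PROMISE_REJECTED } promise_state_t;

typedef struct promise_reaction {
    struct promise_reaction *next;
    /* references to the JS onFulfilled/onRejected callbacks and the derived
       promise would be stored here (e.g. as entries in the global stash) */
} promise_reaction_t;

typedef struct promise {
    promise_state_t state;
    promise_reaction_t *reactions;   /* pending .then() reactions, FIFO */
} promise_t;

/* Deferral hook in the spirit of a native process.nextTick; a real embedder
   would enqueue the callback on its event loop instead of calling it here. */
static void schedule_next_tick(void (*fn)(void *udata), void *udata) {
    fn(udata);  /* placeholder: should be deferred, not immediate */
}

static void flush_reactions(void *udata) {
    promise_t *p = (promise_t *) udata;
    promise_reaction_t *r;
    for (r = p->reactions; r != NULL; r = r->next) {
        /* call the appropriate JS handler for p->state here */
    }
    p->reactions = NULL;
}

/* Settling records the state once and defers the user-visible callbacks;
   a promise that is already settled must not change state again. */
static void promise_settle(promise_t *p, promise_state_t new_state) {
    if (p->state != PROMISE_PENDING) {
        return;
    }
    p->state = new_state;
    schedule_next_tick(flush_reactions, p);
}
```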
There are some interesting challenges for an ECMAScript engine in supporting Promises. Workarounds such as Promise.prototype.done() exist for some of them, but these have their own issues and are sometimes controversial. For example, bluebird heavily discourages use of .done(): http://bluebirdjs.com/docs/api/done.html

This issue is for discussion of the above challenges and the best options for solving them in a small-footprint engine such as Duktape.