erights opened this issue 8 years ago
No availability defense is possible at object granularity, since any object may go into an infinite loop anyway, blocking all further progress of that agent/vat/worker/event-loop.
A defense mechanism can exist; browsers already have one in the form of the "stop script" dialog.

The `Reflect.makeIsolatedRealm` function could take a `maxTaskTime` argument defining the maximum time a task can run within the new SES realm. That way, if Bill gets stuck in an "infinite" loop, it will be stopped at some point. An error callback mechanism could also be defined to inform the party who called the `confine` that led to the "infinite" loop that Bill was brutally stopped because an event loop message took too long.
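To make the shape of the idea concrete, here is a minimal sketch; the options bag, `maxTaskTime`, and the `onTaskTimeout` callback are all invented for illustration and are not part of the current proposal:

```js
// Hypothetical sketch only: the options bag, maxTaskTime, and onTaskTimeout
// are made-up names; the point is the per-task time budget and the callback
// delivered to the confining side.
const realm = Reflect.makeIsolatedRealm({
  maxTaskTime: 100, // milliseconds a single task may run inside the realm
  onTaskTimeout(info) {
    // Called outside the realm when a task inside it exceeded the budget
    // and was forcibly terminated.
    console.log(`confined task stopped after ${info.elapsedMs}ms`);
  }
});
```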
This could also work for, or be extended to, microtask operations like promises.

This would prevent the risk of literally infinite loops by boxing them in time. It does not prevent Bill from calling `setInterval` and wasting lots of CPU time, but at that point it becomes an event loop message prioritization issue; implementors have long experience in this area, so I'd defer to them.
In the absence of promises, this full statement would be true. However, because of the universal availability of promises, Bill can just continually reschedule himself on the promise queue. Even if we were to (somehow) provide the promise library only through the membrane, rather than directly, async functions mean that the builtin promise functionality can still be reached by syntax, making it undeniable.
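A minimal sketch of that rescheduling attack, using nothing but syntax (the `bill` function name is only for illustration):

```js
// With async/await alone, confined code can keep re-enqueueing itself on the
// promise (microtask) queue. Since the microtask queue must drain before the
// event loop moves on, this starves all other work without a classic busy loop.
async function bill() {
  while (true) {
    await null; // schedules the continuation as a fresh microtask, forever
  }
}
bill();
```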
I think that the problem is that promise scheduling is underspecified. There are no mechanisms in the language that allow controlling and configuring it.
Is the Promise Jobs queue shared between realms?
More specifically, even if we don't expose a `Promise.setScheduler`, we can still solve this at the platform level by requiring that implementors throttle or deprioritize the scheduling of promises in a secure realm. I think that even moving them from microtask to macrotask semantics (just in this case) would solve this, since it would let the "event loop" run in every other realm before running the promise. It would also be (almost) completely transparent to promise users.
@benjamingr I do not understand the suggestion. Could you expand? Explain it in terms of a concrete example? Thanks.
```js
Promise.setScheduler(function (fn) {
  // This gets called whenever a callback has to be executed on a promise -
  // that is, `onFulfilled`, `onRejected`, or when an `await` continues.
  // I can control priority here, or not run functions at all.
  // Hopefully, when I subclass Promise, only the scheduling of the subclassed
  // promises gets impacted.
});
```
We might or might not want to expose it to users. Some userland libraries support this. This is not directly related to this proposal though; I might bring it up on es-discuss.
The actual point was that it's possible to control promise scheduling through a separate queue for the frozen realm, and thereby to prevent the realm from causing an availability issue if we desire.
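For instance, here is a minimal sketch of the "demote to macrotask" idea from above, assuming the hypothetical `Promise.setScheduler` hook existed for the secure realm:

```js
// Hypothetical: Promise.setScheduler is not a real API. This only sketches how
// a host could deprioritize promise callbacks coming from a confined realm.
Promise.setScheduler(function (fn) {
  // Run the promise reaction as a macrotask instead of a microtask, so every
  // other realm's event loop turn gets to run before the confined code continues.
  setTimeout(fn, 0);
});
```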
> In the absence of promises, this full statement would be true. However, because of the universal availability of promises, Bill can just continually reschedule himself on the promise queue. Even if we were to (somehow) provide the promise library only through the membrane, rather than directly, async functions mean that the builtin promise functionality can still be reached by syntax, making it undeniable.
The first part of this claim is still true: "nothing the Bill code can do to cause further effects". This is the integrity guarantee. The invalid part, "or even to continue to occupy memory", is about resource use, which is about availability. On availability, by rescheduling on the promise queue, the situation is even worse: Bill can continue to spend CPU resources as well. Nevertheless, this is still consistent with our overall architecture: protect integrity at object granularity. No availability defense is possible at object granularity, since any object may go into an infinite loop anyway, blocking all further progress of that agent/vat/worker/event-loop.