promises-aplus / unhandled-rejections-spec

Discussion and drafts of possible ways to deal with unhandled rejections, across implementations

Deferred rejection #9

Open icysailor opened 7 years ago

icysailor commented 7 years ago

Hi folks. Interested in feedback on some experimental promises with deferred rejection. Has this been considered as a solution?

The pattern is that you create promises with rejection deferred: if one is rejected without a rejection handler, the rejection is held for later delivery. If a rejection handler is added later, the rejection is delivered then. At the point when you expect your process to have handled the promise, you end the deferral, which delivers any still-pending rejections as unhandled.

This makes it the responsibility of the programmer to specify when a rejection is considered unhandled. If you have a use for asynchronously-attached handlers you can do that without the engine getting in the way and without breaking other code by changing global behavior. But you don't completely lose error reporting because you still deliver anything unexpected at the end. You create little pockets of time where asynchronous attachment is valid.

There's a working implementation here. Usage and implementation are simple. It subclasses native Promises: https://github.com/icysailor/promise-deferred-rejection

const { PromiseDeferredRejection } = require('./promise-deferred-rejection');
let resolve, reject;
const promise = new PromiseDeferredRejection((res, rej) => {
  resolve = res;
  reject = rej;
});
reject('Rejected!'); // No unhandled rejection
setTimeout(() => {
  promise.catch(reason => {
    console.log('Handling rejection with reason: ' + reason);
  });
}, 5000); // Rejection handled 5 seconds later
setTimeout(() => {
  promise.release();
}, 8000); // Always end the deferral in case the normal process fails
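
For reference, the core of such a subclass fits in a few lines. The following is my own minimal sketch, not the linked library's implementation (the class name is invented, and release() takes a reporting callback here so the behavior is easy to observe): an internal no-op handler keeps the host from reporting the rejection, a flag records whether a real handler was ever attached, and release() reports the rejection only if it is still unhandled.

```javascript
// Minimal sketch of a deferred-rejection promise (NOT the linked library).
class DeferredRejectionPromise extends Promise {
  // Derived promises (from then/catch) should be plain Promises;
  // otherwise every .then() would recurse into this constructor.
  static get [Symbol.species]() { return Promise; }

  constructor(executor) {
    const record = { rejected: false, reason: undefined, handled: false };
    super((resolve, reject) => executor(resolve, (reason) => {
      record.rejected = true;
      record.reason = reason;
      reject(reason);
    }));
    this._record = record;
    // Attach the suppressing handler via the base prototype so it
    // does not count as "real" handling.
    Promise.prototype.then.call(this, undefined, () => {});
  }

  then(onFulfilled, onRejected) {
    // Any rejection handler attached through then/catch counts as handling.
    if (onRejected) this._record.handled = true;
    return Promise.prototype.then.call(this, onFulfilled, onRejected);
  }

  // End the deferral: report the rejection if nobody handled it.
  release(report = (reason) => { Promise.reject(reason); }) {
    if (this._record.rejected && !this._record.handled) {
      report(this._record.reason);
    }
  }
}
```

The Symbol.species override matters: without it, the then() call inside the constructor would try to construct another DeferredRejectionPromise and recurse forever.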

@MicahZoltu @vkarpov15 Would this work for the advanced use cases you guys have mentioned?

Relevant to the discussion in #6 https://github.com/nodejs/node/issues/830 https://github.com/nodejs/promises/issues/26 https://gist.github.com/benjamingr/0237932cee84712951a2

vkarpov15 commented 7 years ago

Not necessarily, because you don't always know when a promise rejection will be handled. One example is sinon.stub()-ing out a function to return a promise - do I need to estimate how long I expect my entire test suite to run?

TBH I think this is really just a problem with node that node itself needs to fix - warning about unhandled rejections by default is silly. It's handy as an educational tool but I really don't think node's primary purpose is to teach kids how to code.

icysailor commented 7 years ago

Good point. Sometimes you want to wait forever.

vkarpov15 commented 7 years ago

Yeah doesn't the halting problem suck?

Mostly, this whole mess just gives me more reason to switch over to using observables exclusively. It's pretty pitiful that node considers creating a really hacky global stream (term used loosely) of unhandled rejections a major feature; life is much easier when you just have an observable of observables (or an observable of promises).

bergus commented 7 years ago

You create little pockets of time where asynchronous attachment is valid.

Interesting idea, but who is responsible for that? The producer (who creates the promise) or the consumer (who awaits it)? It seems it can't be the producer, as he doesn't know for how long the asynchronous attachment should be allowed. But it can't be the consumer either, as he could too easily forget it. Would we need an unreleasedPromiseRejection for that case?

vkarpov15 commented 7 years ago

Nah, it should definitely be the consumer. Sure, they could forget, but there are always ways to misuse an API, and trying to anticipate every possible mistake someone can make is a Sisyphean task. There are more than enough tools out there (co, rxjs, xstream, etc.) for gluing together promises to consolidate error handling; it's a waste for core node to add yet another one.

icysailor commented 7 years ago

The consumer, typically. It's a halting problem, as @vkarpov15 said, so there's nothing else to do. Software can't decide it, so the programmer has to. They just have to do it carefully.

If someone explicitly says "I'm prepared to handle this" I think it's reasonable to believe them and assume they're competent to do it. Maybe unhandled rejections should still be delivered on garbage collection, though?

Batching helps here: a batch taints the entire tree of derived promises to any depth, so you can end deferral for the whole batch in one place.

icysailor commented 7 years ago

The use case that led me to this is a kind of middleware. I run a procedure provided by the user and give it the ability to start certain asynchronous subprocesses. The procedure may be a generator or an async function. It's easy, and seemingly valid from a coder's point of view, to do this:

  1. Start a subprocess.
  2. Await something else.
  3. Await the first subprocess.

If that first subprocess is rejected during step 2 you get an unhandled rejection (and soon, in node, a crash). I wanted to give them the chance to attach a handler before the procedure ends. But there is a definite end to this, so after the procedure ends I also want to deliver anything they genuinely left unhandled.

The typical advice here is to add a handler to suppress the unhandled rejection. That doesn't work: I've got to know at the end if they didn't handle it. There has to be a period where late attachment is valid, ended by a point where it becomes invalid.

bergus commented 7 years ago

If it's the consumer, then this should be an opt-in. For example:

const promise = doSomething();
const release = promise.suppressUnhandledRejection();
// later
promise.then(…).catch(handleErr);
// afterwards
release(); // emits the unhandled rejection event if there was a rejection and it still hasn't been handled

However, you can already do that! Admittedly, you don't have access to the internal [[PromiseIsHandled]] flag, but as the consumer you know whether you did handle the rejection or not:

const promise = doSomething();
let rejectionHandled = false;
promise.catch(() => {}); // suppress the default unhandledRejection report
// later
promise.then(…).catch(handleErr);
rejectionHandled = true;
// afterwards
if (!rejectionHandled) promise.catch(() => process.emit("unhandledRejection", promise)); // or whatever

@icysailor It sounds like you are looking for return Promise.all([subProcess(), someThingElse()]). There all rejections are handled immediately, and you have the same control flow otherwise.

icysailor commented 7 years ago

Agreed about the opt-in. I was thinking of it as a separate constructor, but maybe a static method would work: Promise.deferred would create a special-functionality promise the same way Proxy.revocable creates a special-functionality proxy.

A dedicated constructor would enable the full set of methods, though. So you could do DeferredPromise.resolve(standardPromise) to easily add deferred rejection to any thenable.

It's true that you could manage it externally. But then you have to manage it. It's easier if your workflow is just:

  1. Use a bunch of promises.
  2. Deliver any errors I didn't handle.
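
A hypothetical sketch of that two-step workflow (the names are invented; this is not the linked library's API, and it ignores the derived-tree tainting mentioned earlier): collect promises in a batch, keep the host quiet about their rejections, and deliver everything still unhandled with a single releaseAll() at the end.

```javascript
// Hypothetical batch: add() registers a promise and suppresses its
// default unhandled-rejection report; markHandled() records that a
// real handler was attached; releaseAll() delivers the reasons of any
// rejected promises nobody handled.
class RejectionBatch {
  constructor() { this.entries = new Map(); }

  add(promise) {
    promise.catch(() => {});           // keep the host quiet for now
    this.entries.set(promise, false);  // false = not yet handled
    return promise;
  }

  markHandled(promise) {
    if (this.entries.has(promise)) this.entries.set(promise, true);
  }

  releaseAll(report) {
    for (const [promise, handled] of this.entries) {
      if (!handled) promise.catch(report); // async: report only if it rejects
    }
  }
}
```

Usage would be: add every promise you create, call markHandled alongside your real catch, and call releaseAll(reason => ...) when the process is supposed to be done.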

icysailor commented 7 years ago

My example above was poor. I was thinking of the scenario where the rejection is handled, and so shouldn't be delivered - but the handling happens a tick or two after the rejection.

async function myProcedure () {
  const process1 = doThis(); // Fire this off to get it running
  const process2 = doThat();
  const process2Result = await process2;
  // CRASH from process1 here - even though it is handled two lines down
  try {
    const process1Result = await process1;
    // ... do something with process1Result
  } catch (e) {
    // Do nothing because a process1 failure is not a problem
  }
}

This seems perfectly reasonable to do. It would never complain in a synchronous function. Yet here it crashes the entire program.

Maybe this is really about async functions. It hurts their central goal: making it possible to write an asynchronous algorithm the same way you would write a synchronous one. Instead it forces the coder to break the algorithm into a sequence of steps with boundaries between them (back to the way we had to think of it before). Maybe async functions should defer unhandled rejection of promises they can access.

From the coder's perspective the procedure-that-may-handle-the-rejection is still running. It should have the same effect on rejections that a synchronous function does: it may still be handled so wait and see what happens.

bergus commented 7 years ago

Yeah, an extra promise constructor could do. However, I still think it's unnecessary (at least as part of the standard; as you demonstrated, you can already achieve this by subclassing). "Deliver any errors I didn't handle" is pretty much the default - you'll get an unhandled rejection event. And there is no "easy workflow" where you simply "use a bunch of promises", as you'd need to specifically mark each of those promises. Yes, managing externally is always necessary (though you could imagine a lot of sugar over the crude approach I used above). The whole point of unhandled rejection tracking is to force programmers to handle errors explicitly, and I think that's a good thing - especially in an asynchronous world.

bergus commented 7 years ago

Regarding your new example, you cannot really compare asynchronous non-sequential code to synchronous code. If this were a synchronous function, your program would indeed crash as soon as doThis() threw an exception, but let's set that aside.

async/await is great for asynchronous sequential code. Every time you call a promise-returning function, you should await it immediately. If you want things to run concurrently, you basically always have to use Promise.all. Applying that best practice to your code, it would look like this:

async function myProcedure () {
  const process1 = doThis().catch(e => {
    // Do nothing because a process1 failure is not a problem
  });
  const process2 = doThat();
  const [process1Result, process2Result] = await Promise.all([process1, process2]);

  if (process1Result != null) {
    // .. do something with process1Result
  }
}

MicahZoltu commented 7 years ago

I think the Node team needs to be clearer about its target demographic before a reasonable decision can be made on how to proceed here. If you (node maintainers) are trying to make something that is "hard to get wrong" for beginners, then built-in unhandled rejection warnings/crashing is a reasonable approach. If you are trying to build something that makes it easy for senior engineers to build advanced, complex, performance-critical applications, then built-in unhandled rejection warnings/crashing is not a reasonable approach.

As has been mentioned previously in this thread and others, in advanced applications and libraries it is reasonable and correct to not know when a promise will be handled and it is reasonable and correct for a promise to never be handled! There is no solution that solves both use cases of "easy for beginners" and "easy for experts" because they are in opposition to each other.

Personally, I have always struggled to see JS (and therefore node) as a reasonable language/platform for building advanced applications on due to reasons that are well documented all over the internet. Unfortunately, there aren't any real alternatives in the browser at this time so we are stuck using a language that feels like it is built/designed for beginners (especially with decisions like this) to build large complex applications.

bergus commented 7 years ago

@MicahZoltu It's never correct not to handle possible errors from a promise, except when you know that the promise will never be rejected. Yes, it can happen that you don't know when a rejection is getting handled, only that it will be handled. But even in those cases, senior engineers will want to explicitly mark the promise to ignore errors for now (by simply doing .catch(()=>{/* explaining comment */})), which is much better than implicitly leaving it hanging around. This makes it harder to get wrong for experts as well, where it actually matters even more. And it is also easy for senior developers to turn off crashing on unhandled rejections if they really want that.

icysailor commented 7 years ago

@bergus You're right, it's different. You never start a procedure on one line and get the result on another line in a normal function. Whatever operation you called is done when the call returns.

vkarpov15 commented 7 years ago

@bergus so you admit you don't necessarily know when a rejection will get handled, but you claim that you always need a .catch() anyway? Even in cases where the rejection will eventually be handled? That seems excessively pedantic. And, as for not being beginner-friendly, in cases like that you need to make sure to return the original promise rather than the one returned from .catch().

bergus commented 7 years ago

@vkarpov15 Any unclosed promise chain is a source of bugs (either unhandled errors or forgotten asynchrony), so yes, you always need a .catch anyway. You might think that's pedantic; I think that pedantry helps to assert correctness. The cases where rejections are handled asynchronously are so uncommon that you want to mark them explicitly with an empty catch, and whenever you think "oh, I'll handle that later" you should explicitly state "here I ignore errors for now because…" in a comment.

A beginner should rarely find himself in such a case. It's an expert scenario. The usual developer will want to treat errors immediately. And no, when you return the original promise, you don't need a .catch() as error handling is the responsibility of your function's caller then.

vkarpov15 commented 7 years ago

I think pedantry wastes my time as an engineer. If I wanted a language that prevented me from doing any meaningful work I'd go back to writing Java.

MicahZoltu commented 7 years ago

I don't want to rehash the debates that occurred in the other threads too much, but the short version is that a fire-and-forget async operation is a reasonable use case but the library that is creating/providing the promise can't know how it will be used ahead of time. As an example, if I am opportunistically writing data to a remote cache I don't need to do any work when it is done. As a user of a library, I will call something like memcached.set(myData) and because I don't care about the response I will just not look at the return value.

The other, harder-to-address use case is when you want to pre-warm some data in your application. For example, you may kick off an async request to a remote resource while the user is loading the page, because you suspect they may later click a UI element that requires the data from that resource. Because you are pre-fetching data, there is no appropriate place to display an error to the user, as the user hasn't asked for the data yet. In this case, you will never attach a rejection handler if the user never requests the data, or you may attach one at some distant point in the future, after the user has actually requested it. It would be incorrect to opportunistically attach a rejection handler before the user requests the data: you need to leave the error unhandled so that, if the user does ask for the resource, you can attach a handler that immediately fires the exception handling (assuming the promise was already rejected at that time).

There are workarounds for these scenarios, like storing the promise in a variable, attaching a catch to that variable, and then returning the original promise. However, in promise-heavy code this quickly destroys readability, as every line of code can turn into a mess of unhandled-rejection disabling. This is why I argue that the Node maintainers need to decide whether they want to support the trivial use case or support complex, promise-heavy applications that do things like fire-and-forget, pre-fetching, and lots of promises being passed around.
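
The workaround reads roughly like this (a sketch; loadResource is a stand-in for whatever async call you pre-warm): silence the potential unhandled rejection with a side catch, but hand out the original promise so a later consumer still observes the rejection.

```javascript
// Sketch of the pre-fetch workaround: the side catch branch keeps the
// host from reporting an unhandled rejection, while the ORIGINAL
// promise is returned so real handlers attached later still fire.
function prefetch(loadResource) {
  const promise = loadResource();
  // Side branch: deliberately ignored until (and unless) the user asks.
  promise.catch(() => {});
  return promise; // NOT the promise returned by .catch()
}
```

If the user eventually clicks, you attach your real handlers to the returned promise and they fire immediately if it already rejected; if they never click, the rejection stays silenced by the side branch.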

I recognize that promise-based asynchronous programming is a different paradigm than what people are used to with synchronous programming, but it can be an incredibly powerful tool if wielded well. The unhandled rejection stuff is catering to an audience that isn't wielding it to its full potential and is stuck thinking of their code as synchronous code, which it is not.

vkarpov15 commented 7 years ago

Great points :+1: