nodejs / promise-use-cases

Short-lived repository for discussing Node.js promise use cases at Collaborator Summit Berlin 2018

Unhandled Rejection Abort Proposal #27

Closed benjamingr closed 5 years ago

benjamingr commented 6 years ago

(Written by @BridgeAR and @benjamingr)

New proposal following gathering user data for the promise-use-cases talk.

Migration & Opt-out Path

We propose an environment variable called UNHANDLED_REJECTION which Node.js will use to determine the behaviour for unhandled rejections.

The environment variable can take two values: abort and warn.

If the user passes another value - Node.js should throw an error. We propose that the default value for the flag is warn in v10 and v8 of Node and abort from v11 on.

An environment variable has the advantage of being backportable to older versions of Node.js and makes it easier for people to upgrade and use the semantics they want. It's easy to opt in and out via the environment variable.
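As a minimal sketch, here is what the proposed dispatch could look like written as ordinary user code rather than Node.js internals (the variable name and values come from the proposal above; everything else is assumed):

const mode = process.env.UNHANDLED_REJECTION || 'warn'; // proposed default in v8/v10
if (mode !== 'warn' && mode !== 'abort') {
  throw new Error(`unknown UNHANDLED_REJECTION value: ${mode}`);
}
process.on('unhandledRejection', (reason) => {
  if (mode === 'abort') {
    throw reason; // rethrowing here escalates to 'uncaughtException' and aborts
  }
  console.warn('UnhandledPromiseRejectionWarning:', reason);
});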

Actual semantics:

We think that a minimal implementation would be a good idea.

Future Semantics

benjamingr commented 6 years ago

@Fishrock123 @apapirovski @addaleax @ljharb @jasnell @mcollina and other people who have expressed interest PTAL

benjamingr commented 6 years ago

Add a second optional argument to uncaughtException that will allow optionally detecting if the exception originated from a promise.

The possibility of making that a symbol rather than a boolean has been brought up in the PR by @BridgeAR - just posting here so we're aware of it.
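A hedged sketch of what that might look like - the exact shape of the second argument (boolean vs. symbol) was still open at this point:

process.on('uncaughtException', (err, fromPromise) => {
  if (fromPromise) {
    console.error('crash originated from an unhandled promise rejection:', err);
  } else {
    console.error('crash originated from a synchronous exception:', err);
  }
  process.exit(1);
});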

ljharb commented 6 years ago

Can we please add a third value for UNHANDLED_REJECTION that does nothing, which is what the spec intends?

benjamingr commented 6 years ago

@ljharb I'm fine with that and want to discuss what it would entail. First of all - this is possible with the current proposal by setting UNHANDLED_REJECTION to warn and adding process.on('unhandledRejection', e => { /* swallow error */ }). Alternatively, you can set it to abort and handle it inside the uncaughtException handler which would work.
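Spelled out, the two opt-out routes would look roughly like this (assuming the proposed flag semantics):

// 1) with UNHANDLED_REJECTION=warn, swallow rejections explicitly:
process.on('unhandledRejection', (e) => { /* swallow error */ });

// 2) with UNHANDLED_REJECTION=abort, recover in uncaughtException instead:
process.on('uncaughtException', (err) => {
  console.error('continuing despite:', err); // keeps the process alive
});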

Note that while the spec allows this behaviour - Node.js isn't written in a way that allows operation after an unhandled exception has occurred and there are security and stability implications of running Node.js in an inconsistent state.

I realise that this is not a "correctness" thing but rather a technical limitation of the Node.js implementation. I'm also fine with treating it as a violation of how JavaScript runtimes should ideally behave. Sadly, that has been the status quo since the start of Node.js. Help with this in Node.js core would always be greatly appreciated.

So if you feel very strongly that we should make it easier to set the behaviour to none, it's not a show-stopper for me and I am genuinely OK with it - but I don't believe we should make it easier than necessary.

ljharb commented 6 years ago

I'm not sure I understand why a node process can't continue just fine (in the general case; obviously there are plenty of cases where the user's code might have left things in an inconsistent state) when an unhandled exception or rejection occurs. Can you elaborate?

benjamingr commented 6 years ago

@ljharb sure, let's say you have code like this:

async function doSomething() {
  const a = await open('file');
  cosonle.log('hello', a); // note the typo
  a.close(); // this isn't reached
}
doSomething();

The above code produces an unhandledRejection. You can choose to ignore it and leak memory, and this is allowed - but core makes the same assumption and uses this pattern too. For example, if a library you use (including the standard library (!)) throws, Node is left in an indeterminate state.

This is a part of why we discourage people from using domains and uncaughtException so much:

It is not safe to resume normal operation after 'uncaughtException'.

(from the Node.js documentation on uncaughtException)

For example, let's say you're using a library that in turn uses Node.js core APIs to do cryptography: we might not release a buffer when we say we do if an exception is thrown. In full honesty I don't believe we test this, since there is general consensus in core that recovering globally doesn't work and it's a lot of work to fix.

That might have implications on the security guarantees of apps. Setting UNHANDLED_REJECTION=none might have really complex implications we don't understand well enough. That possibility and our responsibility to our user base makes me very hesitant to make this easy. @bnoordhuis here says "I consider process.on('uncaughtException') a design mistake (I'm not alone in that, its original author does too) so I'd rather not touch it.".

Again, I acknowledge this is more technical debt on our end than "correctness". Again, if you feel strongly about it it's not a show-stopper for me and I'd rather move forward with it than without. I trust your opinion and the validity of your different perspective.

devsnek commented 6 years ago

core shouldn't be using "kill everything" as a way to clean up resources.

ljharb commented 6 years ago

That's a specific example, sure - and I accept that there are plenty of cases like this. But simply throwing an uncaught exception doesn't automatically leak memory, nor does an unhandled rejection.

In other words, I don't think this is a limitation of node in the general case - only in the specific cases where there's a memory leak (caused by bad code not using finally, of course)

benjamingr commented 6 years ago

@ljharb right, right now the code not correctly "using finally" is Node.js core though. If Node.js itself had the guarantee that you are allowed to recover from uncaught errors then I would agree. At the moment simply throwing an uncaught exception while serving an http request or reading a file stream might cause a memory leak.

At least, that is my understanding. Making opting-out of crashing the process on uncaught exceptions easier is something that I believe we'd need to have the code-base ready for first and agree that we're interested in.

@devsnek

core shouldn't be using "kill everything" as a way to clean up resources.

I agree. If you want to start an effort for making core errors recoverable that could be awesome. I think a great first step would be to write code using core APIs that throws exceptions, add an uncaughtException handler, and measure and hunt down memory leaks. If you want to organize an effort I think a lot of collaborators would find the premise interesting.

ljharb commented 6 years ago

I'd say that making unhandled rejections kill the process should then probably be blocked on making node core safe for process.exit() to be called at any time (since that's effectively what it's introducing)

benjamingr commented 6 years ago

@ljharb when the process is closed all fds are released and memory is freed by the operating system. That's effectively how Node.js programs have been dealing with uncaught exceptions (and possibly taking a core dump) from day 1. I think it's more analogous to expect to be able to recover from a process.exit call after it has started through a hook.

ljharb commented 6 years ago

Except that an unhandled rejection remains fundamentally different from an uncaught exception, and should not be expected to terminate the process inherently.

benjamingr commented 6 years ago

Note that the JavaScript language also allows running after an uncaught exception and indeed browsers do.

mcollina commented 6 years ago

I think this is fundamental to making the Promise experience first class in Node.js. In order to provide a good user experience, browsers have to do their best to keep running. Companies operating servers built with Node.js expect a reliable environment that does not leak memory/file descriptors. Debugging memory leaks and those other situations is hard, while debugging a crash and a programmer error is simple.

Here is a simple example, complex code could have a Promise without a catch deep in the dependency chain, making it extremely hard to discover:

const http = require('http')
const server = http.createServer(handle)

server.listen(3000)

function handle (req, res) {
  doStuff()
    .then((body) => {
      res.end(body)
    })
    // in case of errors, the request will not return a response to the user
    // keeping the memory and system resources occupied
}

function doStuff () {
  if (Math.random() < 0.5) {
    return Promise.reject(new Error('kaboom'))
  }

  return Promise.resolve('hello world')
}
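For contrast, one sketch of the same handler with the missing catch added (the 500 status choice is illustrative):

function handle (req, res) {
  doStuff()
    .then((body) => {
      res.end(body)
    })
    .catch(() => {
      res.statusCode = 500
      res.end() // settles the request instead of leaking it
    })
}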

I think the environment variable should be prefixed with NODE_. We should also add a command line flag to match the variable.

I'm definitely +1 in adding a 'none' value.

benjamingr commented 6 years ago

What about adding an UNHANDLED_REJECTION=strict flag that crashes the process on every unhandled rejection that isn't handled synchronously?
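A hedged reading of what "handled synchronously" would mean in practice:

const p = Promise.reject(new Error('kaboom'));
p.catch(() => {}); // handler attached in the same synchronous turn: fine under strict

const q = Promise.reject(new Error('boom'));
setTimeout(() => q.catch(() => {}), 100); // attached a turn later: strict would crash first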

benjamingr commented 6 years ago

I'm definitely +1 in adding a 'none' value.

Given your example - why?

mcollina commented 6 years ago

Because I can see why somebody would want that behavior. If we are going for a runtime flag/env variable, it makes sense to have balance.

MicahZoltu commented 6 years ago

@benjamingr Am I understanding correctly that at the moment there are a number of bugs in the nodejs core libraries akin to the example you provided

async function doSomething() {
  const a = await open('file');
  cosonle.log('hello', a); // note the typo
  a.close(); // this isn't reached
}
doSomething();

and at the moment the way nodejs deals with these bugs is to kill the process (or at least annoy the user via error logs)? I believe we all agree that the correct solution to the above example (and any other like it) is a try {} finally {} block?

It further sounds like you are advocating that until such time as NodeJS can clean up all of the bugs like that, your belief is that it is safest to run in the "kill the process" mode, with second safest being "annoy the user" mode?

Assuming all of that is correct, I would like to get some insight into how the team is working towards tackling those issues. If no one is working on it, then I feel like the "kill the process" solution isn't really a reasonable option because it is effectively a permanent solution to the problem and encourages future authors to create similar bugs (or at least doesn't provide strong incentive to fix them). If tackling all of these bugs is a high-priority work item for the NodeJS team that is being actively tackled and is expected to be resolved by version 12 (or some other specific target) then I'm notably more amenable to having the default strategy be "kill the process" (though I still advocate and support letting users choose the none option).

mcollina commented 6 years ago

@MicahZoltu those are not bugs. In Node.js (and many other languages), the runtime does not close a file/resource automatically upon error, unless a developer instructs it to with a finally block. The default behavior of most programming runtimes for server or system programming is to exit upon error, not to keep going. Node.js made the decision long ago to exit upon error, which is in line with a significant part of the industry.

On a side note, something similar to what you are describing has been tried with the require('domain') module, but it did not end well: https://nodejs.org/en/docs/guides/domain-postmortem.

benjamingr commented 6 years ago

@MicahZoltu

I believe we all agree that the correct solution to the above example (and any other like it) is a try {} finally {} block?

yes of course.
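For concreteness, a minimal sketch of that fix, assuming open here is fs.promises.open (and with the console typo corrected):

const { open } = require('fs').promises;

async function doSomething() {
  const a = await open('file');
  try {
    console.log('hello', a);
  } finally {
    await a.close(); // now runs even if the body throws
  }
}
doSomething();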

It further sounds like you are advocating that until such time as NodeJS can clean up all of the bugs like that, your belief is that it is safest to run in the "kill the process" mode, with second safest being "annoy the user" mode?

I believe that the behaviour of Node.js should not differ in this regard between synchronous and asynchronous exceptions. Node.js already exits on uncaughtException and has done so since forever.

If tackling all of these bugs is a high-priority work item for the NodeJS team that is being actively tackled and is expected to be resolved by version 12 (or some other specific target) then I'm notably more amenable to having the default strategy be "kill the process" (though I still advocate and support letting users choose the none option).

We all feel strongly about giving users the choice to do whatever works for them. Note that I suspect in practice unhandled rejections are very rare in production apps (like uncaught exceptions).

benjamingr commented 6 years ago

Also, you are of course welcome to get more involved in the process and work - we always want more people to get involved.

MicahZoltu commented 6 years ago

@mcollina If the code in the provided example is part of the NodeJS framework then the bug is that a try/finally wasn't used when the framework did intend to close the resource.

If the example provided is user code, then the bug is that the user didn't wrap the framework calls in a try/finally.

Fundamentally, by offering open without close semantics in the framework we have created an opportunity for users to leak resources. The user in this example could have just as easily not closed the resource and the same leak would be present.

It sounds like the frustration is that the framework does not sufficiently abstract system resource allocation from the user, and while there is definitely merit to that argument (and I would agree) I do not believe that unhandled promise rejections crashing the program will resolve that issue (in the provided example handling the failure at a higher layer would still leak the resource).

I also think that even if it did help reduce the incidence rate of programmer error leading to leaked resources, it isn't worth the loss of functionality in asynchronous engineering.

benjamingr commented 6 years ago

I do not believe that unhandled promise rejections crashing the program will resolve that issue (in the provided example handling the failure at a higher layer would still leak the resource).

It will resolve that issue in that it will align the behaviour with what is happening for synchronous resources and not allow the program to run in an indeterminate state.

it isn't worth the loss of functionality in asynchronous engineering.

In the 5+ years of these discussions I haven't seen one reasonable and legitimate use case which can't be worked around by adding a .catch(() => {}) to a fork - that said, there will be no loss of functionality since the behaviour will be overridable anyway (just like uncaughtException).

That said, if you have a compelling and real use case for us where this limits you in some way please do speak up.
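"Adding a .catch(() => {}) to a fork" means observing the rejection on a parallel branch, roughly like this (fireAndForget is a hypothetical helper name):

function fireAndForget(promise) {
  promise.catch(() => {}); // forked branch marks the rejection as handled
  return promise;          // callers still observe the original rejection
}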

MicahZoltu commented 6 years ago

@benjamingr In the example provided the user is not using async/await. If they would have awaited the call to doSomething they would have an uncaught exception and the process would have crashed, as desired.

It seems like the issue is that people find it too easy to accidentally not await an async function, and thus get themselves into an unexpected situation.

As for compelling use cases, they can be worked around by attaching catch handlers but as I have argued in other threads in the past, having to attach catch handlers everywhere is an unreasonable solution (IMO) since it greatly reduces code readability. The problem is that when you use a library, you don't know whether the promise it returns has an empty catch attached to it or not, so if you are implementing advanced async patterns like prefetching, caching, polling updates, etc. you must attach catch handlers to everything that comes out of a library. Then anyone using your library must do the same thing and the code rapidly becomes unreadable.

Of course, I can also argue that any unhandled exception crash can be worked around by fixing the programming errors that caused it. :wink:

We must weigh the protection against programming errors that we can provide with this feature against the patterns that become unusable (pragmatically, not theoretically) by following it. As someone who frequently leverages delayed promise handling in code I am, of course, strongly swayed in favor of enabling advanced patterns.

Personally, I think this sort of programming error detection should be done at the linter/static analyzer level. This allows developers to detect the problems in their code, and rely on third party code to detect their own problems. This is a more fine grained solution as it allows a library author to use advanced async patterns while the application developer instead prefers to have a warning when they don't await an async function.

benjamingr commented 6 years ago

As someone who frequently leverages delayed promise handling in code I am, of course, strongly swayed in favor of enabling advanced patterns.

Really, I would love to see some actual code that is written in the way you have described - people come every year or two claiming they write such code but never provide it. Can you show us some use cases where you use deferred evaluation? Based on the last survey I saw, that is very rare (we're doing another survey to validate that).

Also, can we please stop calling it "advanced" :)? Thanks.

Personally, I think this sort of programming error detection should be done at the linter/static analyzer level. This allows developers to detect the problems in their code, and rely on third party code to detect their own problems.

I don't think that is technically possible (detecting unhandled rejections statically). Although I'd be interested in tools that help with it.

benjamingr commented 6 years ago

That is, if you're asking Node.js to swallow errors and are saying that it enables "advanced patterns" - please show those patterns so we can have a concrete discussion about it.

To be clear: I am not saying your use case isn't valid - I'm saying we want to actually see the code and learn more about it in order to discuss it.

MicahZoltu commented 6 years ago

The following is a real-world example from a project I'm currently working on (and what brought me once again back to these threads after a bit of a break). The code does have tests that pass, but the project is still young so it may contain bugs (test coverage is mediocre at this point). This is fairly representative of the type of issue I have with the default unhandled rejection handler and I have built larger applications in the past that have this type of pattern throughout the code. I also have followed the same pattern in Java/Kotlin and C# projects in the past (with Futures and Tasks) and in those cases it behaves as I expect (no runtime issues when I ignore or delay processing of a Promise/Future/Task).

export class PolledValue<TScheduledTaskId, TValue> {
    private readonly scheduler: Scheduler<TScheduledTaskId>
    private readonly fetcher: () => Promise<TValue>
    private readonly frequencyInMilliseconds: number

    private lastKnownGood: TValue
    private outstandingFetch: Promise<TValue> | null
    private readonly listeners: Array<(newValue: TValue, oldValue: TValue) => void> = []
    private scheduledTaskId: TScheduledTaskId | null = null

    public constructor(scheduler: Scheduler<TScheduledTaskId>, fetcher: () => Promise<TValue>, frequencyInMilliseconds: number, defaultValue: TValue) {
        this.scheduler = scheduler
        this.fetcher = fetcher
        this.frequencyInMilliseconds = frequencyInMilliseconds
        this.lastKnownGood = defaultValue
        this.outstandingFetch = this.fetch()
    }

    public get cached(): TValue { return this.lastKnownGood }
    public get latest(): Promise<TValue> { return this.outstandingFetch || this.fetch() }

    public registerListener = (listener: (newValue: TValue, oldValue: TValue) => void): void => {
        this.listeners.push(listener)
    }

    private fetch = async (): Promise<TValue> => {
        try {
            if (this.scheduledTaskId !== null) this.scheduler.cancel(this.scheduledTaskId)
            const previousValue = this.lastKnownGood
            this.outstandingFetch = this.fetcher()
            this.lastKnownGood = await this.outstandingFetch
            this.outstandingFetch = null
            this.listeners.forEach(listener => listener(this.lastKnownGood, previousValue))
            return this.lastKnownGood
        } finally {
            this.scheduledTaskId = this.scheduler.schedule(this.frequencyInMilliseconds, this.fetch)
        }
    }
}

The above code is TypeScript, but isn't using any TypeScript features aside from type checking.
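For orientation, a hypothetical usage sketch in plain JS - the Scheduler shape is inferred from how the class uses it, and fetchPrice is a made-up fetcher:

const scheduler = {
  schedule: (ms, task) => setTimeout(task, ms),
  cancel: (id) => clearTimeout(id),
};
const fetchPrice = async () => 42; // stand-in fetcher
const price = new PolledValue(scheduler, fetchPrice, 5000, 0);

console.log(price.cached);                // last known good value, never throws
price.latest.then((v) => console.log(v)); // go through latest to observe errors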

Of particular interest are the following lines:

public get latest(): Promise<TValue> { return this.outstandingFetch || this.fetch() }
this.scheduledTaskId = this.scheduler.schedule(this.frequencyInMilliseconds, this.fetch)

In both cases, the library is intentionally not catching the promise because it is possible that no one cares about a failure. If a user does care about fetch failures they can await the promise through latest, but if they just want the last known good value and to completely ignore any polling errors they can interact exclusively with cached. The argument I have against the .catch solution is that adding a catch on a parallel branch of the promise tree would make it so the promise cannot be inlined as it is in both cases. The first example would turn into something like:

public get latest(): Promise<TValue> {
    if (this.outstandingFetch) {
        return this.outstandingFetch
    } else {
        const promise = this.fetch()
        promise.catch(() => {})
        return promise
    }
}

and the second example would turn into something like:

this.scheduledTaskId = this.scheduler.schedule(this.frequencyInMilliseconds, () => {
    const promise = this.fetch()
    promise.catch(() => {})
    return promise
})

In the first case I am returning the promise up the call stack thus making it the caller's problem if they don't want to immediately await, so in theory I could apply the .catch() shenanigans there.

The second case is an example of fire-and-forget. If the fetch fails, I don't care because I still have a cached value that I can use. If a user wants to ensure they have an up-to-date value (or get an error if fetching is failing) they can use latest instead of cached. The polling process is meant to be a "best effort" and it will try to update the cached value in the background, but no guarantees are made as to how recent the last successful cached update is.

Keep in mind that the above code is a library that one could (in theory) pick up off of NPM. The current default behavior with unhandled rejection handling means that the library author can't (shouldn't) turn off unhandled rejection handling for all of their users, which leaves them with the only option of adding the empty catch to a branch of the promise tree. process.on('unhandledRejection', e => { /* swallow error */ }) is not a reasonable solution for the library author (unless they want to be extremely opinionated and go against what NodeJS normally does). The same goes for an environment variable.


Re: static analysis, I believe you could statically check TypeScript for programming errors where you forget to await a promise. You are probably correct that it isn't realistically possible with JavaScript.

benjamingr commented 6 years ago

Thanks for spending the time to write that and explain your use case - that's really helpful! I'll try to respond to the different parts below:

I also have followed the same pattern in Java/Kotlin and C# projects in the past (with Futures and Tasks) and in those cases it behaves as I expect (no runtime issues when I ignore or delay processing of a Promise/Future/Task).

Note that C# does throw uncaught rejections (actually, it handles them in a more severe and somewhat more complicated way with synchronization contexts - and it's quite a mess in servers). To make a long story short, in C# you can't safely just not await a task - you typically have to do something like QueueBackgroundWorkItem, which may complete and is cancellation aware. Otherwise the server can (and does) just drop your work in a pretty unsafe way. C# will throw an error if you don't await a task (or use QueueBackgroundWorkItem, or take care of scheduling yourself) and a request finishes.

Java (or Kotlin) futures aren't promises and CompletableFutures are kind of a mess (but getting there) - Scala futures (like with Play) are more analogous, but I'm not sure we can use either of them as an analogy. There are also some interesting user-land libraries.

In both cases, the library is intentionally not catching the promise because it is possible that no one cares about a failure

In your code above - what happens if a fetch fails with an error that gets replaced before someone observes the error? That is, this.fetch threw (due to the http client failing to release a connection, for example, and being in an indeterminate state) but outstandingFetch got replaced already?

Wouldn't this sort of code have a potential for a bug where the HTTP agent pool gets exhausted?

I would not alter the above code with an empty .catch(() => {}) block - I would instead do something like:

.catch(e => {
  if(isExpectedFailure(e)) return; // ignore http errors
  // code error instrumentation and close process, most likely
});

The argument I have against the .catch solution is that adding a catch on a parallel branch of the promise tree would make it so the promise cannot be inlined as it is in both cases.

Do you believe that for the "dangerous" fire and forget use case the trade off of writing 2 more lines of code isn't worth it? I'm genuinely asking given the (potential) bug in the code above - do you think Node.js should make writing this sort of code easier?

Keep in mind that the above code is a library that one could (in theory) pick up off of NPM. The current default behavior with unhandled rejection handling means that the library author can't (shouldn't) turn off unhandled rejection handling for all of their users, which leaves them with the only option of adding the empty catch to a branch of the promise tree.

There are a few other options we might want to document better:

- You can use a promise library which lets you set a local hook (like bluebird)
- You can add an unhandledRejection handler and filter out your own errors (by looking at the stack trace for instance)

I do however think that the above code isn't particularly safe, similarly to how it makes sense that an empty catch { } is opt-in (rather than opt-out).

(As a side note - I'd make PolledValue an async iterator if there is value in keeping track of it over time - and .resolve latest with a [prevValue, newValue] pair only on modifications so it's possible to only listen to modifications - with an observable being a close second to an async iterator here and a stream being a third - since a promise isn't particularly suited for multiple values)

Re: static analysis, I believe you could statically check TypeScript for programming errors where you forget to await a promise. You are probably correct that it isn't realistically possible with JavaScript.

We are also discussing (and discussed in the summit) a hook for letting you detect when you forgot to await a promise - like we provide in bluebird.
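For reference, bluebird's local hook mentioned above looks like this:

const Promise = require('bluebird');

Promise.onPossiblyUnhandledRejection((error, promise) => {
  console.warn('possibly unhandled rejection:', error); // per-library policy
});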

MicahZoltu commented 6 years ago

Do you have a link to details on the C# problem you mentioned? I haven't run into it, but it is possible that I just never ran into the problem for some other reason.

what happens if a fetch fails with an error that gets replaced before someone observes the error

The error gets replaced and no one ever observes it. This is fine, because it means no one cared that it failed.

this.fetch threw (due to the http client failing to release a connection, for example, and being in an indeterminate state) but outstandingFetch got replaced already?

This isn't an error that the caller should (or likely will) be able to handle specifically. The underlying library should be cleaning up its resources on failure (e.g., try/finally) and not expecting the caller to catch the error, switch on the error details, and then cleanup the library's dangling resources. IMO, expecting that of the user is going to lead to a world of pain and failure because library users will never do that correctly on the first try, it won't be until after there is a critical failure in a production environment and a 3 day retrospective that they even realize what happened and how to resolve it.

I would not alter the above code with an empty .catch(() => {}) block - I would instead do something like:

.catch(e => {
  if(isExpectedFailure(e)) return; // ignore http errors
  // code error instrumentation and close process, most likely
});

When building services that have strict uptime requirements, authoring your code such that it can recover from unexpected failure is important. In the above example, just because you didn't expect some particular error doesn't mean that you need to crash the process. In some cases, you can ignore the error entirely (like if you are prefetching some resource that the user never ends up needing, or you are opportunistically updating some cached value). In other cases, you can let the exception bubble up to a point in your application where you have designed things to recover from any failure. If your application, for example, is processing incoming requests and one of them fails to process, you can simply drop that entire request, log the error, and continue processing other requests. If your state is well contained, then you can rest assured that the failed request won't impact the rest of the system negatively (won't leave things in an unexpected state).

Do you believe that for the "dangerous" fire and forget use case the trade off of writing 2 more lines of code isn't worth it? I'm genuinely asking given the (potential) bug in the code above - do you think Node.js should make writing this sort of code easier?

I do believe that the trade off isn't worth it because it does a couple things:

  1. It discourages usage of these patterns by reducing the ergonomics of them. Developer ergonomics is important IMO, and enabling users to easily solve problems without having to "work around" language features is important to getting users to follow patterns most appropriate for their problem.
  2. Inlining the promise (rather than wrapping it with an empty catch) makes the code more readable (notably so if you are doing this sort of thing a lot in your codebase) and IMO readability is the number one way to reduce bugs.

You can use a promise library which lets you set a local hook (like bluebird)

Every other library out there uses A+ promises (as they should, standards are valuable). Wrapping every promise I get and provide feels like it would end up worse than the two lines for adding the empty catch. 😄

You can add an unhandledRejection handler and filter out your own errors (by looking at the stack trace for instance)

I hadn't considered that, I'll think more on it to see if I can come up with a reliable way of identifying whether the unhandled promise is mine or someone else's. Async stack traces are often not as clean/clear as one would like, but perhaps in this case they are "clear enough"?
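A sketch of that filtering idea (the package path is hypothetical):

process.on('unhandledRejection', (reason) => {
  const stack = (reason && reason.stack) || '';
  if (stack.includes('node_modules/my-polling-lib')) {
    return; // rejections from this library are intentional fire-and-forget
  }
  throw reason; // escalate everything else
});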


Overall, I get the impression that you are a fan of the "if anything goes wrong, just kill the process and start over" philosophy. The problem I have with this philosophy is that an application may be doing many things at the same time (like a web server servicing many simultaneous requests) and killing the process will throw out all of the perfectly fine/healthy requests along with the one bad one. If on the other hand your application does a good job at isolating each request's processing state then you can simply throw out the bad one and keep the rest going.

benjamingr commented 6 years ago

The error gets replaced and no one ever observes it. This is fine, because it means no one cared that it failed.

Then you have just leaked a connection - I'll ping a client to ask if I can share the details of an incident which was exactly this and took 100 node servers down last month :)

This isn't an error that the caller should (or likely will) be able to handle specifically.

Exactly, so the caller should propagate it up to whomever maybe can handle it - at some point someone might be able to handle it or maybe the process needs to be restarted but that's not a call the http library gets to make for your whole app.

When building services that have strict uptime requirements, authoring your code such that it can recover from unexpected failure is important.

Well, that's true (and fair) and indeed people have been using uncaughtException to recover from thrown errors for uptime - but consistency is no less important than uptime and leaving an app running in an inconsistent state isn't something that should be done lightly.

Overall, I get the impression that you are a fan of the "if anything goes wrong, just kill the process and start over" philosophy.

I'm going to surprise you here: Definitely not. However I am a fan of "Node.js should be consistent and provide a good debugging experience to developers". If we did not crash the process on synchronous exceptions we wouldn't need to do it on asynchronous ones. If your application does a good job at isolating each request's processing state then you do not need to worry about unhandled rejections anyway since all rejections are handled with catches.

At the moment people switching from callbacks to async functions get behaviour a lot of them consider very confusing - at the very least we want to expose the capability to throw on GC, evaluate it for one version and, if it's as useful as the unhandledRejection warnings, turn it on by default (this will go to the TSC regardless and any party interested will get a chance to - and be encouraged to - speak up).

Again - I am explicitly in favour of giving users the ability to pick whatever behaviour they want - if users want to ignore errors or rejections then they're welcome to do so 100%.

I'd like to point out that none of us are 100% sure about the way forward, we have a lot of ideas and things that may help - this proposal is the best we've come up with yet in the past 3-5 years but I am definitely in favour of a better one if someone comes up with it.

MicahZoltu commented 6 years ago

I think I would be fully satisfied with "let the user choose" if the granularity of the choice was at the library level - that is, if I could (easily/reliably) set up an unhandled promise rejection handler for all of the exceptions bubbling through or from my library, while letting the user of my library decide what to do with exceptions that don't flow through my code.

My problem with the current solution is that, as a library author, if I follow patterns like the one I shared then users of my library need to globally disable unhandled rejections, which may not jibe well with their programming style. If my library has bugs, those should be reported to me and I can suffer the pain of debugging if I decided to fully drop errors, but I don't want to force that pain onto users of my library.

I certainly appreciate the value of making debugging promises easier for developers, and in many cases people will not be engaging in aggressive prefetching, opportunistic cache updating, etc. and all warnings/crashes are actionable by the end user.

benjamingr commented 6 years ago

@MicahZoltu definitely agree with you on the choice there. Let me know what you come up with as patterns for individual libraries opting out of unhandled rejection tracking (based on the hook + detecting where the rejection is from) and how we can help with that from core.

I am on board with any solution that has both benefits: easy opt-out and a default that is better for debugging for users. While I wouldn't write code the way you did above (due to the potential issue I outlined) I definitely don't think that it's Node's place to tell people how to write such code, and I think there should be an easy way for third-party modules to opt out of it.

Thanks for weighing in - definitely an interesting perspective to consider before we turn it on.

targos commented 5 years ago

Closing this issue because the repository is being archived: https://github.com/nodejs/admin/issues/283