ricea / websocketstream-explainer

Explainer for the WebSocketStream JavaScript API
Apache License 2.0

Better spec for how close() / cancel() and the closing promise should act #15

Open etodanik opened 2 years ago

etodanik commented 2 years ago

How is close() expected to behave if the user stopped reading the ReadableStream but there are a few messages in the queue up for grabs?

If you try to implement this spec with a simple ReadableStream, you can end up with unread messages that the stream never pulled from the underlying source because the user hasn't requested them via read(). The stream then never gets to reading the closing handshake, because the end user still hasn't pulled the latest message available in the ReadableStream.
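A minimal sketch of the problem using a plain ReadableStream (not the real WebSocketStream internals; the frame names are hypothetical): with the default highWaterMark of 1, the stream stops calling pull() once a single unread chunk sits in its queue, so a close frame queued behind ordinary messages is never pulled from the underlying source unless the consumer keeps reading.

```javascript
// Hypothetical wire data: two messages, then the closing handshake.
const frames = ['message-1', 'message-2', 'CLOSE_FRAME'];
let pulls = 0;
let sawCloseFrame = false;

const stream = new ReadableStream({
  pull(controller) {
    pulls += 1;
    const frame = frames.shift();
    if (frame === 'CLOSE_FRAME') {
      sawCloseFrame = true; // the closing handshake could proceed here
      controller.close();
    } else {
      controller.enqueue(frame);
    }
  },
}); // default queuing strategy: highWaterMark = 1

// The stream pulls once to fill its internal queue, then waits. Unless the
// consumer calls read(), pull() never fires again, sawCloseFrame stays
// false, and the close frame sits unread below the stream.
```

Draining the stream with repeated read() calls eventually reaches the close frame, which is exactly the "fast forward the reader" burden described below.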

For example, the current Deno implementation (which is rightfully hidden behind an --unstable flag) is broken: if you have more than one incoming message in the queue and you don't intend to read them before or after calling close(), the closing promise will essentially remain unresolved. Here is an example of this behavior: https://github.com/denoland/deno/issues/15616

I think the spec needs to be clearer about the mechanics of how to do this, to prevent parties that implement this functionality from running into the same issue and getting too divergent on how they solve this problem.

For example:

  1. You could say it's the user's responsibility to explicitly fast-forward the reader until the close frame message arrives.
  2. You could fill up the controller queue "for the user" even if they never read it (I guess this is kind of my preferred way?).
  3. How do you react if the server you're interfacing with isn't behaving according to spec? The RFC says to wait "a reasonable amount of time" and then disconnect (see the close-frame section of RFC 6455). What is this reasonable amount of time? Should it be configurable in WebSocketStreamOptions?
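Option 2 can be sketched with a plain ReadableStream (hypothetical names throughout; this is not spec text): when close() is called, flush every frame already received from the network into the stream's queue, ignoring backpressure, so the closing handshake can proceed even if the consumer never reads them.

```javascript
// FakeSocketSource stands in for the layer between the network and the
// ReadableStream. `received` is a buffer of frames that arrived but were
// not yet pulled because of backpressure.
class FakeSocketSource {
  constructor(receivedFrames) {
    this.received = receivedFrames;
  }
  start(controller) {
    this.controller = controller; // keep a handle for the close() path
  }
  pull(controller) {
    // Normal path: hand over one frame per pull, respecting backpressure.
    if (this.received.length > 0) controller.enqueue(this.received.shift());
  }
  // Hypothetical close() path: drain the buffer regardless of the stream's
  // desiredSize, then close the stream so the handshake can complete.
  flushAndClose() {
    while (this.received.length > 0) {
      this.controller.enqueue(this.received.shift());
    }
    this.controller.close();
  }
}

const source = new FakeSocketSource(['a', 'b', 'c']);
const stream = new ReadableStream(source);
source.flushAndClose(); // the queue is force-filled and the stream closes
```

After flushAndClose(), a reader can still retrieve the buffered messages, and its `closed` promise resolves without the consumer being forced to read first.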

I think this part is trickier than it seems, and it would benefit from a higher-resolution description in the spec itself so that every implementation behaves the same way. Right now it leaves too much room to the imagination.

@ricea @tomayac

tomayac commented 2 years ago

@ricea is the spec owner. From my DevRel point of view, I agree with you that option 2 looks like the way most developers would expect it to behave.

ricea commented 2 years ago

Yes, this is mildly broken in Chrome too. If the WebSocket is blocked due to backpressure then we'll never read the Close frame.

Forcing the user to read from the ReadableStream is effectively what we do, but it goes against the principle of the close() method that it should close the socket no matter what.

Enqueuing unread messages when close() is called, disregarding backpressure, seems generally like a good solution, but the edge case where the server just keeps sending more data forever worries me.

For point 3., are you referring to the

   server MUST close the underlying TCP
   connection immediately; the client SHOULD wait for the server to
   close the connection but MAY close the connection at any time after
   sending and receiving a Close message, e.g., if it has not received a
   TCP Close from the server in a reasonable time period.

section from RFC 6455? Chrome uses a timeout of 2 seconds before closing the connection itself (many servers seem to get this wrong). If we make it configurable, I worry that everyone will just set it to zero, defeating the purpose of avoiding the client getting stuck in the TIME_WAIT state.
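The behaviour described above can be sketched as a race between the server's TCP close and a timer (2000 ms mirrors the Chrome default mentioned here; `tcpClosed` is a hypothetical hook for "the server closed the underlying TCP connection", not a real API):

```javascript
// After sending and receiving the Close frame, wait for the server to close
// the TCP connection, but give up after timeoutMs and close it ourselves.
async function awaitServerClose(tcpClosed, timeoutMs = 2000) {
  const TIMED_OUT = Symbol('timeout');
  const timer = new Promise((resolve) =>
    setTimeout(() => resolve(TIMED_OUT), timeoutMs),
  );
  const winner = await Promise.race([tcpClosed, timer]);
  // If the server never closed the connection in time, the client closes it.
  return winner === TIMED_OUT ? 'closed-by-client' : 'closed-by-server';
}
```

Closing from the client side is what risks TIME_WAIT, which is why the timeout exists at all rather than closing immediately.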

etodanik commented 2 years ago

Yes, from what I can see, enqueuing unread messages is the best compromise. It's exactly what I did in the PR I contributed to Deno to fix this behaviour, and it resolved my use case nicely.

I see the worry about everyone setting it to zero. Two seconds seems like a reasonable default; five seconds is the maximum recommended for most timeouts in the RFC.

Can we get away with computing the effective timeout as something like Math.max(2000, options.closeTimeout)? Basically meaning it can only be customised UPWARDS and not downwards (in case someone wants to properly close things with a server that has the behaviour of sending tons of data before the close frame and may take a while)?

This would solve the worry about the server just sending unlimited data while generally working well with RFC compliant servers.
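A sketch of the upwards-only clamp (closeTimeout is a hypothetical WebSocketStreamOptions field; note that Math.max, not Math.min, gives the "upwards only" behaviour, since Math.min would cap the timeout downwards instead):

```javascript
const DEFAULT_CLOSE_TIMEOUT_MS = 2000;

// The effective timeout can never drop below the 2-second default, but a
// caller expecting a slow server may raise it.
function effectiveCloseTimeout(options = {}) {
  const requested = options.closeTimeout ?? DEFAULT_CLOSE_TIMEOUT_MS;
  return Math.max(DEFAULT_CLOSE_TIMEOUT_MS, requested);
}

effectiveCloseTimeout({ closeTimeout: 500 });   // 2000 — cannot go lower
effectiveCloseTimeout({ closeTimeout: 10000 }); // 10000 — can go higher
```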

etodanik commented 1 year ago

@ricea Just pinging here: we're using Deno's version of WebSocketStream in production and are highly interested in seeing this resolved.

ricea commented 1 year ago

Sorry. I haven't had time to work on this. If you happen to have time to draft a PR to specify the behaviour, that would be a big help.