fails-components / webtransport

HTTP/3 WebTransport support for Node

Server interface #99

Closed by achingbrain 10 months ago

achingbrain commented 2 years ago

The server side is fully unspecified

https://github.com/fails-components/webtransport/pull/94#discussion_r1008704145

This is very true.

From what I can see, the expected way to process incoming requests is this:

import { Http3Server } from '@fails-components/webtransport'

const server = new Http3Server({ /* Http3ServerOpts */ })
server.startServer()

const sessionStream = await server.sessionStream('/some/path')
const sessionReader = sessionStream.getReader()

// process incoming connections
while (true) {
  const { done, value: session } = await sessionReader.read()

  if (done) {
    // server is going away?
    break
  }

  await session.ready
  // do something with session
}
  1. There's no way to tell when a server is ready to accept connections (see the listening event from net.Server)
  2. There's no way to tell what the server address is after listening (for when you listen on port 0 for example - see server.address() in net.Server)
  3. Having to await sessionReader.read() instead of responding to events means the process incoming connections loop has to be done in a separate context (via an IIFE or Promise.resolve().then())
  4. await session.ready in the process incoming connections loop is a footgun as you can't start processing the next incoming session until the current one is ready

How about adding something a bit closer to the existing HTTP2 server in node using events?

import { createServer } from '@fails-components/webtransport'

const server = createServer({ /* Http3ServerOpts */ }, (session) => {
  // do something with session
})
server.on('listening', () => {
  // the server is ready to accept connections
})
server.on('error', (err) => {
  // an error occurred while listening
})
server.listen(1234)

This can be an additional export so as to not break existing code? Am happy to make a PR.

martenrichter commented 2 years ago

How about adding something a bit closer to the existing HTTP2 server in node using events? [...] This can be an additional export so as to not break existing code? Am happy to make a PR.

No, I am not into the idea. An additional interface also means a long-term commitment to keep it up and running. The idea was that the session interface works in the same way as the stream interface inside the sessions, so that we have the same logic. Btw., since it seems that you would like to have similar interfaces for similar tasks, you may also want to look at my websocket-webtransport-ponyfill... I will answer the other points separately.

martenrichter commented 2 years ago
  1. There's no way to tell when a server is ready to accept connections (see the listening event from net.Server)

I think a promise server.ready would be the way to go to keep the spirit of the webtransport interfaces.
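
For illustration, a minimal sketch of what that could look like from the consumer side (server.ready here is the proposed promise, not something the package exposes today; construction is as in the first comment):

const server = new Http3Server({ /* Http3ServerOpts */ })
server.startServer()

// proposed: a promise that resolves once the server is accepting connections
await server.ready
console.log('server is ready to accept connections')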

martenrichter commented 2 years ago

2. There's no way to tell what the server address is after listening (for when you listen on port 0 for example - see server.address() in net.Server)

If you need it, probably something that gathers server information would be the way to go, like a getstats or getserverinfo function - promise based, of course. Or it could be communicated alongside the server.ready logic from the C++ side, but note that the values are allowed to change with QUIC, though that is not implemented.
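
A hypothetical shape for that, purely illustrative (getServerInfo and its return value are made-up names, not part of the current API):

// hypothetical promise-based query, useful e.g. after listening on port 0
const { host, port } = await server.getServerInfo()
console.log(`listening on ${host}:${port}`)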

martenrichter commented 2 years ago

3. Having to await sessionReader.read() instead of responding to events means the process incoming connections loop has to be done in a separate context (via an IIFE or Promise.resolve().then())

I think this is true, but is it a problem? You may want to look at https://github.com/fails-components/avsrouter/blob/master/src/avsrouter.js where I use the package and deal with many sessions. Not the best coding style... There, I call a function immediately when I have a new session.

martenrichter commented 2 years ago

4. await session.ready in the process incoming connections loop is a footgun as you can't start processing the next incoming session until the current one is ready

No, you just have to replace it with a call to a function that takes the session as an argument and do the awaiting inside that async function; then it behaves much like your event interface. If you do not want to lose the context, use a local function.
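
A sketch of that pattern, reusing the session stream API from the first comment (handleSession is a hypothetical handler; error handling is discussed further below):

const sessionStream = await server.sessionStream('/some/path')
const sessionReader = sessionStream.getReader()

// each session is handed off to its own async function, so the accept
// loop is never blocked waiting for session.ready
async function handleSession (session) {
  await session.ready
  // do something with session
}

while (true) {
  const { done, value: session } = await sessionReader.read()

  if (done) {
    break
  }

  // deliberately not awaited - see the discussion below about adding
  // a .catch() so a rejection cannot crash the process
  handleSession(session)
}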

achingbrain commented 2 years ago

You may want to look at fails-components/avsrouter@master/src/avsrouter.js

This line contains an unawaited async function. If it throws it will cause an unhandledPromiseRejection and crash the process. You've wrapped it in a try/catch, but that won't catch an error thrown from an unawaited async function - for that you either need to await it, which blocks the loop until the promise resolves and is the footgun I mention above, or add a .catch() to the end.

For me, easy-to-introduce bugs like these are definite signs that the await-in-a-loop pattern is not suitable for the server side and it would be better as an event emitter, like the http2 server in node core.

martenrichter commented 2 years ago

This line contains an unawaited async function. If it throws it will cause an unhandledPromiseRejection and crash the process. [...] For me, easy-to-introduce bugs like these are definite signs that the await-in-a-loop pattern is not suitable for the server side.

Ok, what about calling an unawaited async function, which calls an awaited async function wrapped in a try-catch? Does this solve the problem? And indeed I am suffering from unhandledPromiseRejection and am permanently trying to catch all of them. Also, if this is really a big problem it applies to streams and sessions as well. But if this workaround works, I think we can easily wrap all interfaces in the old-style way, including not only the promises but also the streams.

achingbrain commented 2 years ago

Ok, what about calling an unawaited async function, which calls an awaited async function wrapped in a try-catch? Does this solve the problem?

An unawaited async function that throws will crash the process so I don't think this is a good solution. It may look like it works today, but tomorrow someone may change that function and then all bets are off.

You can add a .catch() to the end of the unawaited async function and handle the error appropriately, but it's still not great as a pattern as it can cause performance problems - it's hard to know when to apply backpressure because the async function executes immediately and the calling code has no idea when it's finished.
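
In the accept loop sketched above, that would look roughly like this (illustrative only):

// dispatch without awaiting, but attach a .catch() so a rejection is
// handled instead of becoming an unhandledPromiseRejection
handleSession(session)
  .catch(err => {
    console.error('error handling session', err)
  })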

And indeed I am suffering from unhandledPromiseRejection and I am permanently trying to catch all of them

typescript-eslint has a rule that I've found incredibly useful in the past - no-floating-promises.
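
For reference, a minimal config sketch enabling that rule (assumes a TypeScript-aware setup with @typescript-eslint/parser and the plugin installed, since the rule needs type information; adjust the tsconfig path as needed):

// .eslintrc.cjs
module.exports = {
  parser: '@typescript-eslint/parser',
  parserOptions: { project: './tsconfig.json' },
  plugins: ['@typescript-eslint'],
  rules: {
    '@typescript-eslint/no-floating-promises': 'error'
  }
}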

martenrichter commented 2 years ago

You can add a .catch() to the end of the unawaited async function and handle the error appropriately, but it's still not great as a pattern as it can cause performance problems - it's hard to know when to apply backpressure because the async function executes immediately and the calling code has no idea when it's finished.

But this sounds like a solution (thanks). Performance is probably not an issue, since these are only new clients, streams and sessions (and the main performance issues should arise in the streams). So why not add an alternative interface in this way (using a wrapper to keep maintenance low), and you should have what you want.

martenrichter commented 2 years ago

Shouldn't these issues also arise in a browser, for processing streams? Shouldn't the people designing the interface also have an idea that this could be a problem (think about the stream interface)? To me it does not sound like a unique server-side problem.

achingbrain commented 2 years ago

So why not add an alternative interface in this way (using a wrapper to keep maintenance low), and you should have what you want.

Will do 👍

Shouldn't these issues also arise in a browser, for processing streams?

Yes, but I don't think it's as big of an issue - a browser might only connect to a handful of peers whereas a node.js process running on a server could have hundreds if not thousands of concurrent connections.

martenrichter commented 2 years ago

And do I have your permission to take the test code and the upcoming interface wrapper over to the sister project... https://github.com/fails-components/webtransport-ponyfill-websocket

achingbrain commented 2 years ago

Of course!

achingbrain commented 2 years ago

I'm going to add a whole bunch more tests, particularly around failure cases. Please do feel free to use whatever you find useful.

martenrichter commented 2 years ago

Ok, I have added the tests: https://github.com/fails-components/webtransport-ponyfill-websocket/commit/702bec03f51eed93b24be871dd7b9caa9743e42a Interestingly, this implementation has a problem if the sessions are not properly closed. Also, some client-server constellations have problems closing the server; the tests have uncovered it and it will need more work.