Closed achingbrain closed 10 months ago
How about adding something a bit closer to the existing HTTP2 server in node using events?
```javascript
import { createServer } from '@fails-components/webtransport'

const server = createServer({ /* Http3ServerOpts */ }, (session) => {
  // do something with session
})
server.on('listening', () => {
  // the server is ready to accept connections
})
server.on('error', (err) => {
  // an error occurred while listening
})
server.listen(1234)
```
This can be an additional export so as to not break existing code? Am happy to make a PR.
No, I am not keen on the idea. An additional interface also means a long-term commitment to keep it up and running. The idea was that the session interface works the same way as the stream interface inside the sessions, so that we have the same logic. By the way, since you would like similar interfaces for similar tasks, you may also want to look at my websocket-webtransport-ponyfill. I will answer the other points separately.
1. There's no way to tell when a server is ready to accept connections (see the `listening` event from `net.Server`)
I think a promise, `server.ready`, would be the way to go to keep the spirit of the WebTransport interfaces.
2. There's no way to tell what the server address is after listening (for when you listen on port `0` for example - see `server.address()` in `net.Server`)
If you need it, something that gathers server information could be the way to go, like a `getStats` or `getServerInfo` function. But of course it should be promise based, or the information can be communicated alongside the logic with `server.ready` from the C++ side. Note that with QUIC the values are allowed to change, though that is not implemented.
3. Having to `await sessionReader.read()` instead of responding to events means the "process incoming connections" loop has to be done in a separate context (via an IIFE or `Promise.resolve().then()`)
I think this is true, but is this a problem? You may want to look at https://github.com/fails-components/avsrouter/blob/master/src/avsrouter.js where I use the package and deal with many sessions. Not the best coding style... There, I call a function immediately when I have a new session.
4. `await session.ready` in the "process incoming connections" loop is a footgun as you can't start processing the next incoming session until the current one is ready
No, you just have to replace it with a call to a function that takes the session as an argument, and then await inside this async function; it behaves much like your event interface. If you do not want to lose the context, use a local function.
You may want to look at fails-components/avsrouter@master/src/avsrouter.js
This line contains an unawaited async function. If it throws, it will cause an `unhandledPromiseRejection` and crash the process. You've wrapped it in a try/catch, but that won't catch an error thrown from an async function; for that you need to `await` it, which will block the loop until the promise resolves and is the footgun I mention above, or it needs a `.catch()` added to the end.
For me, easy-to-introduce bugs like these are definite signs that the await-in-a-loop pattern is not suitable for the server side and it would be better as an event emitter, like the http2 server in node core.
Ok, what about calling an unawaited async function which calls an awaited async function wrapped in a try/catch? Does this solve the problem? And indeed I am suffering from `unhandledPromiseRejection` and am permanently trying to catch all of them. Also, if this is really a big problem, it applies to streams and sessions as well. But if this workaround works, I think we can easily wrap all interfaces in the old-style way, covering not only the promises but also the streams.
Ok, what about calling an unawaited async function, which calls an awaited async function wrapped into a try-catch? Does this solve the problem?
An unawaited async function that throws will crash the process so I don't think this is a good solution. It may look like it works today, but tomorrow someone may change that function and then all bets are off.
You can add a `.catch()` to the end of the unawaited async function and handle the error appropriately, but it's still not great as a pattern as it can cause performance problems - it's hard to know when to apply backpressure because the async function executes immediately and the calling code has no idea when it's finished.
And indeed I am suffering from unhandledPromiseRejection and I am permanently trying to catch all of them
typescript-eslint has a rule that I've found incredibly useful in the past: `no-floating-promises`.
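For reference, enabling that rule looks roughly like this in an ESLint config (the rule needs type-aware linting, hence the `parserOptions.project` entry; paths are placeholders):

```json
{
  "parserOptions": { "project": "./tsconfig.json" },
  "plugins": ["@typescript-eslint"],
  "rules": {
    "@typescript-eslint/no-floating-promises": "error"
  }
}
```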
You can add a `.catch()` to the end of the unawaited async function and handle the error appropriately, but it's still not great as a pattern as it can cause performance problems - it's hard to know when to apply backpressure because the async function executes immediately and the calling code has no idea when it's finished.
But this sounds like a solution (thanks). Performance is probably not an issue, since these are only new clients, streams and sessions (the main performance issues should arise in the streams). So why not add an alternative interface in this way (using a wrapper to keep maintenance low), and you should have what you want.
Should these issues not also arise in a browser, for processing streams? Should the people designing the interface not also have anticipated that this could be a problem (think about the stream interface)? To me it does not sound like a unique server problem.
So why not add an alternative interface in this way (using a wrapper to keep mainteance low) and you should have what you want.
Will do 👍
Should not these issues also arise on a browser, for processing streams?
Yes, but I don't think it's as big of an issue - a browser might only connect to a handful of peers whereas a node.js process running on a server could have hundreds if not thousands of concurrent connections.
And do I have your permission to take the test code and the coming interface wrapper to the sister project... https://github.com/fails-components/webtransport-ponyfill-websocket
Of course!
I'm going to add a whole bunch more tests, particularly around failure cases, please do feel free to use whatever you find useful.
Ok, I have added the tests: https://github.com/fails-components/webtransport-ponyfill-websocket/commit/702bec03f51eed93b24be871dd7b9caa9743e42a Interestingly, this implementation has a problem if the sessions are not properly closed. Also, some client/server constellations have problems closing the server; the tests have uncovered it and it will need more work.
https://github.com/fails-components/webtransport/pull/94#discussion_r1008704145
This is very true.
From what I can see, the expected way to process incoming requests is this: