Open jan-ivar opened 3 years ago
Heh, turned out it was an output problem. Removing the server output gives me comparable speed: 2.3 seconds. 😊
Phew, good news. We should add the use-case then I think.
I should add that this is still much lower performance than a mock file upload over a bidirectionalStream with 16k chunks at 0.2 seconds. But there are likely JS reasons for that, e.g. I should probably use something more performant than blob.slice.
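One minimal sketch of what "something more performant than blob.slice" could look like (an assumption, not the commenter's actual code): read the file with `file.stream()` instead of repeated `blob.slice()` calls, and re-chunk the output to a fixed 16k size with a `TransformStream` before piping it into the bidirectional stream. The `file` and `transport` variables in the usage comment are hypothetical.

```javascript
// Re-chunk an incoming byte stream into fixed-size Uint8Array chunks.
// blob.slice() allocates a new Blob per chunk; file.stream() already yields
// Uint8Array chunks, so only this re-chunking step is needed.
function rechunker(chunkSize = 16 * 1024) {
  let buffered = [];      // queued input chunks not yet emitted
  let bufferedBytes = 0;  // total bytes currently queued
  return new TransformStream({
    transform(chunk, controller) {
      buffered.push(chunk);
      bufferedBytes += chunk.length;
      // Emit full-size chunks while we have enough buffered bytes.
      while (bufferedBytes >= chunkSize) {
        const out = new Uint8Array(chunkSize);
        let filled = 0;
        while (filled < chunkSize) {
          const head = buffered[0];
          const take = Math.min(head.length, chunkSize - filled);
          out.set(head.subarray(0, take), filled);
          filled += take;
          if (take === head.length) buffered.shift();
          else buffered[0] = head.subarray(take);
        }
        bufferedBytes -= chunkSize;
        controller.enqueue(out);
      }
    },
    flush(controller) {
      // Emit whatever remains as a final, possibly smaller chunk.
      if (bufferedBytes > 0) {
        const out = new Uint8Array(bufferedBytes);
        let filled = 0;
        for (const part of buffered) { out.set(part, filled); filled += part.length; }
        controller.enqueue(out);
      }
    }
  });
}

// Hypothetical browser usage (`file` is a File, `transport` an open WebTransport):
// const { writable } = await transport.createBidirectionalStream();
// await file.stream().pipeThrough(rechunker()).pipeTo(writable);
```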
Jan-Ivar said:
"If someone wants to send megabytes of data over datagrams, using their own homerolled framing scheme and back-channel for dealing with packet loss, can we support that?"
[BA] I hope so, because realtime communications use cases will depend on this (e.g. game streaming can consume upwards of 20 megabits/second).
I'll try to provide a PR to discuss for next meeting.
Datagrams are the lowest-level building blocks we expose. But are they as performant as streams?
From https://github.com/w3c/webtransport/issues/105: "imagine implementing QUIC streams (or TCP) on top of DatagramTransport."
If someone wants to send megabytes of data over datagrams, using their own homerolled framing scheme and back-channel for dealing with packet loss, can we support that?
If so, the examples on sending datagrams seem lacking here, not illustrating the necessary chunking of large data into `transport.maxDatagramSize` (with some custom framing needed obviously) and piping to datagrams using e.g. a pull-based ReadableStream to send at the max rate the user agent can do.

I did a comparison in Canary, of a mock file upload over a bidirectionalStream using a tiny chunk size matching datagrams vs. a mock file upload over datagrams without any kind of framing or packet loss recovery.
~~Uploading a 4.4 megabyte file took 2.3 seconds with the former and 17 seconds with the latter. 🤔~~
Should we add this use case to ensure this will be performant? Or do datagrams have some inherent performance limitation?