w3c / webtransport

WebTransport is a web API for flexible data transport
https://w3c.github.io/webtransport/

Improve Server→Client Stream Performance by Allowing Customizable Concurrency Limits in WebTransport #544

Closed cybersoulK closed 8 months ago

cybersoulK commented 1 year ago

When using WebTransport to send multiple streams simultaneously, there appears to be a performance bottleneck. Specifically, sending 10,000 small unidirectional streams concurrently results in a delay of approximately 5 seconds for the streams to arrive, which is far from the expected near-instantaneous delivery.

My use case: as a game developer, I would rather spawn one stream per object spawn than manually group them in an additional system. This problem should be resolved at the transport level, because the messages are independent of each other.

Proposed Enhancement: To address this performance issue, it would be beneficial to add setMaxConcurrentUniStreams and setMaxConcurrentBiStreams functions. This would allow developers to customize the maximum number of streams that can be open concurrently.
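A rough sketch of how such setters might be used (the setter names are this proposal, not existing API; the URL is a placeholder):

```js
// Hypothetical API shape from this proposal; neither setter exists in the spec today.
const wt = new WebTransport("https://game.example.com:4433/wt"); // placeholder URL
await wt.ready;

// Ask the user agent to advertise more stream-concurrency credit to the server.
wt.setMaxConcurrentUniStreams(10000); // burst of small server->client streams
wt.setMaxConcurrentBiStreams(100);
```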

martinthomson commented 1 year ago

What software (server, client) is your test using?

We have discussed having some means to signal to the browser to raise concurrency limits for a session. Servers should be able to scale their concurrency accordingly.

martinthomson commented 1 year ago

See #446.

cybersoulK commented 1 year ago

@martinthomson

I built my own server with quinn.rs, and a WebTransport client.

#446 seems to propose a dynamic incremental algorithm, which would not work for me.

In my application, the server sends many small streams, each carrying a single message (~120 bytes). These are sent in a sudden burst, to initialize all of the independent entities in the ECS world.
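For illustration, roughly how the client consumes that burst with the current API (a sketch; onEntity is a placeholder callback):

```js
// Read a burst of small server-initiated unidirectional streams,
// one ~120-byte entity snapshot per stream.
async function readEntitySnapshots(wt, onEntity) {
  const streams = wt.incomingUnidirectionalStreams.getReader();
  for (;;) {
    const { value: stream, done } = await streams.read();
    if (done) break; // session closed
    const reader = stream.getReader();
    for (;;) {
      const { value: chunk, done: streamDone } = await reader.read();
      if (streamDone) break; // sender closed the stream after its single message
      onEntity(chunk); // for streams this small, usually the whole message at once
    }
  }
}
```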

My proposal was to let the client decide the limit as a setting that should match the application requirements. I personally would set 10,000 streams (my application's burst spike).

cybersoulK commented 1 year ago

I understand that some applications can slowly increment their needs, and pre-cache and reuse streams.

But for other real-time applications such as games, we want to open a stream, send one independent reliable message, and close it, without having to worry about the total stream count.
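For illustration, this is the per-message pattern I mean, using the current API (a sketch; sendEntityMessage is a placeholder name):

```js
// One reliable message per entity: open a unidirectional stream, write once, close.
async function sendEntityMessage(wt, bytes) {
  const stream = await wt.createUnidirectionalStream();
  const writer = stream.getWriter();
  await writer.write(bytes); // e.g. a ~120-byte Uint8Array
  await writer.close();      // FIN: the stream carries exactly one message
}
```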

(my hacky transport layer over datagrams works better than uni-streams at the moment. 😔 )

martinthomson commented 1 year ago

Yeah, I don't think that dynamic response is necessarily a good plan, especially when you know, ahead of time, what you need. That's why I think we should have a means to ask the browser to open up its limits.

aboba commented 1 year ago

@cybersoulK Do you have a small code sample that can reproduce the issue? We want to verify that this relates to the maximum stream limits, not other issues.

cybersoulK commented 1 year ago

@aboba my code is in Rust and is not open source (and it would take considerable effort to create an MVP).

I implemented my code using quinn, and the native client works as expected when setting:

```rust
// The quinn setter takes a VarInt, hence the explicit conversion.
connection.set_max_concurrent_uni_streams(10_000u32.into());
```

I don't see this API in the WebTransport JavaScript API, which is why I thought this might be the issue.

jan-ivar commented 1 year ago

Meeting:

jan-ivar commented 1 year ago

A constructor only lets you set it once though. Does it need to be an attribute so the application can change it?

```js
const wt = new WebTransport(url, {maxConcurrentReceiveStreams: 10000});
console.log(wt.maxConcurrentReceiveStreams); // 10000
wt.maxConcurrentReceiveStreams = 20000;
wt.maxConcurrentReceiveStreams = null;
```

jan-ivar commented 1 year ago

I've renamed this issue to focus on server→client in contrast to #446.

jan-ivar commented 1 year ago

Meeting:

jan-ivar commented 1 year ago

Regarding this being a "hint", I'd rather specify a bit more detail to aid WPT testing of this. I think a good precedent to follow here is jitterBufferTarget, which means we should probably define a minimum and maximum range that we throw on.

How about [100, 100000]?

martinthomson commented 1 year ago

I would think that a minimum of 0 is fine, if potentially not very useful. The maximum should probably be implementation-defined, but setting a minimum value that an implementation is required to support is worthwhile.

Allowing 100k is a pretty big commitment. Streams take up more space than jitter buffer samples. And they will be kept outside of the space that we attribute to the site. I'd prefer if it were smaller, but would be OK if we ended up with 100k.

wilaw commented 1 year ago

Responses received from IETF #118.

Marten Seemann: Making it configurable seems reasonable, is there any API to see what the max is?

Will: There’s no max API, would you like one?

Marten: Maybe :)

Jan-Ivar: Most of this is on the server side, so it wouldn't impact what you'd be able to create client side. There's a slight difference depending on whether it came from the server or the client, and what direction it is.

Will: Inconclusive feedback, but we appreciate the time.

jan-ivar commented 1 year ago

> Allowing 100k is a pretty big commitment.

For sure. I didn't mean to imply a commitment, only a max insane level we all agree to throw on in the API. The user agent would still be allowed to clamp input to its own (lower) maximum. At least that's what jitterBufferTarget does.
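A sketch of what that could look like, reusing the attribute name from the earlier example (the throw-then-clamp behavior is assumed, mirroring jitterBufferTarget; nothing here is specified yet):

```js
// Assumed semantics: values outside the agreed range throw, while the user agent
// may still clamp accepted values to its own (possibly lower) internal maximum.
const wt = new WebTransport(url, { maxConcurrentReceiveStreams: 10000 });

try {
  wt.maxConcurrentReceiveStreams = 500000; // above the agreed maximum: throws
} catch (e) {
  console.log(e.name); // RangeError (assumed)
}

wt.maxConcurrentReceiveStreams = 50000; // accepted, but the UA may clamp it lower
```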

wilaw commented 12 months ago

"max" implies rejection above a certain limit. This is intended as more of a general hint to the UA. For bikeshedding, how about:

```js
const wt = new WebTransport(url, {
  expectedConcurrentIncomingUnidirectionalStreams: 10000,
  expectedConcurrentIncomingBidirectionalStreams: 30
});
```

A valid range for testing might be [0 .. 20000]