quinn-rs / quinn

Async-friendly QUIC implementation in Rust
Apache License 2.0

Configure receive window per connection #1386

Closed · lijunwangs closed this 2 years ago

lijunwangs commented 2 years ago

Problem: Currently the receive_window on the server side is obtained from the ServerConfig stored in the Endpoint, which makes all connections share the same receive_window, making per-connection data limits difficult.

Solution: Create interfaces to set the receive_window per connection.

This is to address https://github.com/quinn-rs/quinn/issues/1342 by making receive_window configurable per connection. It only handles receive_window, not stream_receive_window, which can be addressed in a separate PR.
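For illustration, the interface proposed here might be used from application code roughly as follows. The method name and VarInt-based signature follow this PR's direction, but treat this as a sketch rather than the final API:

```rust
// Illustrative use of the per-connection setter this PR proposes;
// the exact method name/signature should be treated as a sketch.
use quinn::{Connection, VarInt};

fn apply_per_connection_limit(conn: &Connection, trusted: bool) {
    // Give trusted peers a larger connection-level receive window than
    // the endpoint-wide default from TransportConfig::receive_window.
    let window: u32 = if trusted { 8 * 1024 * 1024 } else { 256 * 1024 };
    conn.set_receive_window(VarInt::from_u32(window));
}
```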

lijunwangs commented 2 years ago

Hi @Ralith, can you take a look at this?

Ralith commented 2 years ago

The relevant review from #1384 still applies. Copying here for convenience:

To be effective, set_receive_window must increase StreamsState::local_max_data by the difference between the new connection-level receive window and the old one, and set pending.max_data = true. If the window shrinks, then instead local_max_data must not be changed, but the difference in credit passed to add_read_credits in future calls should be silently absorbed.

To limit the risk of drift, let's cite the methods on TransportConfig (with intra-doc links) rather than duplicating their docs.
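To make the mechanics concrete, here is a minimal sketch of the grow/shrink split described in this review. The names local_max_data and receive_window are taken from the review; excess_window is a hypothetical field added for the shrink case, and the real quinn-proto state is considerably richer:

```rust
// Sketch only: a minimal model of the state named in the review above.
struct StreamsState {
    receive_window: u64, // configured connection-level window
    local_max_data: u64, // cumulative MAX_DATA limit advertised to the peer
    excess_window: u64,  // hypothetical: un-absorbed reduction from past shrinks
}

impl StreamsState {
    /// Returns true when the caller should set `pending.max_data = true`
    /// so a MAX_DATA frame announcing the larger limit gets sent.
    fn set_receive_window(&mut self, new: u64) -> bool {
        let grew = new > self.receive_window;
        if grew {
            // Grow: immediately extend the peer's credit by the difference.
            let diff = new - self.receive_window;
            self.local_max_data = self.local_max_data.saturating_add(diff);
        } else {
            // Shrink: never retract already-advertised credit; remember the
            // difference so future read credits can silently absorb it.
            self.excess_window = self
                .excess_window
                .saturating_add(self.receive_window - new);
        }
        self.receive_window = new;
        grew
    }
}
```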

lijunwangs commented 2 years ago

The relevant review from #1384 still applies. Copying here for convenience:

To be effective, set_receive_window must increase StreamsState::local_max_data by the difference between the new connection-level receive window and the old one, and set pending.max_data = true. If the window shrinks, then instead local_max_data must not be changed, but the difference in credit passed to add_read_credits in future calls should be silently absorbed.

To limit the risk of drift, let's cite the methods on TransportConfig (with intra-doc links) rather than duplicating their docs.

Thanks @Ralith. On local_max_data when the window is shrunk: just to confirm, when you say "absorbed", does it mean that in add_read_credits we do not do the following:

self.local_max_data = self.local_max_data.saturating_add(credits)

Instead we do self.local_max_data = self.local_max_data.saturating_add(credits - shrink_diff) where shrink_diff = (old_receive_window - new_receive_window)? In addition, I think we would need to store this delayed diff and, once it is fully applied, set it to 0.

Ralith commented 2 years ago

Instead we do self.local_max_data = self.local_max_data.saturating_add(credits - shrink_diff) where shrink_diff = (old_receive_window - new_receive_window)?

Close, but some state is required, because it could take many calls to add_read_credits until the excess window size from a single reduction is absorbed. You should compute the excess window size, then, so long as that amount is nonzero, subtract incoming credit from it instead of adding to local_max_data. Immediately after it hits zero, every following credit goes back into adding to local_max_data. This will also need unit testing to verify that credit is handled correctly even when read credit increments are large, small, etc.
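Continuing the earlier sketch (same hypothetical excess_window field), the absorption might look like the following, together with the kind of unit test suggested here. This is illustrative, not the PR's actual code:

```rust
// Continues the earlier StreamsState sketch. Returns the amount by which
// MAX_DATA actually advanced (0 while a shrink is still being absorbed).
impl StreamsState {
    fn add_read_credits(&mut self, credits: u64) -> u64 {
        // Pay down any outstanding window reduction first.
        let absorbed = credits.min(self.excess_window);
        self.excess_window -= absorbed;
        // Only leftover credit widens the advertised limit.
        let remaining = credits - absorbed;
        self.local_max_data = self.local_max_data.saturating_add(remaining);
        remaining
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn shrink_is_absorbed_across_many_small_credits() {
        let mut s = StreamsState {
            receive_window: 100,
            local_max_data: 100,
            excess_window: 0,
        };
        // Shrink by 60: no MAX_DATA update should be queued.
        assert!(!s.set_receive_window(40));
        // Six reads of 10 bytes each are absorbed before MAX_DATA moves.
        for _ in 0..6 {
            assert_eq!(s.add_read_credits(10), 0);
        }
        // Once the excess hits zero, credit flows through again.
        assert_eq!(s.add_read_credits(10), 10);
    }
}
```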

lijunwangs commented 2 years ago

Hi @Ralith, on the following

To be effective, set_receive_window must increase StreamsState::local_max_data by the difference between the new connection-level receive window and the old one, and set pending.max_data = true.

Which pending are you referring to? The one in RecvStream, or the one in StreamsState? I do not see a max_data field on the pending in StreamsState.

Ralith commented 2 years ago

self.spaces[SpaceId::Data].pending in Connection, specifically.
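For orientation, a scaffolded sketch of where that flag lives. Retransmits, SpaceId, and the spaces array mirror quinn-proto's internals (which index spaces by SpaceId directly rather than via a cast); the wrapper method is hypothetical:

```rust
// Sketch: minimal scaffolding mirroring the path named above.
// Real quinn-proto types are richer; only the flag's location matters here.
struct Retransmits {
    max_data: bool, // queue a MAX_DATA frame on the next packet
}

struct PacketSpace {
    pending: Retransmits,
}

enum SpaceId {
    Initial,
    Handshake,
    Data,
}

struct Connection {
    spaces: [PacketSpace; 3],
}

impl Connection {
    // Hypothetical helper: mark MAX_DATA as pending in the Data space so
    // the next outgoing packet carries the updated connection-level limit.
    fn queue_max_data(&mut self) {
        self.spaces[SpaceId::Data as usize].pending.max_data = true;
    }
}
```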

lijunwangs commented 2 years ago

@Ralith -- I have addressed your feedback. Could you take another look? Thanks

lijunwangs commented 2 years ago

@Ralith -- a gentle reminder, can you review this again?

lijunwangs commented 2 years ago

Thanks @Ralith for your detailed reviews. Can you help merge this PR? I do not have permission to merge PRs in this repo. Thanks

Ralith commented 2 years ago

Usually I prefer to give @djc an opportunity to weigh in on nontrivial changes. Do you specifically need it merged quickly?

lijunwangs commented 2 years ago

Usually I prefer to give @djc an opportunity to weigh in on nontrivial changes. Do you specifically need it merged quickly?

Yes -- we have a project in which we plan to use this feature to set different data limits for different connections on the same endpoint, from the server side -- related to issue https://github.com/quinn-rs/quinn/issues/1342.

djc commented 2 years ago

I'll review this today.

lijunwangs commented 2 years ago

Hi @Ralith and @djc, I wonder if it is possible to merge this to the 0.8.x branch? Our application is based on the 0.8.x branch.

djc commented 2 years ago

Sure, please submit a PR to the 0.8.x branch (squashing all the changes).