I just pushed updated commits with the changes you suggested 😉
@djc @Ralith Here's a reminder just in case this slipped through the cracks. I'll need these changes for my work on ACK frequency (not yet blocked, though).
Still high on my list :+1:
Just a heads up: I'm working on an additional commit to close #1540
The last commit adds a `base_udp_payload_size` to `TransportConfig`, meant to configure the payload size that is guaranteed to be supported by the network. Adding it to `MtuDiscoveryConfig`, as @Ralith suggested, doesn't work here, because this is a property of the black hole detector (i.e. we want to use it even if MTU discovery is disabled).
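For reference, here is a minimal sketch of how this could be configured, assuming the setter follows `TransportConfig`'s existing set-and-chain style (the exact signature is my assumption, not necessarily the final API):

```rust
use quinn_proto::TransportConfig;

// Sketch only: `base_udp_payload_size` is the setter proposed in this PR;
// its signature is assumed here for illustration.
fn transport_for_known_good_network() -> TransportConfig {
    let mut config = TransportConfig::default();
    // Payload size the network is guaranteed to support. This feeds the
    // black hole detector, so it matters even with MTU discovery disabled.
    config.base_udp_payload_size(1452);
    config
}
```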
By the way, I felt a bit limited by the fact that we aren't using the builder pattern for `TransportConfig`, with a `build` function or some such in which to do validation. To keep things user-friendly, I decided to automatically raise `initial_max_udp_payload_size` when `base_udp_payload_size` is raised (instead of crashing, which would force the user to set the values in a particular order). Have you thought about biting the bullet and introducing a builder? There is otherwise quite a bit of implicit stuff going on, like values below 1200 being ignored when passed to `initial_max_udp_payload_size`.
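To make that interaction concrete, the auto-raise behavior looks roughly like this (field names and the clamping constant are illustrative, not the actual quinn-proto internals):

```rust
// Illustrative sketch of the auto-raise interaction described above;
// the real quinn-proto fields and validation may differ.
const MIN_INITIAL_UDP_PAYLOAD_SIZE: u16 = 1200; // QUIC's minimum datagram size

pub struct TransportConfig {
    base_udp_payload_size: u16,
    initial_max_udp_payload_size: u16,
}

impl TransportConfig {
    pub fn base_udp_payload_size(&mut self, value: u16) -> &mut Self {
        self.base_udp_payload_size = value;
        // Automatically raise the initial size if needed, so callers don't
        // have to set the two values in a particular order.
        self.initial_max_udp_payload_size = self.initial_max_udp_payload_size.max(value);
        self
    }

    pub fn initial_max_udp_payload_size(&mut self, value: u16) -> &mut Self {
        // Values below 1200 are effectively ignored (clamped to the minimum).
        self.initial_max_udp_payload_size = value.max(MIN_INITIAL_UDP_PAYLOAD_SIZE);
        self
    }
}
```

A builder with a single `build()` step could validate the pair in one place instead, at the cost of a breaking API change.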
Thanks for the review! Just pushed changes addressing all comments 😉
Just FYI, I'd like to finish the ACK frequency PR no later than next week (it is not yet 100% certain whether funding will be extended beyond that period), so it would be helpful if this could get merged soon (the changes related to congestion control are necessary for ACK frequency).
Just amended the relevant commits to apply @djc's suggestions
Thanks!
This PR introduces a few self-contained improvements to make PLPMTUD work better.
We will probably want to discuss the API impact of the changes to the congestion controller code (a rough sketch of the resulting trait surface follows below):

- A new `on_mtu_update` function on the `Controller` trait
- `max_datagram_size` removed from all controller configs (the current MTU is used now instead)
- `minimum_window` removed from all controller configs (it is now calculated on demand based on the current MTU)

With all this in place, running the top-level `bench` crate with an MTUD upper bound of 9000 results in sending rates of ~900 MiB/s (on my machine ™).
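For discussion, here is roughly the trait surface implied by the list above (`window` already exists on the trait; the `on_mtu_update` signature and the helper are assumptions for illustration, not the merged code):

```rust
// Sketch of the congestion Controller trait after this PR, based on the
// summary above; signatures are assumed for discussion purposes.
pub trait Controller {
    /// Notifies the controller that the current MTU changed (raised by MTU
    /// discovery or lowered by black hole detection), so it can rescale any
    /// MTU-derived state.
    fn on_mtu_update(&mut self, new_mtu: u16);

    /// The current congestion window, in bytes.
    fn window(&self) -> u64;
}

/// With `minimum_window` gone from the configs, the floor can be derived on
/// demand from the live MTU, e.g. the conventional two full-size datagrams.
fn minimum_window(current_mtu: u16) -> u64 {
    2 * u64::from(current_mtu)
}
```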