quinn-rs / quinn

Async-friendly QUIC implementation in Rust
Apache License 2.0

Add TransportConfig::max_backoff_exponent #1779

Closed: hrntknr closed this 3 months ago

hrntknr commented 3 months ago

The growth of the PTO period can now be adjusted by setting TransportConfig::max_backoff_exponent. ref: RFC 9002 §6.2.4

Ralith commented 3 months ago

Can you elaborate some on the motivation for this? Should we consider e.g. an absolute time threshold rather than a relative one?

hrntknr commented 3 months ago

Thanks for the review.

The background is that in long-lived sessions, I would like to detect a client's return to communication as soon as possible. For example, if a client goes to sleep with a session open, the current implementation gives no control over the PTO duration, so it is impossible to set idle_timeout to a long (or unlimited) period and still have the client's return detected within a realistic time. (I do have to allow for the load on the network and server, though.)

I may be wrong, though.

> Should we consider e.g. an absolute time threshold rather than a relative one?

I think the answer is NO for this use case.

(However, when the client goes into a long sleep, a UDP timeout occurs on the client side and the connection seems to be re-established on return. I will investigate this a bit, as connection migration does not seem to work well in these cases.)

Ralith commented 3 months ago

Could you use a more normal, short, idle timeout and let clients create new connections under those conditions? Alternatively, could you arrange for clients to proactively send a message after resuming from a long sleep?
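The two alternatives suggested here can both be expressed through Quinn's TransportConfig. This is an illustrative fragment, assuming Quinn's `max_idle_timeout` and `keep_alive_interval` setters; the specific durations are placeholder values, not recommendations:

```rust
use std::time::Duration;
use quinn::{IdleTimeout, TransportConfig, VarInt};

let mut transport = TransportConfig::default();
// A "normal, short" idle timeout: silent connections are dropped after 30s
// and a client waking from sleep simply establishes a new connection.
transport.max_idle_timeout(Some(IdleTimeout::from(VarInt::from_u32(30_000))));
// On the client, keep-alives ensure a healthy connection never hits that
// timeout; they also serve as the proactive "I'm back" message after resume.
transport.keep_alive_interval(Some(Duration::from_secs(10)));
```

The keep-alive interval must be comfortably shorter than the peer's idle timeout for this to work.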

> I think the answer is NO for this use case.

Why?

> when the client goes into long sleep, UDP timeout is performed on the client side

I'm not sure what this means. UDP doesn't really have timeouts.

> connection migration does not seem to work well at these cases

What does migration have to do with this? Are you talking about how NAT bindings will probably be lost if a client disappears?

hrntknr commented 3 months ago

> Could you use a more normal, short, idle timeout and let clients create new connections under those conditions? Alternatively, could you arrange for clients to proactively send a message after resuming from a long sleep?

Yes, that is a more appropriate approach. Agreed.

write_all should await until the data is sent over UDP, but it does not wait until an ACK is received. Is there any way to know which packets have been received (i.e. whether an ACK response has arrived)? Or would this need to be implemented at the application layer?

> I think the answer is NO for this use case.

> Why?

In this use case drops occur, but reorders are infrequent, so I thought the time threshold did not need to be taken into account... but maybe I just don't understand the protocol well enough.

> What does migration have to do with this? Are you talking about how NAT bindings will probably be lost if a client disappears?

I tried to reproduce the behavior by rewriting packets with a tc filter, and it does not seem to recur (connection migration succeeded). Please disregard that point for now.

Ralith commented 3 months ago

> Yes, that is a more appropriate approach. Agreed.

Great; no changes are needed to Quinn, then.

> Is there any way to know which packets have been received (whether or not an ack response has been received)? Would this need to be implemented at the application layer?

This must be implemented at the application layer, because whether an ACK has been received or not doesn't really tell you anything useful. You presumably need to know whether your application has processed some data, which the transport layer has no insight into.