quicwg / multipath

In-progress version of draft-ietf-quic-multipath

Discussion: Remove ACK_MP frame from draft-ietf-quic-multipath? #271

Closed. iyangsj closed this issue 1 month ago.

iyangsj commented 11 months ago

Briefly, the argument is as follows:

Perhaps a better approach would be to follow the KISS principle:

LPardue commented 11 months ago

Seems like a dupe of https://github.com/quicwg/multipath/issues/181, which was closed. Have you got any additional points that were not already addressed there?

iyangsj commented 11 months ago

@LPardue Thank you for the helpful pointer. It is challenging to achieve a simpler and more elegant solution without making some trade-offs on requirements and suitable changes to the conceptual model. I will provide a detailed description later.

LPardue commented 11 months ago

Got it. To be clear, since #181 is now resolved, it reflects the established consensus of the group. Therefore, the bar for reconsidering that consensus and making any changes is going to be higher. We'd be looking for convincing new arguments, presented in a timely manner, in order to revisit this matter.

iyangsj commented 11 months ago

The conceptual models of QUIC and MPQUIC connections are similar, because NAT rebinding occurs in both cases (as illustrated in the figure below). However, a QUIC connection is a simplified version of an MPQUIC connection: it consists of exactly one master path, whereas an MPQUIC connection can include one or more master paths.

[Figure: MPQUIC conceptual model]

(Note: The Initial/Handshake packet number spaces, which are discarded after the handshake is complete, have been omitted for clarity.)
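To make the comparison concrete, here is a minimal type sketch (Rust, with names invented for illustration, not taken from the draft): a plain QUIC connection is simply the single-path special case of a multipath connection. How packet number spaces map onto paths is exactly what this thread debates, so the sketch leaves that out.

```rust
use std::net::SocketAddr;

// Hypothetical types: a connection owns one or more paths, and a
// plain QUIC connection is the one-path special case.
#[derive(Debug)]
struct Path {
    local: SocketAddr,
    remote: SocketAddr, // may change silently under NAT rebinding
}

#[derive(Debug)]
struct Connection {
    paths: Vec<Path>, // exactly one for QUIC; one or more for MPQUIC
}

fn main() {
    let quic_like = Connection {
        paths: vec![Path {
            local: "10.0.0.1:4433".parse().unwrap(),
            remote: "192.0.2.7:443".parse().unwrap(),
        }],
    };
    println!("single-path connection: {quic_like:?}");
}
```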

The proposed multipath mechanism is mostly consistent with draft-ietf-quic-multipath, but some differences are emphasized as follows:

Advantages of the proposed approach:

mirjak commented 9 months ago

In issue #181 it was already decided that it is better to be explicit and use ACK_MP. This also enables sending ACKs over different paths, which is seen as a feature, e.g. always using the lowest-latency path. I don't see any new issue or concern raised here. Therefore I propose to close this issue with no action.
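As a rough illustration of that feature, the sketch below picks the lowest-latency path on which to send ACK_MP frames. `PathStats` and `pick_ack_path` are hypothetical names, not from the draft or any implementation.

```rust
// Hypothetical per-path state; smoothed RTT in milliseconds.
#[derive(Debug)]
struct PathStats {
    path_id: u64,
    smoothed_rtt_ms: f64,
}

// Return the id of the lowest-latency path, if any path exists.
fn pick_ack_path(paths: &[PathStats]) -> Option<u64> {
    paths
        .iter()
        .min_by(|a, b| a.smoothed_rtt_ms.total_cmp(&b.smoothed_rtt_ms))
        .map(|p| p.path_id)
}

fn main() {
    let paths = vec![
        PathStats { path_id: 0, smoothed_rtt_ms: 48.0 },
        PathStats { path_id: 1, smoothed_rtt_ms: 12.5 },
    ];
    assert_eq!(pick_ack_path(&paths), Some(1)); // path 1 carries the ACKs
}
```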

iyangsj commented 9 months ago

> In issue https://github.com/quicwg/multipath/issues/181 it was already decided that it is better to be explicit and use ACK_MP. This also enables sending ACKs over different paths, which is seen as a feature, e.g. always using the lowest-latency path.

The existing design choice for ACK_MP is rooted in the per-path packet number space (per-path-per-PNS) model. Under this model, there seems to be no better design than what we currently have.

> I don't see any new issue or concern raised here.

The present solution introduces unnecessary issues, such as ACK ambiguity, complicated RTT estimation, and a large deviation from single-path QUIC. Even worse, some algorithms that rely on ACKs arriving on the same path as the data might not perform well.

If we properly design the conceptual model, we can find a simpler and more feasible design that avoids these extra problems.
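To illustrate the RTT concern with made-up numbers: when data leaves on one path and the ACK_MP returns on another, the measured sample mixes the two paths' one-way delays, and the sender cannot attribute it to either path.

```rust
// Made-up one-way delays, purely for illustration.
fn main() {
    let fwd_path1_ms = 40.0; // data packet out on path 1
    let rev_path2_ms = 5.0; // ACK_MP back on path 2
    // The 45 ms sample is a true RTT of neither path.
    let rtt_sample = fwd_path1_ms + rev_path2_ms;
    println!("ambiguous RTT sample: {rtt_sample} ms");
}
```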

iyangsj commented 9 months ago

CASE A:

[Figure: CASE A]

The simplest implementation is to compute smoothed_rtt and rttvar per Section 5.3 of [QUIC-RECOVERY], regardless of the path through which ACK_MP frames are received. This algorithm will provide good results, except if the set of paths changes and the ACK_MP sender revisits its sending preferences.
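A minimal sketch of that path-agnostic computation, using the smoothed_rtt/rttvar recurrences from RFC 9002 Section 5.3 (the ack_delay and min_rtt adjustments are omitted for brevity):

```rust
// One estimator for the whole connection, fed by every RTT sample
// regardless of which path the ACK_MP arrived on.
struct RttEstimator {
    smoothed_rtt: f64, // milliseconds
    rttvar: f64,
    has_sample: bool,
}

impl RttEstimator {
    fn new() -> Self {
        Self { smoothed_rtt: 0.0, rttvar: 0.0, has_sample: false }
    }

    fn on_rtt_sample(&mut self, latest_rtt: f64) {
        if !self.has_sample {
            // First sample seeds the estimator (RFC 9002 Section 5.3).
            self.smoothed_rtt = latest_rtt;
            self.rttvar = latest_rtt / 2.0;
            self.has_sample = true;
        } else {
            // rttvar = 3/4 * rttvar + 1/4 * |smoothed_rtt - latest_rtt|
            self.rttvar =
                0.75 * self.rttvar + 0.25 * (self.smoothed_rtt - latest_rtt).abs();
            // smoothed_rtt = 7/8 * smoothed_rtt + 1/8 * latest_rtt
            self.smoothed_rtt = 0.875 * self.smoothed_rtt + 0.125 * latest_rtt;
        }
    }
}

fn main() {
    let mut est = RttEstimator::new();
    for sample in [50.0, 52.0, 48.0, 51.0] {
        est.on_rtt_sample(sample);
    }
    println!("smoothed_rtt={:.2} ms rttvar={:.2} ms", est.smoothed_rtt, est.rttvar);
}
```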

iyangsj commented 9 months ago

CASE B:

[Figure: CASE B]

The simplest implementation is to compute smoothed_rtt and rttvar per Section 5.3 of [QUIC-RECOVERY], regardless of the path through which ACK_MP frames are received. This algorithm will provide good results, except if the set of paths changes and the ACK_MP sender revisits its sending preferences.
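For contrast, here is a per-path variant of the same sketch (same RFC 9002 recurrences, illustrative types): keeping one estimate per receiving path means that when the ACK_MP sender revisits its sending preferences, only that path's estimate is perturbed.

```rust
use std::collections::HashMap;

fn main() {
    // path_id -> (smoothed_rtt, rttvar), in milliseconds.
    let mut per_path: HashMap<u64, (f64, f64)> = HashMap::new();
    // (receiving path, RTT sample) pairs, made up for illustration.
    for (path_id, sample) in [(1u64, 12.5f64), (2, 48.0), (1, 13.0)] {
        per_path
            .entry(path_id)
            .and_modify(|(srtt, rttvar)| {
                // Subsequent samples: RFC 9002 Section 5.3 recurrences.
                *rttvar = 0.75 * *rttvar + 0.25 * (*srtt - sample).abs();
                *srtt = 0.875 * *srtt + 0.125 * sample;
            })
            // First sample on a path seeds its estimator.
            .or_insert((sample, sample / 2.0));
    }
    println!("per-path estimates: {per_path:?}");
}
```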

mirjak commented 1 month ago

Considerations for ACK scheduling and the impact on congestion control are discussed in Section 6.3, Computing Path RTT. Further, issue #77 notes that it might be useful for a future extension to provide guidance on ACK scheduling (frequency as well as which path to use). However, it was decided that this can be left to an extension. Therefore I'm closing this issue now. Thanks for the detailed discussion!