Closed · dhobsd closed this issue 1 year ago
That's an interesting but somewhat complex question. The stack has to work out what to do; to me, the API helps by providing some additional context that can be used, not a specific behaviour. Do you have a proposal for more that we can say?
To answer part of my own question:
> How does one specify TCP keepalive retry policies? (Typically TCP keepalives retry some number of times before giving up.)
This would rely on setting `connTimeout`.
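For illustration, here is a minimal sketch of what that might look like through a TAPS-style API. The `taps` module, class names, and method signatures are hypothetical, invented for this example; only the property names come from the draft.

```python
# Hypothetical TAPS-style binding: the `taps` module and these signatures
# are illustrative, not a real library.
from taps import Preconnection, RemoteEndpoint, TransportProperties

remote = RemoteEndpoint(host="example.com", port=443)
props = TransportProperties()
props.require("keepAlive")    # ask for keep-alive probes to be sent
props.set("connTimeout", 30)  # seconds: abandon an unresponsive Connection
                              # after ~30s, which effectively bounds how long
                              # keep-alive retries can go on
preconn = Preconnection(remote_endpoint=remote, transport_properties=props)
connection = preconn.initiate()
```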
Regarding what more to say, I think this depends on answers to some questions that I think are kind of ambiguous. I have recommendations on direction at the end of this comment.
The text for `keepAlive` says:

> Note that when supported, the system will use the default period for generation of the keep-alive packets.
This seems somewhat at odds with TCP:

> If keep-alives are included, the application MUST be able to turn them on or off for each TCP connection (MUST-24), and they MUST default to off (MUST-25).
If I'm a networking software author, per the above, how do I express a preference for protocols that have their own keepalive timeouts while also expressing a preference for enabling or disabling TCP keepalives?
If I'm a TAPS system implementer, and I get a `Preconnection` where `keepAlive` is set to `Prohibit`, but `keepAliveTimeout` is not specified, am I allowed to choose HTTP/2 (and not participate in `PING`)? Or anything on top of TCP? What if I'm implementing a transport on top of multiple TCP connections?
If I'm a TAPS system implementer, and I get a `Preconnection` where `keepAlive` is set to `Require`, am I allowed to choose HTTP/1.1 as long as the underlying TCP connection can have keepalive enabled?
I can allow folks to solve this problem with non-standard connection properties, but given the MUST in TCP for application authors, I feel like we probably ought to specify how and where these options take effect. I get that we have options for setting UTO, but this is distinct from TCP keepalives. [UTO says]:

> [I]f a connection that enables keep-alives is also using the TCP User Timeout Option, then the keep-alive timer MUST be set to a value larger than that of the adopted USER TIMEOUT.
As a networking software author, how am I to know what the keep-alive timer is set to so that I can constrain this value?
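To make the quoted constraint concrete: since the application generally can't see the keep-alive timer, the check would presumably have to live inside the TAPS system. A minimal sketch, with invented names and values:

```python
# Sketch of the RFC 5482 rule quoted above: when keep-alives and the TCP User
# Timeout Option are both in use, the keep-alive timer must be larger than
# the adopted USER TIMEOUT.
def check_keepalive_vs_uto(keepalive_timer_s: int, adopted_user_timeout_s: int) -> None:
    if keepalive_timer_s <= adopted_user_timeout_s:
        raise ValueError(
            f"keep-alive timer ({keepalive_timer_s}s) must be larger than "
            f"the adopted USER TIMEOUT ({adopted_user_timeout_s}s)"
        )

check_keepalive_vs_uto(keepalive_timer_s=7200, adopted_user_timeout_s=300)  # OK
```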
At a minimum, I think specifying the way these properties are intended to apply to protocol selection is necessary. The wording I'd suggest depends on the answers to the questions around how `keepAlive` interacts with `Require` and `Prohibit`.
I think it makes sense to have TCP-specific Connection Properties for observing and controlling TCP keepalives separately from the `keepAlive` / `connTimeout` / `keepAliveTimeout` Properties. I would recommend adding something like:
- `tcpKeepAliveEnabled`
- `tcpKeepAliveInterval`
- `tcpKeepAliveCount`
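To sketch the intent (using the same hypothetical `TransportProperties` API shape as above; only the property names follow this proposal):

```python
from taps import TransportProperties  # hypothetical binding, as above

# "Some keep-alive, but not TCP's": permits e.g. HTTP/2 with PING over TCP
# while leaving TCP's own keepalives off.
props = TransportProperties()
props.require("keepAlive")
props.set("tcpKeepAliveEnabled", False)

# Plain TCP keepalives with an explicit retry policy.
props2 = TransportProperties()
props2.set("tcpKeepAliveEnabled", True)
props2.set("tcpKeepAliveInterval", 75)  # seconds between probes
props2.set("tcpKeepAliveCount", 9)      # failed probes before giving up
```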
Including these properties solves several problems, in particular the ambiguity of `Require` and `Prohibit` for `keepAlive`: `Prohibit` with `tcpKeepAliveEnabled` set to `false` would happily choose HTTP/1.1 over TCP, and `Require` with `tcpKeepAliveEnabled` set to `false` would happily choose HTTP/2 over TCP.

Outstanding questions:
- Should the behavior for multi-streaming protocols be specified? I.e. if a protocol sits on top of multiple TCP connections, ought we specify that these keepalives (and UTO) apply across all of the underlying streams? (It seems obviously so to me, but I'm not sure it is generally obvious.)
- Should we extend the text of `keepAlive` to specify its behavior in the case of `Require` and `Prohibit`? There are a number of interacting options here, and how it behaves with them is complex.
- Do we need to specify that it's an error to enable TCP keepalive or set its associated properties on a `Connection` that was instantiated from a `Preconnection` where `keepAlive` is `Prohibit`?
- Do we need to specify that it's an error to disable TCP keepalives on a `Connection` that was instantiated from a `Preconnection` where `keepAlive` is `Require`?

Oh, wow... it seems I brushed mandatory-to-support-as-app-controlled keep-alives off the table in RFC 8303, with: "The requirements for Internet hosts [RFC1122] also introduce keep-alives to TCP, but these are optional to implement and hence not considered here." I think this was a mistake on my part in RFC 8303. Sorry :-(
Many thanks @dhobsd for filing this very detailed and thoughtful issue! I believe that these are important things to resolve.
I see two things here:

1. Clarifying what the existing generic properties (`keepAlive`, `keepAliveTimeout`, `connTimeout`) mean and how they apply to protocol selection.
2. Adding TCP-specific properties, as you propose.

I believe that we can and should solve some of 1) irrespective of 2), and we could then move on to discuss whether 2) would be good to do in addition. So, I propose to focus on 1) first. I remove much here as I quote, as an effort to simplify the discussion a little and take this step by step.
In my answer, below, "I" am a networking software author, whereas "you" are a TAPS system implementer.
> Regarding what more to say, I think this depends on answers to some questions that I think are kind of ambiguous. I have recommendations on direction at the end of this comment.
> The text for `keepAlive` says:
>
> > Note that when supported, the system will use the default period for generation of the keep-alive packets.
>
> This seems somewhat at odds with TCP:
>
> > If keep-alives are included, the application MUST be able to turn them on or off for each TCP connection (MUST-24), and they MUST default to off (MUST-25).
I don't see why this is at odds? Turning them on via the `keepAlive` property means to do "on" for MUST-24.
> If I'm a networking software author, per the above, how do I express a preference for protocols that have their own keepalive timeouts while also expressing a preference for enabling or disabling TCP keepalives?
The way the text is written, "whether the application would like the Connection to send keep-alive packets or not", this does not mean: "I want to have a Connection that will, in the future, allow me to control keep-alives", but it means: "I want a Connection that sends keep-alives" (i.e., I want it to be supported AND I want it to happen). Most selection properties are that way (`perMsgReliability` is a counter-example).
So, my understanding from this text is: if I want to have keep-alives, I just use this, and I don't need to tweak the interval. This, however, is at odds with the default value of `keepAliveTimeout` being `Disabled`.
So, PR #1444 fixes this by making the default "System default" and also defining this as a value that can be chosen.
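Under that fix, usage might look like this (same hypothetical binding as in the earlier sketches; value spellings are illustrative):

```python
from taps import TransportProperties  # hypothetical binding

props = TransportProperties()
props.require("keepAlive")
# keepAliveTimeout left untouched: probes are generated at the system's
# default period. An explicit value overrides that default:
props.set("keepAliveTimeout", 60)  # send a keep-alive after 60s of idleness
```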
For everything below, let's keep in mind: the "application" from TCP's point of view is whoever uses TCP. So, in my understanding (I'd be happy to stand corrected!!), the MUSTs from the TCP spec apply when the TAPS system maps the application directly to TCP (at best, the "application" from the PoV of the TCP spec could also be the TAPS system itself), but when the TAPS system offers an HTTP/TCP Connection, these MUSTs don't apply to the application that uses the TAPS API.
> If I'm a TAPS system implementer, and I get a `Preconnection` where `keepAlive` is set to `Prohibit`, but `keepAliveTimeout` is not specified, am I allowed to choose HTTP/2 (and not participate in `PING`)? Or anything on top of TCP?
Why not? The point is that the Connection (whatever it consists of) wouldn't send keep-alive packets. So, you'd also have to be sure that the HTTP/2 implementation doesn't enable TCP's keep-alives.
> What if I'm implementing a transport on top of multiple TCP connections?
If none of them enable keep-alives, then all is well.
> If I'm a TAPS system implementer, and I get a `Preconnection` where `keepAlive` is set to `Require`, am I allowed to choose HTTP/1.1 as long as the underlying TCP connection can have keepalive enabled?
Why not? (it would have to be more than "can have", though: you'd have to make sure that it is enabled)
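If that reading holds, the implementer-side selection logic is roughly the following. This is a sketch; the candidate list and data shapes are invented for illustration.

```python
# Under Require, a candidate stack qualifies as long as keep-alives at *some*
# layer are actually switched on (not merely supported); under Prohibit, any
# stack qualifies as long as no layer enables them.
CANDIDATES = [
    {"name": "h2/TCP",      "keepalive_layers": ["h2-PING", "tcp-keepalive"]},
    {"name": "http1.1/TCP", "keepalive_layers": ["tcp-keepalive"]},
]

def select_stack(keep_alive_pref):
    for stack in CANDIDATES:
        if keep_alive_pref == "Require" and stack["keepalive_layers"]:
            # Enable one mechanism; merely being able to is not enough.
            return stack["name"], {stack["keepalive_layers"][0]: True}
        if keep_alive_pref == "Prohibit":
            # Every layer's keep-alive must stay off.
            return stack["name"], {m: False for m in stack["keepalive_layers"]}
    return None, None

print(select_stack("Require"))   # ('h2/TCP', {'h2-PING': True})
print(select_stack("Prohibit"))  # ('h2/TCP', {'h2-PING': False, 'tcp-keepalive': False})
```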
Before getting into the TCP-specific options, let's see if this already resolves your concern... perhaps it does? After all, you say:
> At a minimum, I think specifying the way these properties are intended to apply to protocol selection is necessary. The wording I'd suggest depends on the answers to the questions around how `keepAlive` interacts with `Require` and `Prohibit`.
Regarding the open questions - in my interpretation:
> Outstanding questions:
>
> - Should the behavior for multi-streaming protocols be specified? I.e. if a protocol sits on top of multiple TCP connections, ought we specify that these keepalives (and UTO) apply across all of the underlying streams? (It seems obviously so to me, but I'm not sure it is generally obvious.)
The specified behavior applies to a capital-C Connection, whatever this Connection consists of. I.e., if it's only a stream that's part of a multi-streaming protocol, that stream should see keep-alives (well, it would, as they apply to all streams). We could get super-picky and ask: would 5 Connections which are streams of one, e.g., QUIC connection not require 5 instead of 1 keep-alive packets? But I think this is not about the number of packets... it's about 1) using any keep-alives at all or not, and 2) being able to dictate a frequency. So, nothing extra to specify IMO.
> - Should we extend the text of `keepAlive` to specify its behavior in the case of `Require` and `Prohibit`? There are a number of interacting options here, and how it behaves with them is complex.
Not so complex and nothing to do if you agree with what I wrote above about `Require` and `Prohibit` :-)
> - Do we need to specify that it's an error to enable TCP keepalive or set its associated properties on a `Connection` that was instantiated from a `Preconnection` where `keepAlive` is `Prohibit`?
No, because that's completely clear - why wouldn't it be? `Prohibit` means I don't want keep-alives. If you turn them on, well, you break that contract.
> - Do we need to specify that it's an error to disable TCP keepalives on a `Connection` that was instantiated from a `Preconnection` where `keepAlive` is `Require`?
Same as above: why isn't this obvious?
> I don't see why this is at odds? Turning them on via the `keepAlive` property means to do "on" for MUST-24.
I think this makes sense when the perspective is that TAPS is "the application", as you mentioned. From the perspective of the RFC, I imagine that whoever is on the other end of the API providing the TCP connection is "the application". When a Connection yields a transport that is purely TCP/IP, I agree that the `keepAlive` property would apply to TCP.
When the transport protocol stack is anything on top of TCP, I'm still unsure where to expect the `keepAlive` property to apply. I'm dubious that it should apply to TCP with the same timers across protocols that support such a behavior. More below.
> > If I'm a TAPS system implementer, and I get a `Preconnection` where `keepAlive` is set to `Require`, am I allowed to choose HTTP/1.1 as long as the underlying TCP connection can have keepalive enabled?
>
> Why not? (it would have to be more than "can have", though: you'd have to make sure that it is enabled)
Because TCP keepalives generally aren't delivered past the network stack and thus do not impact server timers for connection keepalive in higher-order protocols. I agree that the property means "I want a Connection that sends keep-alives", but I would also expect it to apply to the highest level transport. Any protocol providing its own host-initiated keep-alive on top of TCP obviates the need for TCP keep-alives. (This is distinct from any protocol providing keep-alives; if these are remote-initiated and a connection tracking middlebox loses state, this connection can then only be recovered with TCP keepalives.) Because of these distinctions, there may be a strong desire to control these separately.
> Regarding the open questions - in my interpretation: [...]
This all seems fine to me, except:
> Not so complex and nothing to do if you agree with what I wrote above about `Require` and `Prohibit` :-)
I don't think I fully agree because of the HTTP/1.1 case I mentioned above.
Sigh.... it seems that the crux of the matter is this:
> Because TCP keepalives generally aren't delivered past the network stack and thus do not impact server timers for connection keepalive in higher-order protocols.
That's a vertical communication problem, and bad design IMO. We should relay information about keep-alives vertically across all protocols so they don't need to be (or: mistakenly are) repeated at multiple layers.
Since we're not in a position to change all the protocol APIs.... I don't know what to do now.
My personal opinion is that there are three goals of a keep-alive:

1. To keep the transport open when it would otherwise be idle
2. To repair lost connection tracking state
3. To terminate idle connections that also seem to be plagued by network partitions
Since keep-alives can't generally be relied upon to keep higher-level protocols alive, it seems that this option ought to apply at the highest possible level.
Issues that prevent keep-alive from having useful properties at higher layers (e.g. H2 data channels all deadlock, but ping channels work fine) still aren't really impacted by enabling it at lower layers, and still require applications to have robust timeout policies for data channels.
As an application developer, I would be surprised if I set `keepAlive` to `Require` and got an H2 transport that did not do `PING`, but TCP keepalives were on.
> We should relay information about keep-alives vertically across all protocols so they don't need to be (or: mistakenly are) repeated at multiple layers.
Can you expand on what you mean here? I'm not clear how this could solve both repeating and avoiding multi-layer keepalives. I don't think the TAPS system can know the developer's intent. I think the intent is generally going to be "at application layer" and "maybe in TCP", and probably just PING channels for QUIC.
If the application is just using TCP as a transport, then there ought not to be much of a difference. Otherwise, I think they should both be separately controllable, unless we want to start talking about how TAPS deals with policy, which seems to have been deliberately avoided.
I wonder if this looks like: (1) ought to trigger based on inactivity of the sender - so if you do have an upper layer that also does this, it can prompt the lower layer. (2) and (3) in your list above look like they could be more upper-protocol-specific.
You say: "As an application developer, I would be surprised if I set keepAlive Require and got an H2 transport that did not do PING, but TCP keepalives were on." I think a TAPS system could construct/configure a stack that does what you wish, but if there are specific requirements, then that needs protocol-specific tuning.
There could be cases where you really don't want these features (i.e. where the protocol can't make the correct choice?). To tune that protocol for the specific application, developers can always override the defaults for protocols, but the more protocol-specific params they set, the less automated the overall stack will be, and they will need to continue to maintain these protocol-specific parameters.
Hi,
Just a clarification:
> > We should relay information about keep-alives vertically across all protocols so they don't need to be (or: mistakenly are) repeated at multiple layers.
>
> Can you expand on what you mean here? I'm not clear how this could solve both repeating and avoiding multi-layer keepalives. I don't think the TAPS system can know the developer's intent. I think the intent is generally going to be "at application layer" and "maybe in TCP", and probably just PING channels for QUIC.
This was just a side comment of mine that wasn't constructive - sorry! I was just thinking about protocol design more generally. Let's not go there; it doesn't help solve this issue. (If you really do want to discuss this, send me an email - but let's keep this GitHub issue clean ;-) )
> (1) ought to trigger based on inactivity of the sender - so if you do have an upper layer that also does this, it can prompt the lower layer.
As long as the layering represents some form of encapsulation or transport, upper-layer keep-alives will implicitly deal with idle timeouts at the lower layers.
It is sometimes the case that lower-layer idle timeouts are stricter than higher-layer timeouts. For example, SSHing through some architecture with a NATing load balancer that sets the TCP idle timeout to 10 minutes -- OpenSSH's default idle timeout is 15 minutes. When these timers are controlled by the remote endpoint, a TAPS system can't make relevant decisions and it is up to the programmer or administrator to choose appropriate settings.
Note that in this case it would still be appropriate to tune the highest-layer keep-alive interval.
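The arithmetic behind that example, as a small sketch (function names invented; numbers as in the SSH-through-NAT scenario above):

```python
# An upper-layer keep-alive only keeps the lower layers alive if its interval
# undercuts the strictest idle timeout anywhere beneath it.
def keepalive_survives(upper_interval_s: int, lower_idle_timeouts_s: list[int]) -> bool:
    return upper_interval_s < min(lower_idle_timeouts_s)

# OpenSSH probing every 15 minutes behind a NAT that drops idle flows at 10:
print(keepalive_survives(15 * 60, [10 * 60]))  # False: the connection dies
# Tuning the highest-layer interval below the NAT timeout fixes it:
print(keepalive_survives(5 * 60, [10 * 60]))   # True
```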
> I think a TAPS system could construct/configure a stack that does what you wish
My concern is that the language right now allows a system to do the thing that I can't imagine wanting.
> There could be cases where you really don't want these features
It seems like this case is easily handled with `keepAlive` set to `Prohibit`. I'm not too worried about this.
> developers can always override the defaults for protocols
I imagine, based on some of the interim meeting discussions of the past, that you're expecting this would be achieved by the developer falling back to system-specific APIs. Is that correct? Non-theoretical capability-based systems exist where a transport channel might not confer the right to set lower-layer properties on that channel. In such cases, it isn't necessarily true that developers can override the defaults.
In discussion, the authors lean towards recommending that if you want to configure separate protocols independently within the stack, you should configure protocol-specific properties. The top level options can be picked up by any of the protocols.
In discussion:

- We aren't enumerating all possible protocol-specific properties; implementations can add these.
- The top-level protocols receive the general connection-wide properties, can interpret them, and then choose what set of options/properties are sent to the lower protocols. Thus H2 gets to interpret the general keepalives, and decide if it passes that through to TCP (assuming no protocol-specific options are set directly by the application).
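A sketch of that rule (the layer names and data shapes are invented for illustration):

```python
# The top-level protocol (here, a hypothetical H2 layer) interprets the
# generic keepAlive property itself via PING, and only passes a keep-alive
# request down to TCP when a TCP-specific property asks for it explicitly.
def configure_stack(generic_props: dict, protocol_specific_props: dict):
    h2_config = {"send_ping": generic_props.get("keepAlive") == "Require"}
    tcp_config = {
        "keepalive": protocol_specific_props.get("tcpKeepAliveEnabled", False)
    }
    return h2_config, tcp_config

print(configure_stack({"keepAlive": "Require"}, {}))
# -> ({'send_ping': True}, {'keepalive': False})
print(configure_stack({"keepAlive": "Require"}, {"tcpKeepAliveEnabled": True}))
# -> ({'send_ping': True}, {'keepalive': True})
```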
Thanks, I'm happy with this wording, and I appreciate the example.
- When a protocol stack contains multiple protocols that support keepalive messages (e.g. h2 on TCP), where are keepalives enabled?
- When one would like different keepalive policies for the different bits of the stack, how is this specified?
- How does one specify TCP keepalive retry policies? (Typically TCP keepalives retry some number of times before giving up.)