Closed rmounce closed 6 years ago
Ok, cool. Another concern is that I don't think it's enough to check the right side. If the first packet ACKs A and the second packet ACKs C but not A, then the check will succeed and the first packet will be dropped, losing information. So I think we need to check the whole range...
I'm wondering if it isn't easier to just skip over sacks entirely?
Not sure I follow. The second packet’s SACK option would have two blocks covering A and C, unless A has been received in the meantime.
Ryan Mounce notifications@github.com writes:
Not sure I follow. The second packet’s SACK option would have two blocks covering A and C, unless A has been received in the meantime.
Yeah, I was thinking on the second case. Or in general, that we cannot assume that SACKs are cumulative in that sense, since they can ACK arbitrary blocks of the sequence number space, right?
-Toke
I've completely reworked this patch to be as strict as imaginable :)
You’re right. If the new SACK differs only by adding a new block and the rest of the blocks remain the same, the strict check won’t pass and the old redundant SACK will also be sent. This won’t cause any issues though, and it is a relatively infrequent event with little to gain from filtering.
The stricter check was a last minute addition to ensure that duplicate SACKs won’t be filtered.
If you instead mean cases where there are multiple separate SACK options, then my understanding is that this is not RFC compliant and that all blocks must be within the same SACK option.
Ryan Mounce notifications@github.com writes:
You’re right. If the new SACK differs only by adding a new block and the rest of the blocks remain the same, the strict check won’t pass and the old redundant SACK will also be sent. This won’t cause any issues though, and it is a relatively infrequent event with little to gain from filtering.
I was talking about this case. Isn't the addition of a new block the common case? So if we don't handle that, why parse SACKs at all?
The common case is the loss of a single packet or burst loss of consecutive packets, the SACKs resulting from each case are safely filtered as much as possible.
If there is a second loss event before the first has been recovered, an additional block is added to the SACK. This isn’t quite as common.
A - ACK 500, SACK 1000-1500 (packet with 1500-2000 goes missing)
B - ACK 500, SACK 1000-1500, 2000-2500
C - ACK 500, SACK 1000-1500, 2000-3000
Each successive SACK here makes the previous one redundant. Only A won’t be filtered even though it could be safely. B will still be filtered in response to C and so on for the next however many tens or hundreds of packets, thus SACK filtering remains 99% as effective as is safely possible in this corner case.
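The redundancy test being discussed could be sketched roughly as follows. This is an illustrative standalone snippet, not the actual sch_cake code: names, types, and the exact coverage rule are assumptions. An older ACK is safe to drop when the newer one's cumulative ACK has not regressed and every old SACK block is contained within some new SACK block (or is below the new cumulative ACK). Signed 32-bit differences are used so the comparisons survive sequence-number wraparound.

```c
#include <stdbool.h>
#include <stdint.h>

/* One SACK block: [start, end) in sequence-number space. */
struct sack_block {
	uint32_t start;
	uint32_t end;
};

/* Hypothetical check: is the old ACK (old_ack + old_b[0..old_n-1])
 * made redundant by the new ACK (new_ack + new_b[0..new_n-1])?
 * Subtractions are cast to int32_t so wrapped sequence numbers
 * still compare correctly. */
static bool sack_redundant(const struct sack_block *old_b, int old_n,
			   const struct sack_block *new_b, int new_n,
			   uint32_t old_ack, uint32_t new_ack)
{
	if ((int32_t)(new_ack - old_ack) < 0)
		return false; /* newer packet regressed the cumulative ACK */

	for (int i = 0; i < old_n; i++) {
		bool covered = false;

		/* An old block may be swallowed by the new cumulative ACK... */
		if ((int32_t)(new_ack - old_b[i].end) >= 0)
			covered = true;

		/* ...or contained within one of the new SACK blocks. */
		for (int j = 0; !covered && j < new_n; j++) {
			if ((int32_t)(old_b[i].start - new_b[j].start) >= 0 &&
			    (int32_t)(new_b[j].end - old_b[i].end) >= 0)
				covered = true;
		}
		if (!covered)
			return false; /* old ACK carries unique information */
	}
	return true;
}
```

With the A/B/C sequence above, B's blocks (1000-1500, 2000-2500) are fully covered by C's (1000-1500, 2000-3000), so B can be dropped once C arrives; A cannot be judged against B by a right-edge-only comparison, which is the corner case being discussed.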
I’m on mobile so not able to convey myself as effectively as I’d like. The case I have in mind here is high-BDP flows with low but non-zero packet loss, where without proper handling of SACKs the ACK filter will regularly ‘burp’ and exacerbate throughput issues.
Right, fair enough.
Last potential issue: can the cumulative ACK counter increase while less SACK information is available?
Right now we only check SACK options when the ACK counters are the same; I'm trying to convince myself that this is safe, but I'm not sure that I'm succeeding...
I think it’s technically permissible, but I’ve never seen this behaviour in any SACK implementation.
If it makes you feel better:
- SACKs provide additional information, and TCP is designed to be robust even in cases where the SACK option is stripped.
- We are optimising so that we can drop outdated SACKs from the head here. If we weren’t, new SACKs would be getting indiscriminately dropped from the tail instead!
- Even if we don’t drop it, something else further along in the path could drop it instead.
:)
Ryan Mounce notifications@github.com writes:
I think it’s technically permissible, but I’ve never seen this behaviour in any SACK implementation.
Well, better safe than sorry. I reworked the check slightly to catch this case, and also handle the case where a new SACK range is added...
I’ve only skimmed before going to sleep but it looks good. Thanks for your diligent ownership of cake throughout the upstreaming process!
Ryan Mounce notifications@github.com writes:
I’ve only skimmed before going to sleep but it looks good.
Awesome, thanks! We'll see what upstream thinks ;)
Thanks for your diligent ownership of cake throughout the upstreaming process!
You're welcome! Thanks for stepping up to fix the ACK filter :)
This turned out to be simpler than expected, and hopefully addresses Eric's concerns. Rationale in commit message.
Compile and run tested on amd64, however I haven't had time to reproduce the pathological case.