tlswg / draft-ietf-tls-esni

TLS Encrypted Client Hello
https://tlswg.github.io/draft-ietf-tls-esni/#go.draft-ietf-tls-esni.html

Abort on duplicate OuterExtensions #514

Closed dennisjackson closed 3 years ago

dennisjackson commented 3 years ago

If OuterExtensions contains duplicate references to the same extension in the ClientHelloOuter, a draft-compliant implementation may create a ClientHelloInner many times larger than the received ClientHelloOuter. In split mode, this allows for network DoS via packet amplification. In shared mode, this could lead to memory or CPU exhaustion.

This PR requires implementations to reject OuterExtensions which reference the same extension more than once. This can be achieved in practice by using the algorithm suggested in Appendix B.
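For illustration, a minimal sketch of an appendix-style decompressor that resolves OuterExtensions with a forward-only cursor (the Go types and names are assumptions for this sketch, not the draft's pseudocode). Because the cursor never moves backwards, no extension in ClientHelloOuter can be copied more than once, so the reconstructed extensions can never outgrow what was received:

```go
package ech

import "errors"

// Extension is a placeholder for a parsed TLS extension (type + opaque data).
type Extension struct {
	Type uint16
	Data []byte
}

// decompress reconstructs the referenced extensions of ClientHelloInner.
// refs holds the extension types listed in OuterExtensions; outer holds the
// extensions of ClientHelloOuter in wire order.
func decompress(refs []uint16, outer []Extension) ([]Extension, error) {
	inner := make([]Extension, 0, len(refs))
	i := 0 // cursor into outer; only ever advances
	for _, ref := range refs {
		for i < len(outer) && outer[i].Type != ref {
			i++
		}
		if i == len(outer) {
			// The reference is absent, out of order, or would reuse an
			// extension that was already consumed: abort.
			return nil, errors.New("ech: invalid OuterExtensions reference")
		}
		inner = append(inner, outer[i])
		i++ // consume; the same occurrence cannot be referenced again
	}
	return inner, nil
}
```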

davidben commented 3 years ago

This used to be in there and then was removed in https://github.com/tlswg/draft-ietf-tls-esni/pull/467 (@cjpatton). It's a little weird because the algorithm in the appendix does not actually check for duplicate extensions, merely that you never try to copy the same extension twice. It only implicitly checks for duplicate extensions if you had previously checked the ClientHelloOuter for duplicate extensions.

How we want to reflect this in the spec, I'm not sure. Our implementation always checks for duplications, so I don't have much of a horse in this race. But I think many don't (it requires a sort or so), and I do feel that you should be able to apply the appendix and not think about it so hard.

The current text, without this PR, says it's not a strict requirement on the parsing side, but that you should avoid DoS. I think those do imply you need to do something here, but it is unfortunate that it doesn't call out this case. I suppose you could argue that a duplicate is violating the order rule.

chris-wood commented 3 years ago

I think those do imply you need to do something here, but it is unfortunate that it doesn't call out this case.

Right, so, the simplest thing seems to just be to address the case directly, no?

davidben commented 3 years ago

Well, I think the question is whether something along the lines of the appendix's algorithm (a single forward scan that never copies the same extension twice) is compliant.

This has no DoS issues, but will accept a duplicate if, say:
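For concreteness, a hypothetical input of the shape being described, reusing the Extension struct from the sketch above (the type values and data are invented for illustration):

```go
// Hypothetical values, for illustration only.
const (
	typeA uint16 = 0xAAAA
	typeB uint16 = 0xBBBB
)

// ClientHelloOuter carries two distinct extensions of type A, then one of type B.
var outer = []Extension{
	{Type: typeA, Data: []byte("first A")},
	{Type: typeA, Data: []byte("second A")},
	{Type: typeB, Data: []byte("B")},
}

// OuterExtensions in the encoded ClientHelloInner names type A twice.
var refs = []uint16{typeA, typeA}
```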

This'll copy the first A and then the second A. There are no DoS problems (the same extension is never copied twice), but it does not always reject duplicates. Is that compliant?

chris-wood commented 3 years ago

Putting compliance aside, let me flip this around -- what is the benefit of allowing duplicates? Rejecting them seems incredibly trivial.

sftcd commented 3 years ago

On 09/08/2021 20:35, Christopher Wood wrote:

Putting compliance aside, let me flip this around -- what is the benefit of allowing duplicates?

Slightly cooler than more obvious covert channels? :-)

Rejecting them seems incredibly trivial.

Yep. I think it's fine either left alone or specified. The only reason I can think of to specify it is that a client could test whether or not a server does reject. If adding new text, just make it a MUST requirement; no need to say HOWTO.

Cheers, S.

cjpatton commented 3 years ago

I'd be fine with a change here, but let's make sure the algorithm in the appendix actually checks for duplicates.

davidben commented 3 years ago

what is the benefit of allowing duplicates? Rejecting them seems incredibly trivial.

Less code, for simpler implementations that don't want to pay for things they don't need? I do not believe RFC8446 requires that you check for it. Rejecting duplicates of extensions you know about is easy (it's a bitmask). Rejecting duplicates of extensions you don't know about requires a sort, or building a hash table or so. We do this, but I've not seen any other implementation that does it.
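A rough sketch of the cost difference (the known-type list and helper names are hypothetical, purely illustrative):

```go
package ech

// For extension types the implementation already parses, duplicate detection
// is effectively free: each known type gets a bit, and seeing a bit set twice
// is a duplicate.
const (
	bitServerName uint32 = 1 << iota
	bitSupportedGroups
	bitKeyShare
)

// For codepoints the implementation does not recognize there is no fixed bit
// to flip, so rejecting duplicates among them needs a sort, or a set over the
// 16-bit types that actually appear, as below.
func hasDuplicate(unknownTypes []uint16) bool {
	seen := make(map[uint16]struct{}, len(unknownTypes))
	for _, t := range unknownTypes {
		if _, dup := seen[t]; dup {
			return true
		}
		seen[t] = struct{}{}
	}
	return false
}
```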

I'd be fine with a change here, but let's make sure the algorithm in the appendix actually checks for duplicates.

Right, that's the thing I am not okay with. The algorithm in the appendix is duplicate-check-preserving, and it does no one any good to check for duplicates in the algorithm itself: if your ClientHello parser already rejects duplicate extensions, the check is redundant, and if it doesn't, duplicates can still appear elsewhere in the message.

This means the only useful place to apply this check is the ClientHello parser, RFC8446. But we haven't tried to get consensus on mandating ClientHello parsers check for ordered extensions. That would make a bunch of TLS implementations non-compliant. Thus I think the most pragmatic solution is to omit this requirement in ECH.

sftcd commented 3 years ago

I think David is correct. OpenSSL IIRC checks for dupes in the existing non-ECH ClientHello handling code. Adding a requirement to barf on dups in outers is ok for me (because it'll just happen); saying HOWTO is likely something I'd ignore, as you're unlikely to stumble on the exact way that OpenSSL does it.

chris-wood commented 3 years ago

This means the only useful place to apply this check is the ClientHello parser, RFC8446. But we haven't tried to get consensus on mandating ClientHello parsers check for ordered extensions. That would make a bunch of TLS implementations non-compliant. Thus I think the most pragmatic solution is to omit this requirement in ECH.

I agree with the decision tree, but I don't find this a compelling reason to not prohibit them. We're already changing stacks to accommodate ECH.

(As a side question: has anyone really looked at what can happen if certain TLS extensions are repeated?)

chris-wood commented 3 years ago

@dennisjackson given the analysis above, how about we close this and open an issue against rfc8446bis to track extension duplication detection/rejection?

davidben commented 3 years ago

(As a side question: has anyone really looked at what can happen if certain TLS extensions are repeated?)

I guess it depends on what you do. The duplicate will be present in the transcript, so...

Pragmatically, I think rejecting duplicate extensions you already know how to parse is definitely a good idea. It costs next to nothing because you just maintain a bitmask, and reduces some risk. Doing it for extensions you don't recognize is a bit more fuss.

Given a time machine, I think we should have mandated all extensions be sorted, so duplicate-checking is easy. Too late now. :-(

chris-wood commented 3 years ago

If the server gets confused and mixes the fields together, then it depends on whether that gets the server in a state that it couldn't have gotten into before. If not, no harm. If it does, that novel state might do something weird and untested.

This is the scenario that concerns me most, sort of like checking for certain extensions on CH1 in HRR but not on CH2. In any case, I think it's safe to say this is external to ECH, as much as I would prefer we just do it here.

dennisjackson commented 3 years ago

Most of the discussion has veered away from the point of this PR, unfortunately perhaps led astray by the renaming of the PR. None of the discussion about duplicate extensions (or not) in TLS 1.3 is relevant to the property we need to enforce.

The intended goal is to ensure that, after decompression, the resulting ClientHelloInner is no larger than the received ClientHelloOuter. I believe this is an important security property which must be enforced by ECH. Otherwise, an attacker can send small CHOs to the client-facing server, which then must deliver CHIs potentially >1000x larger to the backend server.
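To put rough numbers on that (purely illustrative): each entry in OuterExtensions is a two-byte type, but on decompression it expands to a full copy of the referenced extension. If ClientHelloOuter carries a single extension of a few kilobytes and the encoded ClientHelloInner references its type a few hundred times, a decompressor that tolerates repeated references to the same extension reconstructs a ClientHelloInner hundreds of times larger than the ClientHelloOuter that produced it.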

Regarding the phrasing of the requirement:

[Bail if...] Any extension is referenced in OuterExtensions more than once.

This does not require implementations to reject duplicate extensions in CHs, and, as @davidben noted, whether they do or not doesn't matter: this requirement on its own ensures the correct security property. If you remove this requirement, then regardless of whether duplicate extensions are tolerated, there's an amplification attack.

I did consider two alternative formulations

The first option puts the burden entirely on the implementor and seems prone to mistakes. The second seems hard to make precise. Hence this formulation.

davidben commented 3 years ago

Ohhhh! I think I see. Yeah, that is actually slightly different from what we removed in #467. Sorry, I did not notice this subtlety. Let me try to restate this to make sure we're on the same page:

#467 removed the sentence "OuterExtensions contains duplicate values", which means that if the same extension type appears twice in OuterExtensions, the server is obligated to reject.

This PR, however, says "Any extension is referenced in OuterExtensions more than once", which one could interpret to not mean the extension type, but the extension itself. And so we're saying that you cannot copy a given Extension (as in the extension_type/extension_data tuple) twice.

Is that the idea? If so, I like that. It handily punts the RFC8446 problem. :-) I'm not sure how obviously this subtlety comes off in the text, but I don't have a better idea either.

chris-wood commented 3 years ago

Is that the idea? If so, I like that. It handily punts the RFC8446 problem. :-)

👍 🤦

dennisjackson commented 3 years ago

I was looking through previous PRs and it looks like this has been discussed before:

I think it makes sense to add @davidben's paragraph on the attack to the security considerations section to explain the requirements around decompression better.