bemasc opened 5 months ago
That seems... complicated ;-)
I dunno if it's safe to assume that information about what to put in HTTPS RRs always flows from outside to inside like that. Thinking of e.g. haproxy as the ECH terminator: it could support something (e.g. h3+ECH, perhaps when someone does that) but would only enable that for some backends, so I'm not sure that https://ech.terminator.example/.well-known/origin-svcb would work.
Part of me wants to just do the minimum that'd work for some simpler cases that might be used by small hosters, so that they can get into the ECH game, but to do that in a way that could be extended later to something more generic like the above, e.g. in a -bis RFC. (That also touches on #14, of course.)
> thinking of e.g. haproxy as the ECH terminator it could support something (e.g. h3+ECH perhaps when someone does that) but would only enable that for some backends, so not sure that https://ech.terminator.example/.well-known/origin-svcb would work.
I'm not sure what you mean. The ECH terminator would expose all its capabilities on its own origin, and the backends would subset those capabilities when publishing the origin-svcb for themselves. If the ECH terminator changes which capabilities are allowed for different origins, it would need separate origins to represent those separate capabilities.
> Part of me wants to try just do the minimum
I think we're in agreement here. The point of this architecture is that the ZF only speaks to one source of truth, and doesn't merge configurations from disparate parties. Normatively, this whole architecture can probably be reduced to one sentence: "If the origin makes use of intermediaries, it is the origin's responsibility to ensure that the origin-svcb JSON document correctly accounts for their current configuration."
> I'm not sure what you mean. The ECH terminator would expose all its capabilities on its own origin
A backend that does support h3 wouldn't know (for sure) whether the ECH terminator will or won't proxy h3 for it. With haproxy that'd (IIUC) be down to the specifics of the haproxy config, and haproxy has a very rich config language.
Or say the ECH terminator has 2 IPv4 addrs and one supports h3 while the other doesn't (for some UDP blocking reason)?
I'm not sure the "I get to do an automatically detectable proper subset of what the upstream guy can do" thing applies in general. For split-mode ECH though, such a setup would work for ECHConfigs.
If the HAPROXY origin-svcb says it supports H3, then it supports client->proxy H3. (proxy->backend is a separate issue.) If it has different configurations for different backends, then it would need to separate those configurations into distinct origins.
Similarly, two IP addresses configured differently would need to be represented by separate HTTPS records, and hence separate entries in the "endpoints" array.
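To make that concrete, two differently-configured addresses might be sketched as two entries in the "endpoints" array like this (the field names are illustrative, modelled on the SVCB SvcParams; 192.0.2.x are documentation addresses, not anything real):

```json
{
  "regeninterval": 3600,
  "endpoints": [
    { "alpn": ["h2", "h3"], "ipv4hint": ["192.0.2.1"] },
    { "alpn": ["h2"],       "ipv4hint": ["192.0.2.2"] }
  ]
}
```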
I think the subset logic works, but we don't have to specify it here. It's sufficient to be clear that as far as the ZF is concerned, only the origin's observable origin-svcb document matters, and how that's generated is not currently specified.
> If the HAPROXY origin-svcb says it supports H3, then it supports client->proxy H3. (proxy->backend is a separate issue.) If it has different configurations for different backends, then it would need to separate those configurations into distinct origins.
> Similarly, two IP addresses configured differently would need to be represented by separate HTTPS records, and hence separate entries in the "endpoints" array.
I'm not sure about the above, TBH. Yes, the upstream entity could create different origins (i.e. names for which it has WebPKI certs) for those different things, but that seems unlikely to me.
> I think the subset logic works, but we don't have to specify it here. It's sufficient to be clear that as far as the ZF is concerned, only the origin's observable origin-svcb document matters, and how that's generated is not currently specified.
I do, however, agree with the above, luckily :-)
I think we can also safely say that it'd be ok for an ECH terminator to publish an origin-svcb for the public_name that includes the latest ECHConfig and for backends "behind" that to automatically poll-for and use that ECHConfig in their own origin-svcb JSON. Whether such backends can automatically make use of other bits of the ECH terminator's origin-svcb JSON is less clear, and for future study.
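As a sketch, the ECH terminator's document for the public_name might contain nothing but the current ECHConfig (field names are illustrative and the placeholder stands in for a real base64-encoded ECHConfigList):

```json
{
  "regeninterval": 3600,
  "endpoints": [
    { "ech": "<base64-encoded ECHConfigList>" }
  ]
}
```

Backends would then fetch that URL at the regeninterval and copy the "ech" value into their own origin-svcb JSON.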
The draft right now is pretty vague about how intermediaries are supposed to work. I think we need to get a lot more specific.
Right now I know of 5 kinds of relevant intermediaries:
Right now, my best idea is to employ the following rules:
Suppose we have a complicated case: tcp.load.balancer.example -> ech.terminator.example -> http.gateway.example -> origin.example. This would work as follows:
The TCP load balancer would indicate that it does not support HTTP/3. https://tcp.load.balancer.example/.well-known/origin-svcb:
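Something like the following sketch (the exact JSON schema here is illustrative, built from the "regeninterval"/"endpoints" structure discussed in this thread; note the alpn values deliberately omit "h3"):

```json
{
  "regeninterval": 3600,
  "endpoints": [
    { "alpn": ["http/1.1", "h2"] }
  ]
}
```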
The ECH Terminator supports HTTP/3, but it would inspect the above, see that only HTTP/1.1 and HTTP/2 are supported, and remove any mention of HTTP/3.
https://ech.terminator.example/.well-known/origin-svcb:
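A sketch, assuming the terminator copies the load balancer's alpn subset and adds its own ECH parameter (the placeholder stands in for a real base64-encoded ECHConfigList):

```json
{
  "regeninterval": 3600,
  "endpoints": [
    { "alpn": ["http/1.1", "h2"], "ech": "<base64-encoded ECHConfigList>" }
  ]
}
```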
The HTTP gateway would inspect the above and add any relevant parameters that are true across this gateway configuration. It would respect the regeninterval by periodically fetching the above JSON and regenerating its own JSON.
https://http.gateway.example/.well-known/origin-svcb:
Finally, the origin would do the same with the Gateway's JSON, adding any information it knows can safely be added: https://origin.example/.well-known/origin-svcb:
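In the simplest case the gateway and origin have nothing further to add, and the final document just carries the upstream parameters through unchanged (all values illustrative):

```json
{
  "regeninterval": 3600,
  "endpoints": [
    { "alpn": ["http/1.1", "h2"], "ech": "<base64-encoded ECHConfigList>" }
  ]
}
```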
The zone factory would be configured with the name "origin.example" and A/AAAA records for that name that correspond to the TCP load balancer. It would use those IPs to request this last JSON file and convert it into a DNS record:
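The resulting record might look something like this in presentation format (a sketch only, not something the draft prescribes; the ech value is a placeholder and the TTL is illustrative):

```
origin.example. 3600 IN HTTPS 1 . alpn="http/1.1,h2" ech="<base64-encoded ECHConfigList>"
```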
Upsides:
Downsides:
Questions: