MikeBishop closed this issue 4 years ago.
I'm thinking that the answer is "no" here as well. Developing a way to provide supplementary annotations on links would be less reliable, but ultimately more likely to succeed.
I agree with the opinions about new scheme(s). However, it does seem a shame to have this implicit dependency on Alt-Svc delivered by HTTP. By my understanding of Alt-Svc, the origin would have the requirement of offering resources via HTTP(/TLS)/TCP in perpetuity. That seems a bit unfair to me.
I recall some talk about TCP fallback (e.g. at the QUIC BoF at IETF 96). No text currently in the WG docs seems to require this, though. The closest is in the HTTP/QUIC mapping:
Connectivity problems (e.g. firewall blocking UDP) may result in QUIC connection
establishment failure, in which case the client should gracefully fall back to
HTTP/2.
If TCP fallback is not actually required, and a solution can be found to directly open QUIC connections, then there is a route to deprecate HTTP(/TLS)/TCP. Similarly, constrained devices that want to operate without HTTP(/TLS)/TCP could do so, bearing the risk of an N% connection failure rate.
Forgive this stupid question: is there text somewhere in an RFC that requires a client accessing resources with an https scheme to use TCP? I appreciate the scheme is described in terms of TCP and port 443, but I wonder if there is anything preventing a client from trying their luck at opening a QUIC connection whenever it sees an https scheme? (A more sensible whitelist approach could also be taken.) Perhaps this is more suited to an implementation debug feature, which supports Mike's testing ideas.
Forgive this other stupid question: what about other application-layer protocols over QUIC? Do they also have an implicit dependency on Alt-Svc, or is that totally inappropriate?
Other protocols have their own bootstrapping problems. Too much depends on context to be sure. For instance, migrating something like FTP might be tricky and something akin to Alt-Svc might be the most practical approach. On the other hand, migrating RTP probably won't have problems in this area because it uses a signaling protocol for setup (RTP would have a host of other problems, of course).
On Tue, Jan 31, 2017 at 10:06 PM, Mike Bishop notifications@github.com wrote:
Should we mint new scheme(s) that allows direct reference to a resource served exclusively over HTTP/QUIC?
I think the answer here is still no for scheme - we don't want two different URLs for resources that are supposed to be interchangeable (and then the caching rules are impacted, etc.).
But something like an authenticated SRV is an obvious path to go down eventually.
We already have different URLs for things that might/might not be interchangeable. When you use Alt-Svc between an http:// origin and an https:// endpoint, you're declaring that they're either interchangeable or you can properly process the distinction.
I can envision several scenarios where either server or client won't want to carry a full HTTP/TLS/TCP stack simply for bootstrapping, when they already know both peers will support HTTP/QUIC. Maybe authenticated SRV is the path forward, but it seems like the simplest would be something like:
HTTP/QUIC is differentiated from HTTP and HTTPS URIs by using the 'httpq' protocol identifier in place of the 'http' or 'https' protocol identifier. An example URI specifying HTTP/QUIC is
httpq://www.example.com/~smith/home.html
. Origins which serve the same content over HTTP/QUIC and HTTPS SHOULD provide an Alt-Svc header on the HTTPS endpoint declaring that the resource can be obtained over QUIC as well, and SHOULD NOT reference URIs with the 'httpq' scheme in responses to 'http' or 'https' requests. Such origins MAY consider 'https' and 'httpq' to be equivalent while processing requests.
Note that I don't expect this to be used in browser-land anytime soon, if ever. httpq:// would be inaccessible to legacy browsers, and you'd be cutting off a substantial portion of the web from following the link. However, I think for non-browser scenarios and for testing, there should be a way to explicitly describe a QUIC endpoint.
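To make the equivalence option above concrete, here is a minimal Python sketch of a server treating 'https' and 'httpq' as the same origin while processing requests. Note 'httpq' is the hypothetical scheme proposed in this thread, not a registered one, and `origin_key` plus its assumed default ports are illustrative names:

```python
from urllib.parse import urlsplit

# Hypothetical default ports; we assume the proposed 'httpq' scheme
# would default to 443 (over UDP), mirroring 'https'.
DEFAULT_PORTS = {"https": 443, "httpq": 443}

def origin_key(url: str) -> tuple:
    """Reduce a URL to an origin tuple, treating 'httpq' as equivalent
    to 'https' (the MAY in the proposal above)."""
    parts = urlsplit(url)
    scheme = "https" if parts.scheme == "httpq" else parts.scheme
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (scheme, parts.hostname, port)

# A server taking the equivalence option would serve both from one origin:
assert origin_key("httpq://www.example.com/~smith/home.html") == \
       origin_key("https://www.example.com/")
```

An origin declining the equivalence option would simply skip the scheme normalization and keep the two namespaces distinct.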
Oh, and @LPardue: RFC 2818 says in Section 2.3:
When HTTP/TLS is being run over a TCP/IP connection, the default port is 443. This does not preclude HTTP/TLS from being run over another transport. TLS only presumes a reliable connection-oriented data stream.
RFC 7230 updates this by saying:
The "https" URI scheme is hereby defined for the purpose of minting identifiers according to their association with the hierarchical namespace governed by a potential HTTP origin server listening to a given TCP port for TLS-secured connections ([RFC5246]).
All of the requirements listed above for the "http" scheme are also requirements for the "https" scheme, except that TCP port 443 is the default if the port subcomponent is empty or not given....
Also in RFC 7230:
Although HTTP is independent of the transport protocol, the "http" scheme is specific to TCP-based services because the name delegation process depends on TCP for establishing authority. An HTTP service based on some other underlying connection protocol would presumably be identified using a different URI scheme....
@MikeBishop thanks for these, really interesting food for thought
On Wed, Feb 1, 2017 at 7:11 PM, Mike Bishop notifications@github.com wrote:
We already have different URLs for things that might/might not be interchangeable. When you use Alt-Svc between an http:// origin and an https:// endpoint, you're declaring that they're either interchangeable or you can properly process the distinction.
Alt-Svc does not contemplate schemes or origins - it deals with routing and protocol. Alt-Svc cannot change something from http:// to https:// (or vice versa), nor does it imply anything about whether the content of those URLs differs if only the scheme is different.
I would say HSTS comes closer to what you're describing - but not quite. It does nicely illustrate the problem of determining equivalence (sometimes they are, sometimes they aren't) and for me is a pretty good reason to steer clear. The white and black lists that HTTPS Everywhere needs to deal with are another example.
I think quic would be much better off if it could just stick to the https:// train.
what if we added some 'prior knowledge' language here ala h2?
I can envision several scenarios where either server or client won't want to carry a full HTTP/TLS/TCP stack simply for bootstrapping, when they already know both peers will support HTTP/QUIC. Maybe authenticated SRV is the path forward, but it seems like the simplest would be something like:
well, if they don't have a tcp stack, then they don't have to worry about fallback.. so why not just try quic? I guess you don't know what versions to try, but they are unlikely to be encoded in the scheme either..
On 2 Feb 2017 8:02 a.m., "Patrick McManus" wrote:
well, if they don't have a tcp stack, then they don't have to worry about fallback.. so why not just try quic? I guess you don't know what versions to try, but they are unlikely to be encoded in the scheme either..
But is that kind of behaviour prohibited by the sections of RFC 7230 that Mike quoted?
I'm thinking about this in terms of clients that don't have tcp support. If we're really talking about origins that don't have tcp support instead, then I think a new scheme makes more sense.
On Thu, Feb 2, 2017 at 10:47 AM, Lucas Pardue notifications@github.com wrote:
But is that kind of behaviour prohibited by the sections of RFC 7230 that Mike quoted?
I think 7230 is defining what the http and https schemes mean in terms of namespaces and default reachability (which goes back to: an origin does indeed need to be able to publish a TCP version in order to use an https scheme, but it doesn't require all accesses to happen that way).
This is sort of self-evident even ignoring QUIC; we've already got Alt-Svc changing routes, proxies obscuring DNS and addressing, caches which don't need e2e transport at all, etc. All of these things get data identified by the same URL via mechanisms that are bootstrapped (sometimes) outside of the default interpretation.
I don't think a client that doesn't speak tcp is doing anything wrong by just trying quic on an https url.. A more conservative reading of 7230 might indicate that QUIC for https:// even via alt-svc was non compliant because it wasn't TCP and I don't think any of us believe that we need to update 7230 to allow it.
On Wed, Feb 1, 2017 at 10:19 AM, Mike Bishop notifications@github.com wrote:
Oh, and also in RFC 7230:
Although HTTP is independent of the transport protocol, the "http" scheme is specific to TCP-based services because the name delegation process depends on TCP for establishing authority. An HTTP service based on some other underlying connection protocol would presumably be identified using a different URI scheme....
I read that as requiring a different scheme when the name delegation process is different. I don't see a different name delegation process as probable for names served over QUIC (or at least I don't see it as required).
If you dipped your toes into the "special use dns names" discussion, you'll probably also remember that one of the reasons TOR wanted .onion was so that the signal that something should be resolved via TOR could be used within the authority section of an HTTPS URL. That really did have a different name delegation process for names below the .onion TLD, but that consideration was ignored in favor of being able to pass "normal" (really, normal-looking) URLs around.
That experience hints to me that our minting a new scheme for this will just be ignored in the common case, and I don't see a good reason to generate the potential confusion as a result.
Just my take on it.
In the non-conservative case, this seems to me somewhat of an implementation choice. For a client that wants to retrieve https://www.example.org/example.txt:

A client with no prior information about the host:
- Client supports hqm and HTTPS in preference order: race the hqm and HTTPS connections to the same host.
- Client supports hqm only: tries to connect to the host using hqm.

A client that has received Alt-Svc indicating hqm for an alternative that is still fresh:
- Client supports hqm and HTTPS in preference order: tries to connect to the alternative using hqm.
- Client supports hqm only: tries to connect to the alternative using hqm.

A client with prior knowledge that the host supports hqm (i.e. managed networks, whitelists, etc.):
- Client supports hqm and HTTPS in preference order: tries to connect to the host using hqm.
- Client supports hqm only: tries to connect to the host using hqm.
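The scenarios above amount to a small decision table. A minimal Python sketch, where `pick_target` and its arguments are hypothetical names and "hqm" is just this thread's placeholder identifier for HTTP-over-QUIC:

```python
def pick_target(supports_hqm, supports_https, fresh_alt_svc,
                prior_knowledge, host):
    """Return an ordered list of (protocol, target) connection attempts
    mirroring the scenarios listed above."""
    if fresh_alt_svc is not None:
        # A fresh Alt-Svc alternative wins regardless of preference order.
        return [("hqm", fresh_alt_svc)]
    if prior_knowledge:
        # Managed networks, whitelists, etc.: go straight to hqm.
        return [("hqm", host)] + ([("https", host)] if supports_https else [])
    # No prior information: attempt in preference order (or race them).
    attempts = []
    if supports_hqm:
        attempts.append(("hqm", host))
    if supports_https:
        attempts.append(("https", host))
    return attempts
```

For example, a dual-capable client with no prior information would try `[("hqm", host), ("https", host)]`, racing or sequencing them as it sees fit.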
@mcmanus, I'd agree that we don't need to update 7230 to allow QUIC. The authoritative endpoint for an https origin is a TCP port, and Alt-Svc allows that authoritative endpoint to delegate to different endpoints -- other hosts, other ports, other protocols. But the authoritative endpoint is always TCP to the port given in the URL. That's why we're able to use the same scheme and avoid all the branching of stuff underneath it; the origin hasn't changed.
But when there's a service in which the authoritative endpoint is over QUIC -- a device-to-device REST API, or a device's configuration page -- then that requires a different way to express it. I'd be fine with something like https://www.example.com:q443/, except that RFC 2396 restricts the port number to digits only. (It's a little odd, in retrospect, that 2396 describes things in terms of IP and port, with no notion of ports being specific to their transport.) @hardie is right that we're not defining a different name-delegation process here.
I'm leery of saying that clients should (or SHOULD, or even MAY) guess that an origin might be available elsewhere without a way to know that. That's a proposition we rejected when discussing how to find TLS-protected equivalents of http:// origins.
Sure, we could put a checkbox in the UI or a parameter in the config file that says "'https' doesn't mean what you think it means," then couple that with "prior knowledge" language in the spec. But it just seems cleaner to designate a scheme that's semantically equivalent to 'https' except that the ports in the URI are relative to a different transport protocol.
One example of such an "authoritative endpoint is over QUIC" case is a QUIC proxy - that is to say, a semantically "HTTP proxy" which one speaks to via QUIC. Chrome today, for example, can be configured to speak QUIC to a proxy by using the "scheme" "quic" in the proxy.pac function.
Why don't you use ALTSVC at the proxy? It's not like proxy setup is a commonplace action. A new scheme isn't something one does lightly.
On Mon, Feb 6, 2017 at 8:12 PM, Martin Thomson notifications@github.com wrote:
Why don't you use ALTSVC at the proxy? It's not like proxy setup is a commonplace action. A new scheme isn't something one does lightly.
(Perhaps I used scheme in the wrong way?) We have support for "proxy", "socks", "socks5" proxy schemes, so adding "quic" was quite straightforward and is not user visible (though it is visible in the pac file, of course). In any case, it's not clear to me that Alt-Svc applies to proxies. As I read the spec, Alt-Svc defines a mechanism for an origin to specify a different server. I don't think of a proxy as an origin, though I guess one could? I'd be curious to hear more about this! That being said, in the context of proxies it seems desirable for users to be able to deploy a QUIC proxy without needing to also deploy an https proxy.
An ALTSVC frame should work for the proxy if the intent was to move the proxy. The ALTSVC frame is processed hop-by-hop. I don't know how well that use case has been tested, but it should be possible to advertise an alternative for the proxy origin.
And as far as the proxy goes, don't you have to deploy a TCP variant for now and into the foreseeable future if you want to have it work? It's just like any other service, I'd imagine, and the h2 server stack isn't that much extra to have.
I thought we had a discussion about Alt-Svc plus proxies which concluded that Alt-Svc is not for finding proxies:
https://github.com/httpwg/http-extensions/issues/62
quoth mnot:
"Yeah. Alt-Sv is for finding an origin, not for finding a proxy -- a proxy might use it, though. This should all be clear based upon reading of RFC7230, but if not we could add a sentence or two to clarify."
So I don't think Alt-Svc applies here.
Is the QUIC proxy in question actually HTTP/QUIC proxy, or a bit more like a SOCKS proxy that would tunnel other protocols over QUIC?
It's an "HTTP proxy" that the client speaks to via QUIC, I guess you could say. This is similar to the "https" proxy scheme that chrome supports when it wants to talk to an "HTTP proxy" over a TLS connection (which may result in an HTTP/2 connection to the proxy as the result of ALPN)
Ah, so I understand better that there's a slight dichotomy here. If a client is configured to use an HTTPS proxy, and an origin advertises "hq", what proxy should the client use? I don't think it's fair to assume that a single proxy application has to support both HTTPS and HTTP/QUIC.
If an HTTPS proxy were to offer an alt-svc itself that points to a standalone HTTP/QUIC-only proxy, then does that upset things when the client comes to try to access a new origin with an https scheme?
I think this is channeling some of McManus' earlier comments. If this is repeating discussion of that old proxy thread then apologies, I'll do some more background reading.
@LPardue, yeah, that's an old discussion. A client makes a decision to use the proxy first, which results in ignoring Alt-Svc. See RFC 7838.
We've talked about four routes here:
1) Long live TCP. All HTTP sites will be dual-stack, and the authoritative endpoint will be TCP. Clients MUST have a way to find the secure delegation from that TCP endpoint to QUIC, though we might define alternatives to Alt-Svc headers which could be done without TCP. (Alt-Svc in DNS?)
2) Update RFC3986. RFC 3986 explicitly states that "The type of port designated by the port number (e.g., TCP, UDP, SCTP) is defined by the URI scheme." We could update the URI to contain a protocol designator, whose default is defined by the URI scheme. As it would be omitted in all existing URIs, their interpretation remains unchanged. Then https://www.example.com:q443/ refers to QUIC on UDP 443.
3) Define a new scheme. See pull request.
4) What's it matter? Assume that HTTP/QUIC on port 443 is likely equivalent to HTTP/TCP on port 443 if the cert is valid and call it good.
We explicitly do not consider the same host on different ports equivalent authorities, even if they happen to be listening on both ports with the same cert. Why is TCP 443 vs. UDP 443 any different from TCP 443 vs. TCP 444? (4) seems like a security issue waiting to happen.
I'm going to reverse myself and disagree that (3) is undeployable on the web in the near-term. App-to-app handoff on many (most?) platforms now uses custom URI schemes. Apps that encounter unknown URI schemes ask the OS; the OS is able to invoke appropriately-registered apps or tell the user they need to get a capable app (e.g. launching "nonsense://" on Win10 produces exactly such a prompt).
Some cursory testing shows that browsers block navigations to URI schemes that don't have an OS-registered handler. But if you have two browsers, one QUIC-capable and one not, when you click an httpq:// link in the non-QUIC browser the OS will launch the QUIC-capable browser for you and you proceed on your merry way. This seems almost exactly what we'd want to have happen.
(2) probably is undeployable, because legacy apps will attempt to parse the URI and declare it invalid. They're semi-used to seeing unknown schemes (xboxliveapp-1297287741://, anyone?), but changes that break the parsers would be seriously painful.
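The parser-breakage concern is easy to demonstrate with Python's standard-library URL parser: given option (2)'s hypothetical transport designator in the port field, the authority parses syntactically, but the port accessor rejects the non-numeric value.

```python
from urllib.parse import urlsplit

# Option (2)'s transport designator in the port field breaks existing
# parsers: the authority splits fine, but the port is no longer numeric.
parts = urlsplit("https://www.example.com:q443/")
try:
    parts.port          # stdlib insists the port be an integer
    port_ok = True
except ValueError:
    port_ok = False

assert port_ok is False
assert parts.hostname == "www.example.com"
```

Other URL libraries vary in strictness, but an integer-typed port field in their APIs is the common case, which is exactly the deployment problem described above.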
What about extending the HTTP URI syntax in a way to give connection hints? For example (very likely not the right syntax):
https://www.example.com#svc=hq.443/
Where, in the above example, the origin (and thus the Host header and SNI) would come from the part before the "#". On failure to connect, a TCP connect (i.e. without the hint) could also be used. Cache entries would not include the part following the "#" as part of the cache key.
This is a variant of option 2, but with the difference that the Origin and object cache keys aren't impacted. This is only appropriate for secured connections (eg, TLS / HTTPS).
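A rough Python sketch of how a hint-aware client might split this. The `#svc=<proto>.<port>` syntax is the commenter's illustrative one (explicitly "very likely not the right syntax"), and `split_hint` is a hypothetical helper:

```python
from urllib.parse import urlsplit

def split_hint(url):
    """Split the illustrative '#svc=<proto>.<port>' hint from a URL.
    Returns (cache_key_url, hint) where hint is (protocol, port) or None.
    The origin and cache key deliberately ignore the fragment."""
    parts = urlsplit(url)
    base = parts._replace(fragment="").geturl()
    if parts.fragment.startswith("svc="):
        proto, _, port = parts.fragment[4:].partition(".")
        return base, (proto, int(port.rstrip("/")))
    return base, None

base, hint = split_hint("https://www.example.com#svc=hq.443/")
assert base == "https://www.example.com"
assert hint == ("hq", 443)
```

A client that doesn't understand the hint just treats the fragment as a fragment, which is the backward-compatibility property (and, as noted below, also the problem: the syntax already has defined semantics).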
We can't reasonably change the semantics of any of the fields of the URI: we have to assume that any valid field is in use and that an invalid field would trigger rejection.
(e.g., your example syntax would turn into https://www.example.com when passed to a client that didn't understand this. Or, from another angle, that syntax is already a valid HTTPS URI with clear semantics.)
There are a number of different scenarios and interactions that I think need to be considered in this discussion, I’ll try to capture the two main ones below. For precision and clarity I come from a CDN background and so will define some terms and concepts I’m using below – they may or may not perfectly match how other parts of the community use the same term hence why I’m going to give a brief description of how I’m using them here.
Client: The user agent which is presenting a URI in order to receive the content from the site that URI identifies.

Site: A hostname grouping together a set of paths which identify some resources sourced from one or more origin servers. We can assume the resources identified are the same regardless of delivery protocol (HTTP, HTTPS, QUIC).

Origin Server: An authoritative source for the resources within a site. If fronted by delivery nodes, the origin may not be directly accessible by clients and may not deliver content over the same protocols as the delivery nodes use toward the client.

Delivery Node: Client-accessible servers capable of serving resources from one or more sites. In a worldwide distributed CDN there could be hundreds or even thousands of delivery nodes, with request routing being used to direct clients to particular nodes. Different delivery nodes may be on different software versions or have different specializations.

Request Routing: The process by which a client requesting the URI gets connected to an appropriate delivery node capable of delivering the requested resource. There are a number of ways request routing can be implemented:
- DNS-based request routing: The site hostname gets resolved into one or more IP addresses of the delivery nodes; depending on where you are (and the state of the delivery nodes, etc.) the list of IP addresses may differ.
- HTTP 30x-based request routing: The client initially connects to a request routing application, which returns a 302 redirect containing a URL pointing at a specific delivery node (by IP or hostname) and an updated path.
- Resource-based request routing: The results of an API call or the contents of a resource provide a URL directing subsequent requests to a specific delivery node. For instance, the BaseURL element of a DASH manifest may contain one or more URLs to delivery nodes.
- Anycast-based request routing: All delivery nodes share the same address and the network routes the connection to the closest node.
One particular implication of these definitions: whilst we could assume that all sites will be dual-stack, this doesn't imply that every delivery node for that site is capable of being dual-stack. This could be because some nodes haven't been upgraded, or because some nodes are specialised just to do HTTPS or just to do QUIC.
The key scenarios that we need to ensure we have a solution covering:
- Every delivery node must support both HTTPS and QUIC.
- Much of the work on minimizing round trips at connection start is irrelevant if the client first has to make a TCP connection.
- The delivery node is placed under higher load, having to perform additional TCP session establishment and TLS negotiation on that temporary TCP connection. For repeat visits to a single-server site the client can remember the QUIC support and potentially optimize the TCP connection away; for a multi-server site the client may get a different delivery node every time and so have to do the capability check every time.
You've outlined four potential ways that, given just a URL, QUIC could be used directly:
1. Rely on Alt-Svc and the assumption that all delivery nodes are dual-stack. This limits the flexibility and optimizations that could be done with a node only delivering QUIC packets.
2. Alt-Svc in DNS. This seems a technically viable approach, although I can't judge what complexities there are in introducing this and allowing applications to access the information. A human reading the URL also can't tell what protocol it is for. For a browser, same-origin policies may not be tripped when the client switches between protocols (which is probably a good thing).
3. Adding an indicator into the port number (:q443). As others have mentioned, this is almost certain to break existing URL parsers and APIs, which typically use an integer datatype.
4. Define httpq as an explicit scheme. This makes it clear what protocol the URLs are for, although it may have issues around same-origin semantics and may sometimes cause issues if the scheme isn't defined in the OS/programming language.
It may be that one solution doesn't fit all the use cases, and the flexibility of multiple of them is what we end up needing.
Thomas
From: Mike Bishop. Sent: 28 March 2017. Subject: Re: [quicwg/base-drafts] HTTP/QUIC without Alt-Svc? (#253)
Another data point: currently on GCE you cannot have a UDP load balancer and a TCP load balancer on the same IP. Because of this, we cannot set up QUIC for testing right now using GCE-provided load balancers.
Deferring this. Note that nothing prevents a client from being configured to speak QUIC only. Similarly, nothing prevents the definition of new means of learning about a QUIC-only server.
@martinthomson asked me to discuss here. There's no reason why clients shouldn't be able to just try QUIC with any server (a la Happy Eyeballs) without alt-svc.
By that logic, why can't you just try https:// for any http:// URL, or try https://host:443/ if connecting to https://host:8443/ fails? An origin specifies a scheme, protocol, and port; scheme by definition bundles a transport protocol. If you change any of those, you're talking to a different origin. Alt-Svc provides a mechanism for an origin to bless a different endpoint as being co-authoritative for that origin, and that's what H3 does currently. Lacking an Alt-Svc delegation, you're talking to a different origin.
Now, if we chose to go update the definition of the "http" and "https" schemes to specify either/both transport protocols, we could perhaps do so; while it stretches the spirit of RFC 3986, that doesn't exactly say that it can specify only one transport protocol.
And that’s why we should use UDP port 443 for HTTP/QUIC as the default (since QUIC uses portions of TLS to do the encryption, and so HTTP/QUIC is HTTPS over UDP, which is what UDP port 443 is for).
We had a similar discussion in TAPS, and our drafts include some discussion on transport option gathering and racing (https://taps-api.github.io/drafts/draft-ietf-taps-impl.html). I like to call this "Happy Eyeballs on Steroids", and the major problem that arises from racing between the combinations of multiple IP versions (v4 vs. v6), transports (TLS over TCP vs. QUIC over UDP), and access networks/PvDs (WiFi vs. Wired vs. LTE) is the state explosion you run into if you want to race all of them. We solve this in TAPS by delaying the racing of less-preferred or less-likely candidates.
In my opinion, doing "Happy Eyeballs on Steroids" racing of HTTP/QUIC against HTTP/TLS is fine as long as implementations make sure not to try all combinations on all connection attempts. This can be easily solved by a combination of caching results (try QUIC in a race if you have seen an Alt-Svc announcement or it has worked in the past), well-chosen timings, and probabilistic choice of what to probe.
Using connection-establishment racing as a way to try out QUIC, while still falling back to TLS/TCP in a reasonable amount of time, is certainly a useful strategy and one that I'd like to promote for clients capable of techniques like Happy Eyeballs.
As @philsbln mentions, the key caveat here is that simultaneous racing is almost never the right answer. Ideally, you'd pick your preference (QUIC?) and then only start attempts for other endpoints/protocols/paths around the same time that the transport protocol would be doing its retransmission.
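The staggered approach described above can be sketched with asyncio. This is a minimal illustration, not a real client: the two connect coroutines are dummies standing in for actual QUIC and TLS/TCP handshakes, and the 0.25 s stagger is an assumed placeholder for a retransmission-timeout-scale delay.

```python
import asyncio

async def staggered_race(preferred, fallback, stagger=0.25):
    """Race two connect attempts, but give `preferred` a head start of
    `stagger` seconds instead of firing both at once, per the advice
    above. Error handling is omitted for brevity; a real client would
    fall back if the preferred attempt fails outright."""
    async def delayed_fallback():
        await asyncio.sleep(stagger)
        return await fallback()

    tasks = [asyncio.ensure_future(preferred()),
             asyncio.ensure_future(delayed_fallback())]
    done, pending = await asyncio.wait(tasks,
                                       return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

# Dummy attempts standing in for real QUIC and TLS/TCP handshakes:
async def quic_attempt():
    await asyncio.sleep(0.05)   # pretend the QUIC handshake succeeds quickly
    return "quic"

async def tcp_attempt():
    await asyncio.sleep(0.01)
    return "tcp"

winner = asyncio.run(staggered_race(quic_attempt, tcp_attempt))
assert winner == "quic"   # the fallback never even started
```

The design point is the `delayed_fallback` wrapper: the fallback only consumes resources (and network state) if the preferred transport hasn't completed within the stagger window.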
Even when you do have Alt-Svc telling you that an HTTP endpoint is HTTP/3 (and thus, QUIC) capable, being able to quickly fall back to HTTP/2 using this Happy Eyeballs approach is a benefit. This is to some degree a requirement to not fail on the <10% of networks that mistreat UDP traffic. So, racing is a technique that goes beyond discovery.
Using racing as a way to discover that an endpoint supports QUIC, as @ekr brings up, is certainly interesting. It's nicely generic, doesn't require changing lots of application protocols, and can allow clients to tune how optimistic they are about QUIC. You could imagine only trying QUIC first every so often, such that the user wouldn't notice the cases in which you'd need to wait the extra RTT before TLS/TCP is attempted in case QUIC is not supported. As QUIC becomes more widely deployed, it would make sense to try it more often. And, as a client learns which endpoints support QUIC, it can remember this to influence future decisions.
The main sticking point is knowing which port to use. Protocols other than HTTP that start being able to run over QUIC may have it in their best interests to define a well-known port to make this possible. The current HTTP spec that uses Alt-Svc is quite clear that the port may be anything. So, a client cannot know what port to try to optimistically race over until it connects over HTTP/2. Once it does get Alt-Svc, it can start racing connections after that.
However, if in deployment, we see a common pattern of HTTP/3 servers squatting on a certain UDP port (say, 443), then it may be in clients' best interests to try connecting to that port slightly ahead of an HTTP/2 attempt if they believe that QUIC is indeed preferable. That likely won't happen until we see pretty widespread deployment, but it is a reason to start solidifying some of the mechanisms for identifying or defining ports without needing a TCP connection.
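The Alt-Svc header that this port discovery relies on is defined in RFC 7838. As a rough illustration of what a client caches before it can start racing, here is a simplified parser sketch; it handles only the common `proto="host:port"; ma=N` shape, not the full ABNF (quoted-string escapes, percent-encoding, other parameters):

```python
import re

# Simplified parser for an Alt-Svc header value (RFC 7838), enough to
# pull out the advertised protocol, optional host, port, and max-age.
ALT_VALUE = re.compile(r'([\w%.+-]+)="([^":]*):(\d+)"((?:;\s*\w+=\w+)*)')

def parse_alt_svc(value):
    alternatives = []
    for proto, host, port, params in ALT_VALUE.findall(value):
        ma = re.search(r'ma=(\d+)', params)
        alternatives.append({
            "protocol": proto,
            "host": host or None,   # empty host = same host as the origin
            "port": int(port),
            "max_age": int(ma.group(1)) if ma else 86400,  # RFC 7838 default
        })
    return alternatives

alts = parse_alt_svc('h3=":443"; ma=2592000, h2="alt.example.com:8000"')
assert alts[0] == {"protocol": "h3", "host": None, "port": 443,
                   "max_age": 2592000}
```

Once a client has one of these entries cached (and still fresh per `max_age`), it knows exactly which UDP port to race on the next visit.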
I have concerns about how happy eyeballs plays out for HTTP connection coalescing. I think it can be specified in a way that resolves my concerns, and defining such guidance is crucial for making coalescing feasibly deployable.
How about making a solid DNS convention for alt-svc? If you use IP address directly, you probably also have port and protocol.
@mikkelfj do you mean the ALTSVC DNS record as proposed by Mike's I-D?
I have not studied Mike's I-D (but I should take a look). I'm not concerned about the details, just that DNS becomes the "normal" way to resolve such things. SRV has had a history of being ignored, in a chicken-and-egg fashion.
Also note that DNS could include supported versions.
Link to I-D for reference https://tools.ietf.org/html/draft-schwartz-httpbis-dns-alt-svc-02
The idea is that all the information defined for HTTP/3 Alt-Svc would appear in that record.
@mikkelfj Putting the information whether QUIC is available in an ALTSVC DNS record does not eliminate the need for using Happy Eyeballs: If UDP to the announced port is blocked for the client, the client has to timeout otherwise. Still, having an ALTSVC DNS record would help to correctly tune the timings and probing probabilities for Happy Eyeballs.
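The tuning described above could be sketched as a tiny policy function. All thresholds and delays here are invented for illustration; no spec defines these numbers:

```python
def quic_head_start(have_dns_hint: bool, quic_success_rate: float) -> float:
    """How long (seconds) to let a QUIC attempt run before starting
    the TCP fallback. Purely illustrative heuristic: a DNS hint plus
    a good history of QUIC success earns a longer head start."""
    if not have_dns_hint:
        return 0.0    # no hint at all: race TCP immediately
    if quic_success_rate >= 0.9:
        return 0.25   # hint plus reliable UDP history: generous head start
    return 0.05       # hint, but UDP has been flaky: minimal head start

print(quic_head_start(True, 0.95))  # -> 0.25
```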
@LPardue I see no problems with combining Happy Eyeballs and connection coalescing, but the relationship between them should really be defined somewhere. For me, connection coalescing opportunities are roughly the same thing as candidates for doing Happy Eyeballs, and should as such get at least a 1 RTT advantage. So, in practice: try whether connection coalescing works; if it is slow or behaving strangely, start making new connections with Happy Eyeballs.
I was thinking about a QUIC preload list, but putting Alt-Svc into DNS works much better.
Also, it might be a good idea to put HSTS into DNS too, since the HSTS preload list won’t scale well, but that’s a separate proposal.
Having skimmed the ALTSVC DNS I-D, my only immediate concern is that it would probably be cumbersome for administrators to type ALTSVC into your typical DNS admin panel, whereas TXT would be easy, if the format is readable.
ALTSVC DNS record does not eliminate the need for using Happy Eyeballs
Maybe not, but it does eliminate the need to wait for an Alt-Svc header response in cases where there might not be an endpoint listening on TCP port 80, or where the client does not have a TCP stack at all.
Maybe ALTSVC is a bit misleading, if QUIC goes on to be a primary service rather than an alternative to something existing. It would be better if HTTP/3 could announce itself on neutral ground, although I agree that strong interop with current HTTP is important.
FWIW, quic-ah proposes to use the extension field of the ESNI DNS Record to designate the QUIC version the server supports.
Anyways, I think we agree that HTTP3 can be used if there is an arrangement, regardless of what the arrangement method is; e.g. alt-svc header, alt-svc record, ESNI record...
The question is if we should allow use of HTTP3 without prior arrangement.
I think we should allow that. There's not always going to be DNS (consider DoH, which will benefit from HTTP/3 not having HoLB), and being able to connect using HTTP/3 from the first connection helps in certain deployments where we can be fairly certain there is no firewall in between (e.g. server-to-server traffic).
The question is if we should allow use of HTTP3 without prior arrangement.
I think we should allow that.
yes
I see no problems with combining Happy Eyeballs and connection coalescing, but the relationship between them should really be defined somewhere
I definitely think this can be done, and that there is benefit in shaking out an approach that can be codified somewhere. There are many spinning plates to manage; add to the mix the ORIGIN frame, Additional Certificates, and even Alt-Svc in the load-balancing sense (which has been broken in H2 but is likely to succeed in H3). Getting this right seems like an arcane art, or luck, or both :)
Getting this right seems like an arcane art, or luck, or both :)
Well, if you can convince DDoS attackers to move elsewhere ...
FYI, I just opened https://github.com/httpwg/http-core/issues/194
Discussed in Tokyo; this needs to be resolved by the HTTP WG, and we need to incorporate their resolution into the doc.
While HTTP/QUIC doesn't formally require a client to implement Alt-Svc, there's no discovery mechanism other than Alt-Svc provided, so you're not going to get very far without it. That makes a full HTTP(/TLS)/TCP stack mandatory simply to open the connection in the first place. For various reasons (e.g. an embedded device in a controlled environment), a client might want to dispense with TCP altogether when it knows that the endpoint will support QUIC. It might also be useful in testing to be able to directly reference a QUIC endpoint.
Should we mint new scheme(s) that allows direct reference to a resource served exclusively over HTTP/QUIC?
(Note, in HTTP/2 the answer was "no," because HTTP/2 could be negotiated using the same TCP connection. QUIC doesn't have that luxury.)