httpwg / http-core

Core HTTP Specifications
https://httpwg.org/http-core/

QUIC and https:// #194

Closed martinthomson closed 5 years ago

martinthomson commented 5 years ago

Is it possible that an https:// URI might be used to directly initiate an HTTP/3 connection?

This is a tricky and nuanced question. I think HTTP recognizes it as a possibility in its current definition of the scheme, but I would like this to be much clearer. The text implicitly blesses only TLS and TCP port 443 as a valid destination.

Making this more direct might be hard without buttoning it down too tightly. It might be possible to offer examples of valid options that also include QUIC.

tfpauly commented 5 years ago

Bringing over some context from the QUIC thread, clients may choose to employ a "happy eyeballs" strategy between HTTP/3 and HTTP/2 connection establishment. This may be necessary in some cases even when HTTP/3 support is advertised via Alt-Svc, since UDP may be blocked or unreliable on some networks.

Currently, HTTP/3 does not have a well-known port. However, if a majority of server deployments end up using the same port, then clients could optimistically try to connect to that UDP port for https:// URIs as HTTP/3, and fall back to HTTP/2.

So, the point to consider: if we don't let https:// URIs officially be used to directly initiate HTTP/3, there may be little to stop clients from doing just that in a de facto manner. With that in mind, we can either (a) define officially how to use https:// directly for HTTP/3 or (b) add something to make it impossible to circumvent Alt-Svc, if we thought for some reason that direct connections would be harmful.
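For illustration only, here is a minimal sketch of the racing-with-fallback behaviour described above, written against Python's asyncio. The quic_connect() helper and the head-start value are assumptions made up for the example; nothing here is defined by a spec.

    import asyncio
    import ssl

    async def quic_connect(host, port):
        # Hypothetical helper: complete a QUIC handshake on UDP `port` and return
        # a connection only if the server negotiates "h3" via ALPN.
        raise NotImplementedError

    async def tcp_tls_connect(host, port):
        # TCP + TLS fallback, offering h2 / http/1.1 via ALPN.
        ctx = ssl.create_default_context()
        ctx.set_alpn_protocols(["h2", "http/1.1"])
        reader, writer = await asyncio.open_connection(host, port, ssl=ctx)
        alpn = writer.get_extra_info("ssl_object").selected_alpn_protocol()
        return ("tcp", alpn, reader, writer)

    async def connect_https(host, port=443, quic_head_start=0.25):
        # Optimistically try HTTP/3 over UDP first; if it has not completed after
        # a short head start (UDP blocked, no listener, lossy path), race TCP+TLS
        # against it and use whichever connection succeeds first.
        quic_task = asyncio.create_task(quic_connect(host, port))
        try:
            return await asyncio.wait_for(asyncio.shield(quic_task), quic_head_start)
        except asyncio.TimeoutError:
            pass                 # QUIC still in flight; start the fallback too.
        except Exception:
            quic_task = None     # QUIC failed outright (e.g. port unreachable).
        tcp_task = asyncio.create_task(tcp_tls_connect(host, port))
        pending = {t for t in (quic_task, tcp_task) if t is not None}
        while pending:
            done, pending = await asyncio.wait(pending, return_when=asyncio.FIRST_COMPLETED)
            for task in done:
                if not task.exception():
                    for other in pending:
                        other.cancel()
                    return task.result()
        raise ConnectionError("both QUIC and TCP attempts failed")

A real client would presumably also remember which transport worked for a given origin and skip the losing attempt on later connections.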

LPardue commented 5 years ago

Bringing over some context from the QUIC thread, clients may choose to employ a "happy eyeballs" strategy between HTTP/3 and HTTP/2 connection establishment.

The devil is in the details here. IMO it's more likely to be TCP+TLS vs. UDP happy eyeballs (combined with IPv4 and IPv6). Both will use ALPN to determine the actual protocol, assuming it gets that far. So by my reasoning, it's not an H2 vs. H3 question but more an HTTP-over-TCP vs. HTTP-over-UDP one.

tfpauly commented 5 years ago

@LPardue Indeed! Or to be even more specific, the fallback between the options needs to be considered at least to the transport handshake (TCP and QUIC/UDP), and optionally through the full security handshake (TLS/TCP and still QUIC/UDP).

My point is that if a client starts with an https:// URI, tries to connect with QUIC to the port it expects on that host, and the connection works and the ALPN matches, it will likely just go ahead and use that connection.

tfpauly commented 5 years ago

It's interesting also to think about how we would interpret the port field of the URI if we sanction https:// being used for HTTP/3.

I can see this being reasonable:

https:// URIs may indicate HTTP/TLS/TCP on TCP port 443 or HTTP/QUIC/UDP on UDP port 443

But if we have a URI with a port specified, https://www.example.com:12345, it's odd to assume that port would be usable across TCP and UDP, and still odder to try to stuff two port numbers into the URI. You could have one or the other, and annotate the UDP port as something like u12345?

LPardue commented 5 years ago

Agree with all your points @tfpauly.

In considering some of these ideas previously, I've tried to think about what an all-UDP future looks like, and how/if we could ever deprecate TCP. QUIC's reliance, to date, on TCP-based Alt-Svc is an unfortunate restriction.

MikeBishop commented 5 years ago

@tfpauly, we've previously looked at annotating the port -- it would be an ideal solution for directly addressing an H3 endpoint. However, the URI grammar doesn't allow anything but digits in the port component, and updating it would be challenging since there are many (MANY) URI parsers out there of varying strictness.

Fundamentally, the issue is that "https" currently defines the origin to be controlled by a host at that TCP port, and so we need some means of delegation to trust that a different endpoint is still the same origin. Different TCP ports are not the same origin, and different schemes on the same port are not the same origin. Saying that the same port on different transport protocols are the same origin feels... interesting. But at the very least, it's not what the definitions currently say.

kazuho commented 5 years ago

@tfpauly

But if we have a URI with a port specified, https://www.example.com:12345, it's odd to assume that port would be usable across TCP and UDP, and still odder to try to stuff two port numbers into the URI.

Maybe we can argue that it is possible to interpret :12345 to mean both TCP and UDP, the same way as www.example.com might resolve to v4 and v6 addresses.

To generalize, the issue of not being able to pinpoint the server's tuple already exists in HTTPS, because the server's identity is specified by a DNS name which can always resolve to multiple addresses. So what's the issue with applying the same rule to ports as well?

martinthomson commented 5 years ago

That is exactly what I plan to assume. The multiple address analogy is a nice one.

LPardue commented 5 years ago

To play devil's advocate in line with where this is going, does Alt-Svc start to become superfluous? We have robust mechanisms for version negotiation and application-layer protocol negotiation, and happy-eyeballing UDP and TCP seems like it might be less complex (in some respects) to implement in the client than a fully compliant Alt-Svc component.

martinthomson commented 5 years ago

We will depend on Alt-Svc for a good long time, I think. If the odds of successfully completing a QUIC connection are low, we won't want to attempt it.

LPardue commented 5 years ago

To some extent, yes. But in the same breath, one way to treat the presence of Alt-Svc is as just a flag telling the client that the server thinks it is reachable over UDP, for some period of time. It comes with the overhead of the client needing to assert that things are true, and to perform additional authority checks.

I think Alt-Svc might need some help in extolling its virtues, compared to simpler approaches.

igorlord commented 5 years ago

The multiple address analogy is a nice one.

@martinthomson, @kazuho,

I actually find the multiple address analogy pointing in the opposite direction. The only reason you are allowed to assume that multiple addresses (of the same or different IP version) name valid endpoints is because you have an explicit delegation by the owner of the domain name via DNS.

Your actions would be very questionable if you were to try constructing an IPv6 address from an IPv4 address yourself, just because you read a blog post saying that doing so sometimes works. Likewise, you should not try guessing additional IP addresses for random hostnames just because you've seen it work for some other domains. I would think that guessing IP addresses is wrong, even if you sometimes get a working certificate back. Such guesses are not supported by standards, reduce the ability of resource owners to control network resource usage, and undermine security by removing the need for an explicit delegation from the resource owner (an owner who might be very surprised by you trying different IP addresses, even if such an emerging practice has been noted by numerous blog posts "out there").

There should be a standards-based method for delegation to IP addresses and to session protocols. And such methods should be wary of suddenly changing reasonable client-behavior assumptions that had been valid from the dawn of HTTP.

P.S. Imagine a server that blacklists source addresses of all unexpected packets, which are all packets other than TCP 443 and ICMP.

kazuho commented 5 years ago

@igorlord I agree that servers would start seeing traffic on UDP 443. (For context, my point was in response to how a URL is to be interpreted, and I think that argument still holds in that context.)

There should be a standards-based method for delegation to IP addresses and to session protocols.

For UDP port 443, I think it wouldn't be unreasonable for clients to send HTTP/3 packets happy-eyeballing with TCP 443, considering the fact that the port is now registered for "https". OTOH, it might make sense to be more cautious about other port numbers, considering the fact that the UDP ports might be used for different purposes and QUIC packets might confuse the process that is bound to that port.

igorlord commented 5 years ago

For UDP port 443, I think it wouldn't be unreasonable for clients to send HTTP/3 packets happy-eyeballing with TCP 443, considering the fact that the port is now registered for "https".

Having UDP port 443 registered for "https" is a necessary condition for sending https-scheme traffic to that port by default, but is it sufficient? What if the server is not expecting such new client behavior (and, say, classifies it as an attack, blacklisting the client)?

ExE-Boss commented 5 years ago

What if the server is not expecting such new client behavior (and, say, classifies it as an attack, blacklisting the client)?

That seriously shouldn't happen, as the volume of traffic necessary from a single client to constitute an attack far outpaces normal browsing behaviour.

In this case, it would only be the initial attempt at establishing a QUIC connection, which would go unanswered by said server, whereas the old TCP-based connection would succeed, and probably no more QUIC connections would be attempted during that session.

igorlord commented 5 years ago

What if the server is not expecting such new client behavior (and, say, classifies it as an attack, blacklisting the client)?

That seriously shouldn’t happen, as the volume of traffic necessary from a single client to constitute an attack far outpaces normal browsing behaviour.

An attack using a zero-day vulnerability or an unpatched exploit requires negligible traffic volume. I can totally see a security device blocking a src address of any unexpected packet for a period of time. Blocking clients engaged in questionable activities is commonplace. The question is only how sensitive and specific the triggering action is. (For example: configuration of Reconnaissance Protection)

LPardue commented 5 years ago

For the sake of completeness, we ruled out using a new special case DNS name (.quic), right?

martinthomson commented 5 years ago

Alternative names would result in alternative origins, which would be terrible. And that doesn't even begin to cover the name resolution issues arising from special use names.

LPardue commented 5 years ago

So, a similar bucket of problems to an alternative scheme, but with more piranhas. Thanks for the clear and concise answer!

igorlord commented 5 years ago

The only "clean" DNS thing I can think of is an "HTTP5" record to contain AltSvc-like delegation. An AltSvc via TXT record is likely more deployable.

kazuho commented 5 years ago

The issue with a DNS-based solution is that it only covers DNS. We sometimes use an IP address or some other address resolution scheme (e.g. /etc/hosts) to specify the server.

kazuho commented 5 years ago

@igorlord

I can totally see a security device blocking a src address of any unexpected packet for a period of time. Blocking clients engaged in questionable activities is commonplace.

While I agree that such blocking schemes are sometimes deployed, I am not sure they should be considered an argument against protocols evolving.

We have had devices that drop TCP packets that try to use Fast Open. It is true that such devices have hindered the deployment of TCP Fast Open. OTOH, the existence of such devices has not (and IMO should not have) prevented TCP Fast Open from being defined and from deployment being attempted.

It would be a good idea to raise the concern regarding security devices. However, as I stated, I prefer evolving even though there could be rough edges.

igorlord commented 5 years ago

@kazuho, I agree. I think experience with TFO is very instructive here, and we will do well to learn from its lessons:

vrubleg commented 5 years ago

I think that it is OK to require that HTTP/2 and HTTP/3 work on the same port (TCP in the first case, UDP in the second). That is 443 by default, or whatever port is chosen in the URL. Otherwise it will be strange and misleading. Just imagine: you make a request to https://example.com:888/ and want to see what the request looks like in Wireshark, so you set a filter like tcp.port == 888 || udp.port == 888, because that is what you expect for such a request. It would be very surprising if the actual communication happened on some different port because that port was specified in an Alt-Svc header in a response to a previous request.

If HTTP/2 and HTTP/3 are always on the same port, clients could try to connect over both TCP+TLS and UDP+QUIC simultaneously when the browser doesn't know whether the server supports QUIC. Seems like a perfect solution.

An additional idea: the fact that the client is trying to establish a QUIC connection in parallel with the regular TCP+TLS one could be reflected in the ClientHello message, for example. Some unique ID could be provided to help the server understand that both connections are from the same client. In that case, if the server has already established the corresponding QUIC connection, it could close the TCP+TLS connection immediately, without further TLS initialization, to save some time.

royfielding commented 5 years ago

This is really a question of establishing authority for the service, rather than what port is being used. The actual port number doesn't matter; it's control/ownership over that port number that matters.

The port was essential for non-TLS http because we did have different server owners on different TCP ports and no certificate to verify authority. It was common for institutions (like UCI and MIT) to have institutional sites on port 80 and student sites on other ports. Virtual hosts largely replaced that model of deployment, but not entirely. Non-dedicated hosts typically run at least two HTTP services all the time (e.g., CUPS, etc.).

With TLS and https, the certificate provides authority verification for a domain but the port determines what certificate is sent (and also, iirc, the scope of cookies). It was much less likely, in the past, to have multiple TLS authorities per host, but I suspect that won't be true in an https-only world.

TCP and UDP ports are not jointly controlled, so merely claiming they are the same does nothing to avoid the issue of one person owning TCP:443 and someone else owning UDP:443 (perhaps without the main site even knowing it or h3 exists). In general, that means the https authority on TCP (port 443 by default, but possibly others) must authoritatively indicate when a UDP port is an alternative service, even if the certificate matches.

So, that is all square one.

The current deployments of gQUIC assume that there is only one authority for an entire domain (actually, many domains) and the machine owners have complete control over all ports, thus there is no reason to care about anything other than the certificate. I don't think we even require that the certificate talks about QUIC. Is that a reasonable assumption in general? No.

IMO, that means accessing https via QUIC implies some prior authoritative direction from the owner of that https TCP port, otherwise known as Alt-Svc. If that isn't desired, the way forward is to mint a new scheme (httpq, if you like) which has a different indicator of authority.

In any case, I don't see anything like a change request here for http semantics.

vrubleg commented 5 years ago

If that isn't desired, the way forward is to mint a new scheme (httpq, if you like) which has a different indicator of authority.

But we don't use a different scheme for IPv6. We still use https, and we don't provide ports for IPv4 and IPv6 separately. We assume that the server listens on the same port on both IPv4 and IPv6. The same thing applies to UDP and TCP ports. If some TCP port is used by an HTTPS server, the same UDP port should either be reserved or be listened on for HTTP/3 connections by the same server. This requirement could be written into the standard.

vrubleg commented 5 years ago

As an option, we could also provide a way to indicate in the URI which protocol a client should use. There is already an example of such a scheme family registered by IANA (coap, coap+tcp, and coap+ws). We could use something like https+tcp:// and https+udp:// (or https+quic://). But it should definitely be just an option for very rare cases.

igorlord commented 5 years ago

@vrubleg,

But we don't use a different scheme for IPv6.

This has been discussed higher in the thread. In short, with IPv4/IPv6 there is a clear delegation from the authority -- DNS A and AAAA records.

vrubleg commented 5 years ago

The DNS server doesn't know which port and protocol (TCP or UDP) a client is going to use. It is just a sane assumption that the client can use any of the provided IP addresses regardless of which protocol and port it is going to use. We assume that all the IP addresses have the same ports open and listened on by the same servers. But it is just an assumption; the DNS server doesn't guarantee it.

A URL like https://example.com:888/ tells us that port 888 should be used, but it doesn't tell us whether that means TCP or UDP. The https scheme currently is TCP only, but it could be extended to mean both TCP and UDP (QUIC).

igorlord commented 5 years ago

DNS is an explicit signal from the domain name owner to treat all A and AAAA addresses as equally authoritative for the domain name.

https scheme currently is TCP

Exactly. There is NO standard delegation from https to UDP right now, except for Alt-Svc.

The scheme could be extended, and that would be an important step. There should also be operational considerations in addition to / during the standardisation process. The reality of the deployed network is always an important consideration when reviewing IETF standards and when subsequently implementing them, or you risk a degraded user experience. So, I just want to repeat:

Imagine an operator who would like to deploy an experimental QUIC implementation, which may contain bugs or have poor performance. Only users who "opt in" are offered an Alt-Svc delegation. If a browser would just "Happy Eyeball" QUIC, users who never opted in would also start connecting using QUIC (the server is not able to reject their 1-RTT handshake, since it would not know which user is connecting).
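For concreteness, the explicit delegation being discussed is the ordinary Alt-Svc response header, which a server can choose to send only to opted-in users, along the lines of:

    Alt-Svc: h3=":443"; ma=86400

(The exact ALPN token depends on the HTTP/3 draft or version in use.)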

royfielding commented 5 years ago

I had a brief, jetlag-addled discussion with Martin about this before the first session in Prague. He pointed out (IIRC, in my post-dream somewhat more coherent 6am state) that the determination of authority for TLS is already assuming an attacker can alter the streams so that both IP and TCP address/port assignments cannot be relied on for authority, so we cannot be relying on port ownership to determine authority.

So, after some sleep, I think that means what we are relying on in https is that the certificate issuer verified that, at some point in the past, the certificate owner had control over the https service for the specified domain (specifically the service at TCP:443, right? Not a specific port?) and after that point (until expiration) the ability to decrypt communication encrypted using that certificate is the only test for authority that matters. IOW, we are relying on that ability to decrypt to carry forward our notion of authority rather than any further association or path or port of TCP/IP.

I am not sure where that leaves self-signed certificates, but I am pretty sure that doesn't matter.

Ultimately, we already need to update RFC2818 (or import the rest of it into Semantics) and properly explain how the existing authority mechanism really works so that we can then properly define that https means "HTTP secured by encrypted channel to anywhere that can be decrypted by the authority that at some point controlled https/TCP:443". We have to do that because it is necessary to properly define what https is relying on in terms of establishing authority.

QUIC (using TLS) will likewise assume that the IP/TCP/port of the current exchange is meaningless for establishing authority because the recipient can only decrypt the exchange if it has the private key owned by an authority that once had control over https/TCP:443 according to whatever presumably trusted CA signed the certificate that is being used in that exchange, and that's good enough because the certificate matches the https URL's authority (regardless of transfer protocol or port).

And I can actually write that in Semantics without losing my lunch or being attacked by hordes of TLSWG attendees?

Or am I still confused?

royfielding commented 5 years ago

Er, that was oversimplified.

What we are relying on in https is that there exists some chain of trust (possibly configured by exception) that verifies the certificate provided when communicating with the destination port is a certificate controlled by the domain owner identified by the authority component in the https URL. We are still stuck with possibly different certificates being provided by different (trans)ports.

Hence, the authority is the domain only, the port impacts which certificate is chosen assuming that the certificate matches that domain, and the client trusts that match according to some other RFC we can link to (I hope). Once verified and until expiration, the ability to decrypt communication encrypted using that certificate is the only test for authority that matters. IOW, we are relying on that ability to decrypt to carry forward our notion of authority for that specific connection because some chain of trust says the certificate matches the https URL authority, and possession of the private key is sufficient proof to make it "secure".

We need to update RFC2818 (or import the rest of it into Semantics) and properly define that https means "HTTP secured by encrypted connection using a certificate selected by that connection that is trusted as belonging to the authority identified in the https URL".

QUIC (using TLS) will likewise assume that the IP/TCP/port of the current exchange is meaningful for selecting which certificate to use, but only the domain matters for authority. This implies that we still need a formal convention that HTTP services provided via UDP port N must not differ semantically from HTTP services provided via TCP port N. We don't need to worry about them being controlled by different owners since that would imply one of them won't be able to decrypt.

igorlord commented 5 years ago

@royfielding, I think what you are saying is that as long as the client receives a certificate covering the domain name it has been looking for, it is ok to consider this endpoint "secure". That's true, but is the client authorized to access the endpoint it has just accessed?

Note that absent from the statement above is anything linking the IP address you accessed with the domain name -- are you authorized to access (or even try to access) any random IP address on the internet? Or, are you authorized to access TCP 22 for www.example.com in any way via any protocol? It cannot be that just because some machine/port on the internet can be accessed that anyone is authorized to access it.

There needs to be some authorization from the domain name's owner. So the key question for this conversation is whether publishing a DNS resolution for a name is an authorization to access the endpoint's UDP port.

royfielding commented 5 years ago

I don't think there is such a thing as being authorized to access a port. What happens now is that a URI reference is received that leads a client to believe it can access something there. However, anyone could have minted that URI, so it isn't the case that having a URI implies it was provided by the owner.

royfielding commented 5 years ago

IOW, no one is authorized to access a port, but access may be denied by the service for any reason. We rely on it being "socially acceptable" for random clients on the Internet to attempt access to TCP ports 80/443 and hosts are expected to manage such access. What we would be adding (eventually) is another socially acceptable port for random clients to access and hosts are expected to manage such access (even if that simply means blocking access by default).

igorlord commented 5 years ago

I do not think "socially acceptable" is the guidance here. The guidance is the reasonable expectation of server operators. There are standards documents that explain what reasonable expectations should be, and deployments try to take into account what the actual expectations of server operators are.

I am not sure what "anyone can mint a URI" is trying to clarify. Anyone can mint a URI like "http://www.example.com:22", so what? Anyone can trick someone else into doing something inappropriate. Does it mean that it is ok to do something inappropriate?

martinthomson commented 5 years ago

I think that there are both client and server considerations here, but the ones we are most concerned about are the decisions a client makes.

A client determines whether a given server is authoritative for a URL based on the ability of the server to use the private key associated with a certificate that the client considers to be trustworthy. If the server presents a certificate along with proof that it controls the key, then the client will accept the authority of the server.

In HTTP/1.1 and earlier, the only URLs for which the client will assign authority are those that contain a domain name matching the name it opened the connection for. The server_name extension in TLS carries this name, if it is present.

In HTTP/2, the client will assign authority to all names that are present in the certificate. However, a client will only do that if it concludes that it could open a connection to the same server for that URL. In practice, that means that the client will make a DNS query and see that it contains the same server IP address. RFC 8336 (ORIGIN) removes this restriction if the server sends an ORIGIN frame.

The role of the server is still important in this. A server has to be willing to serve the request that it receives. This is relevant for several reasons. In the case that a network attacker causes connections for port N to be received at port Q, checking the Host header field is necessary to ensure that the attacker can't cause https://example.com:N/foo to be replaced by https://example.com:Q/foo.

This also extends to names. Though a server might be authoritative for a name, it might be unwilling to serve content. This is what we saw when several providers disabled "domain fronting", a practice where clients would connect to "example.com" and then send queries for resources on "censored.example". Amazon and Google demonstrated that they were unwilling to accept the costs of this practice, because those costs included having their entire service blocked in some jurisdictions.

Looking at the definition of Host in the latest drafts, I don't see any text requiring that the server check that it is willing to serve for that authority (including port). Maybe that's worth a separate issue.
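A minimal sketch of that server-side check, assuming a configured set of origins the server is willing to speak for (the names and helper below are invented for the example):

    # Origins (scheme, host, port) this server is configured to serve.
    CONFIGURED_ORIGINS = {
        ("https", "example.com", 443),
        ("https", "www.example.com", 443),
    }

    def willing_to_serve(scheme: str, authority: str, default_port: int = 443) -> bool:
        # `authority` is the value of Host / :authority, e.g. "example.com:8443".
        host, _, port = authority.partition(":")
        port_number = int(port) if port else default_port
        return (scheme, host.lower(), port_number) in CONFIGURED_ORIGINS

    # A request whose Host/:authority names port N must be refused if this server
    # only speaks for port Q; otherwise an on-path attacker who redirects packets
    # could cause https://example.com:N/foo to be answered by the port-Q service.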

MikeBishop commented 5 years ago

Echoing Martin's point, I think the most compelling comment I heard in Prague on this issue is that it decomposes into two different issues. Fundamentally, we're discussing the difference between using URLs and URIs.

HTTP is capable of transporting a full URI which might not reflect the host/port of the underlying connection. We already leverage this in 8164 by sending http-schemed requests to a potentially-different port using TLS, which would be directly addressed by a different scheme. The server is expected to know how to parse the requested URI independently of the port on which it arrived; HTTP/1.1 uses (extensively) request forms in which pieces of the URI are implied by the properties of the port over which the request was received, but HTTP/2+ removes this behavior.

(Note that 8164 requires that clients validate that servers claim the ability to do this distinct processing before leaning on it too heavily. We might follow this precedent, or we might simply make this mandatory for HTTP/3 implementations.)

So a server receiving a client's request for an https-schemed (or even http-schemed) resource via HTTP/3 has to decide whether it possesses that particular resource and respond appropriately. It doesn't need to care how the request came to arrive over that particular connection.

The client, on the other hand, has to be able to take a URI and come up with candidate ports/protocols over which it hopes to connect to a server which is able to provide the identified resource. Initially, this was a set consisting of all the IP addresses returned by DNS, each paired with the port and protocol indicated by the URI. Alt-Svc permits adding additional elements to this set to be attempted, but conditions the "success" criteria for a connection as discovering a trusted certificate. We're discussing adding additional ports over which a client can attempt a connection, keeping that same condition.
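As a sketch of that client-side candidate set (the speculative UDP candidate on the URI's port is exactly the proposal under discussion here, not current standard behaviour):

    import socket
    from urllib.parse import urlsplit

    def candidate_endpoints(uri: str, alt_svc=()):
        # Turn an https URI into (protocol, address, port) candidates to attempt.
        parts = urlsplit(uri)
        assert parts.scheme == "https"
        port = parts.port or 443
        candidates = []
        for *_, sockaddr in socket.getaddrinfo(parts.hostname, port, type=socket.SOCK_STREAM):
            candidates.append(("tcp", sockaddr[0], port))   # HTTP/1.1 or h2 over TLS
            candidates.append(("udp", sockaddr[0], port))   # speculative HTTP/3 attempt
        for host, alt_port in alt_svc:                      # endpoints learned via Alt-Svc
            candidates.append(("udp", host or parts.hostname, alt_port))
        # Whichever candidate is tried, "success" still requires the server to
        # present a certificate the client trusts for parts.hostname.
        return candidates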

mnot commented 5 years ago

So, remembering that this is a core issue, what changes do we want to see in the core document? Scanning the above, I see proposals to:

Those both seem like separate issues from this one, and note that we also have #143 about referencing / updating for alternative services already. If that's the case, can we open those and close this issue?

awwright commented 5 years ago

I've found that Web browser vendors are somewhat hostile to requesting any additional DNS records because it might add a few milliseconds of time to requests. e.g. https://bugs.chromium.org/p/chromium/issues/detail?id=22423

Isn't the proper solution a new URI scheme? web://authority/ — where connection parameters are specified in TLSA, SRV, and similar records.
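For context, an SRV-style delegation for such a scheme (using the existing SRV record syntax, with a service label invented for this illustration) could look something like:

    ;; priority weight port target -- "_web" is a made-up service label.
    _web._udp.example.com.  3600  IN  SRV  10 5 443 server.example.com.
    _web._tcp.example.com.  3600  IN  SRV  20 5 443 server.example.com.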

awwright commented 5 years ago

But we don't use a different scheme for IPv6. We still use https, and we don't provide ports for IPv4 and IPv6 separately.

While the scheme identifies the protocol being used (e.g. http: means HTTP and https: means TLS), it does not identify the transport mechanism. The IP address and even version may change because transport parameters are specified in the authority component (through DNS records, if a hostname is used).

A new URI scheme could define such a method of negotiating the protocol and security parameters (the same way the transport is), that is forward compatible with any current or future protocol implementing HTTP semantics.

Then I imagine Alt-svc (or a similar feature) could advertise a preference to use this new scheme, over other ones, for clients to switch over to.

vrubleg commented 5 years ago

A new URI scheme is not a good solution because it would delay adoption of QUIC for years, and it would be painful. Just imagine how much existing software won't accept or recognize such links.

A new HTTP-specific DNS record could be a good solution because it could also advertise that a website supports HTTPS and prefers it; when a browser opens an http:// link, it could then upgrade to https:// automatically, without any unencrypted requests to the website at all. The same record could also indicate that an HTTP/3 (over QUIC) connection is available.

awwright commented 5 years ago

@vrubleg Is the assumption that servers will never implement QUIC by itself, and always provide an HTTP fallback?

vrubleg commented 5 years ago

When I say "existing software won't accept or recognize such links", I mean software like message boards, IM clients, all the places where you could post a link. Software like web crawlers will also have issues with new links. Software that works with the web won't be able to recognize the new links until support is added by its authors.

For what it's worth, https has been here for ages, but I still encounter old message boards which don't recognize https:// links. Just imagine how much of this kind of inconvenience another HTTP scheme would introduce. And it is still HTTP, just another version of it.

As for HTTP servers: yes, I believe it is wise to always provide a fallback to HTTP/2, and to HTTP/1 if required. Some lightweight clients could decide to use just HTTP/1, for example if they only want to download a file (wget, curl) and don't need all the nice features of HTTP/3.

awwright commented 5 years ago

There is a sense in which QUIC is HTTP, in that it is semantically compatible. But there is an important sense in which it is not HTTP, because the http: and https: schemes have no mechanism to specify connecting using a different protocol; it is an entirely separate protocol.

I ask the question because it seems to answer the question posed by the issue:

If the answer is yes, and you implement QUIC alongside an HTTP server (e.g. HTTP over TLS), then Alt-Svc (or a similar mechanism) will suffice. In this case, HTTP over TLS remains the authoritative protocol for that URI space, and the server is merely offering clients an alternate protocol to use if they so wish.

If the answer is no, QUIC will be implemented by itself, then there is no issue with implementing a new URI scheme. Clients won't understand the QUIC server, so the fact they don't understand the URI scheme doesn't change things.

awwright commented 5 years ago

@ExE-Boss A thumbs down by itself is hopelessly opaque; help me out here.

ExE-Boss commented 5 years ago

The thing is, http:// and https:// only specify the Application layer protocol and whether the connection is encrypted, which QUIC is.

TCP+TLS and QUIC are Transport layer protocols, which sit below the Application layer, and so they’re unaffected by the scheme.

awwright commented 5 years ago

@ExE-Boss Following that definition, the course of action seems to be to have a QUIC record type in DNS that's functionally the same as A and AAAA. If you get an A record back, you connect TLS over IPv4; AAAA, ditto IPv6; and if you get a QUIC record, you connect over, well, QUIC. And in theory you could have a "tri-stack" server (the same way we have dual-stack IPv4 and IPv6).
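Sketching that as a zone snippet (the QUIC record type here is hypothetical; it does not exist):

    www.example.com.  300  IN  A     192.0.2.10
    www.example.com.  300  IN  AAAA  2001:db8::10
    www.example.com.  300  IN  QUIC  192.0.2.10   ; invented record type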

Does this sound right? It seems weird to me because I recall AAAA records taking a LONG time to take off, I don't know whether they'd be adopted by user agents (see my link above), and it doesn't actually seem to me that QUIC is really the same thing as a transport like TCP (or TLS, for that matter, which is essentially TCP with privacy and authentication).

ExE-Boss commented 5 years ago

The thing about AAAA records taking so long is that IPv6 required Tier 1 networks to upgrade their internet infrastructure so that their routers could understand IPv6 packets.

Deploying new or upgrading existing Transport or Application layer protocols is a lot easier as they don’t depend on Tier 1 networks upgrading their infrastructure.

awwright commented 5 years ago

@ExE-Boss No, I mean actual Web browsers not bothering to look up AAAA records, even where IPv6 was supported by the user's ISP and the origin server. I may be mistaken, I'm not sure how much of this was the operating system, and how much was the user agent.