stellar / stellar-protocol

Developer discussion about possible changes to the protocol.

SEP-0024: Requires "https" but does not specify trust anchors #853

Closed jbash closed 3 years ago

jbash commented 3 years ago

What version are you using?

1.2.2 of 2021-01-19 at commit fbac049, master branch as of 2021-01-28.

What did you do?

Read the document

What did you expect to see?

A complete and unambiguous description of which trust anchors are acceptable for authenticating TLS endpoints for retrieving federation information and similar metadata (and, equivalently, of which trust anchors I as a server operator must get to sign my keys).

Something of the form: TLS ("HTTPS") clients MUST authenticate servers' DNS names. When DNSSEC-protected TLSA records are published in the public DNS name space, clients MUST honor those records as described in RFCs 6698, 7671, and 7673. Clients MUST NOT use alternate DNS roots for this purpose. If no TLSA record is available for a name, the client MUST verify that the presented X.509 certificate matches its claimed domain name, and that the presented certificate is issued by an authority which follows CA/Browser Forum requirements and is listed in the trusted root CA list published by mozilla.org. Clients MUST NOT require servers presenting DNSSEC-validated TLSA records to use CAs in the Mozilla list.
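As a concrete illustration (mine, not part of the SEP), the non-TLSA half of that wording corresponds to the defaults of a stock TLS stack. A minimal Python sketch:

```python
import ssl

# A default context already enforces both checks the proposed wording
# requires for the non-TLSA case: the certificate chain must lead to a
# trusted root, and the leaf certificate must match the requested name.
ctx = ssl.create_default_context()

assert ctx.verify_mode == ssl.CERT_REQUIRED  # chain must validate to a trusted CA
assert ctx.check_hostname is True            # name in the cert must match the host

# DANE/TLSA validation (RFCs 6698/7671/7673) is NOT provided by the
# stdlib; it would require a DNSSEC-validating resolver plus explicit
# comparison of the presented certificate against the TLSA record.
```

In other words, a spec like the above mostly amounts to "do not weaken these defaults," plus an extra rule for the DANE case.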

I might also have expected to see guidance on using and authenticating things like Tor hidden services, data files hosted in IPFS, and the like.

What did you see instead?

"Wallets and anchors should refuse to interact with any insecure HTTP endpoints."

leighmcculloch commented 3 years ago

The use of HTTPS, certificate authority verification, and the surrounding technologies are best practices that make up one of many parts of building secure products for online consumption, or products that consume online services. The SEP is focused on interoperability, and for the most part not defining the exact certificate authorities does not impact interoperability, because many applications use system certificate authorities rather than managing a list themselves, just like other applications on the device. I don't think the details of these things are particularly in the domain of SEP-24 to define. If you disagree, could you elaborate and provide some examples?

The comments regarding Tor and IPFS are interesting. Could you elaborate on what you'd like to see in SEP-24 for these use cases?

cc @tomerweller @JakeUrban

jbash commented 3 years ago

Certificate authorities do in fact affect interoperability, because if the server chooses to get a certificate from an authority that the client doesn't trust, then the two will not interoperate. Since the goal here is presumably to have public services that may get connections from any client, there has to be some specification of which certificate authorities are to be trusted. The same applies to things like protocol versions and accepted cipher suites.

There's a misconception that the trust root list is a local decision on the client side, but the fact is that the only reason encrypted HTTP works most of the time is global coordination: the browsers coordinate to use essentially the same CA list, the OSes pick up that same list, and server operators know to use CAs from that list. If you actually make any significant local changes to your trust list, you quickly find that things break all over the place, and you're forced to either accept such failures, or trust CAs you'd rather not be trusting. Almost everybody ends up just trusting the standard list, maybe with the addition of a local trust root (which can have its own bad effects and which a system like Stellar would probably like to discourage). An instruction to use the Mozilla list is essentially an instruction to follow standard practice.

By the way, unless you're a giant well-known Web site with a large staff doing heavy monitoring, you actually get pretty limited assurance out of the "consensus" CA system, because there are so many CAs that it's usually possible to subvert one of them if you try hard enough. That's one reason for the suggestion to require DNSSEC and DANE.

Beyond the question of mismatched expectations about which CAs to trust, once you get outside of the browser, it's really common for application developers to get the whole thing wrong and do things like accepting self-signed certificates. Using vague phrases like "insecure HTTP endpoints" more or less invites them to make mistakes like that. Let's just say that last time I worked for a major vendor that you'd probably hold up as an example of the kind of organization that wouldn't make that mistake, I had to write standards like what I propose (actually much more detailed)... and then we still had to come down on many developers who ignored those standards, and used the wrong trust roots or no trust roots at all. A lot of programmers just can't get their heads around the idea that an encrypted connection can't protect you if you don't authenticate who's on the other end of it.
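To make the failure mode concrete (my sketch, not from the SEP): the mistake described above usually looks like a developer explicitly switching verification off to silence certificate errors, which leaves the connection encrypted but unauthenticated:

```python
import ssl

# The common mistake: an "encrypted" connection with no authentication.
# (check_hostname must be disabled first, or CERT_NONE raises ValueError.)
bad = ssl.create_default_context()
bad.check_hostname = False
bad.verify_mode = ssl.CERT_NONE  # accepts ANY certificate, including self-signed

# A socket wrapped with this context encrypts its traffic, but to an
# unverified peer: a man-in-the-middle can present any key it likes
# and read or modify everything. This is exactly what vague wording
# like "insecure HTTP endpoints" fails to rule out.
```

A spec that spells out the required checks gives reviewers something to point at when they find code like this.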

I don't know what "the domain of SEP-24" is, but I do know that if you don't give specific guidance on these issues, you'll surely have a lot of applications out there promiscuously accepting any unauthenticated peer, and then claiming that the connection is "secure" because it's encrypted. Especially any application that's not embedded in a browser. If it's important enough for you to address the question of encryption at all, then I think it's important enough for you to say what it means to get it right.

As for Tor and IPFS, the basic idea is that you'd like to get the features of those systems, like independence from the centralized control of the DNS name space, automatically distributed data storage, anonymity, or whatever. But if I list "abcdef1234567890abcdef1234567890.onion" as my home server, approximately no Stellar-related software will be able to connect to it, and most applications will probably choke with very confusing error messages. As for IPFS, I don't know if it's even possible to express "you should look for my stellar.toml file at ipfs://blahblahblahblash", since it looks like some fields are just host names. Even for fields that are URLs, there's no guarantee that any given client implements any given URL scheme.

A client could add support for Tor or IPFS, and a server could at least publish a ".onion" host name... but neither of them can do that unilaterally and expect to actually interoperate. There's no point in publishing the name if I know that no clients will be able to use it, and there's no point in making my client support a given type of name if I don't know that any servers will use it. So there's effectively no way to get to the point where any user or developer can rely on things like that working.

There's not even any obvious Schelling point; I might put a ".onion" name in my data, while the client developer puts in a bunch of work to support ".eth" or ".i2p" instead. For that matter, I might use a ".eth" name that points into IPFS, whereas the client developer might only support ".eth" names that point back to "normal" Web servers. And what about SAFE or Zeronet or Freenet or GNUnet or whatever? People who make different choices won't interoperate.

The only way to solve a problem like that is to have whoever sets the protocol standards give some kind of guidance about what should be supported now, and about where things are expected to go in the future. That's a very big task. It usually takes years to figure out what you want people to support, write the standards for how to support it, somehow cajole people to add the support to their software, and then actually get people to use it. Even if there's a well-defined standard, there's no advantage for anybody to be the first mover, because that person will have nobody else to interoperate with. I've dealt with writers of basic infrastructure software saying "We're not going to implement X because we don't see any adoption of it"... when it would be effectively impossible for anybody to adopt X until those very people's software supported it.

So if you ever want "decentralized Internet" stuff to work, you have to start early, both so you can get it into more "ground floor" implementations and so you have time for it to percolate into use before the need becomes desperate. Look at the example of the IETF: a lot of bad things got frozen in stone when Internet adoption exploded in the 1990s. As an example, even with everybody from the US Government to Google beating the drum, and a serious address shortage obviously looming, it's taken almost 30 years to get any real adoption of IPv6. There were other things that people would have liked to change, and that would truly have improved the Internet, that just plain never happened. The relationship of IPv4 to IPv6 in the Internet isn't that different from the relationship of, say, HTTP to IPFS as ways of distributing Stellar metadata.

github-actions[bot] commented 3 years ago

This issue is stale because it has been open for 30 days with no activity. It will be closed in 30 days unless the stale label is removed, or a comment is posted.

jbash commented 3 years ago

Auto-closing issues isn't very confidence-inspiring for a financial application...

github-actions[bot] commented 3 years ago

This issue is stale because it has been open for 30 days with no activity. It will be closed in 30 days unless the stale label is removed.