Closed: jcjones closed this issue 4 years ago
Thanks for raising this. I think this will help with https://blog.mozilla.org/security/2018/01/15/secure-contexts-everywhere/ in general, and in particular with easing the pain for developers, as testing locally would become a little easier and more secure (running the browser with certificate checking disabled, or some such, isn't great).
So to be clear, we already treat localhost as secure context. What we don't do is force localhost to resolve to a loopback address.
So if I stick "some-random-ip localhost" in my /etc/hosts and load http://localhost we will treat it as a secure context even though the data comes from some-random-ip.
For normal developers using localhost everything works. For people who map localhost to somewhere non-loopback, we end up treating it as a secure context, which is arguably incorrect and problematic. That's the real issue involved here, not what the summary of this issue says.
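To make the resolver dependence concrete, here is a minimal Python sketch (illustrative only; the function names are mine, not from any browser code). socket.getaddrinfo goes through the system's name resolution path, including /etc/hosts, so an entry like "some-random-ip localhost" changes what it returns, even though the secure-context check keys off the name alone:

```python
import ipaddress
import socket

def is_loopback(addr: str) -> bool:
    """True if addr is a loopback address (127.0.0.0/8 or ::1)."""
    return ipaddress.ip_address(addr).is_loopback

def resolved_addresses(host: str, port: int = 80) -> list:
    """Resolve host via the system resolver (which honours /etc/hosts)."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return [info[4][0] for info in infos]

if __name__ == "__main__":
    # With a stock hosts file this prints only loopback addresses; an
    # /etc/hosts override like "192.0.2.7 localhost" would change that,
    # while http://localhost is still labelled a secure context.
    for addr in resolved_addresses("localhost"):
        print(addr, is_loopback(addr))
```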
From my perspective, the primary issue is that treating localhost as secure hasn't gained consensus and been implemented by all. Forcing localhost to resolve to a loopback address removes an argument against, so should help get this deployed by other browsers. We need that so that application developers who use a browser to admin software installed on the local machine can be trained to stop injecting certs into the OS/browser root store and shipping the private keys with their software so they can use HTTPS without errors.
Adding @ddragana as well.
I agree that:
There are various ways out, such as this proposal, or maybe something like doing the special local treatment only for 127.0.0.1 and not for localhost. This one seems nicer for users, though.
Is it practical? (And practical enough that we'd prioritize it?)
We have to be very careful about this. The point of [SecureContext] is to gate features. To that end, what we are doing today is ok.
The special carve-out in secure contexts is ok, good even. I am also comfortable to some extent with the /etc/hosts 'bug' as a loophole in that, but agree with the spec that fixing the mapping is the right outcome. Having "localhost" mean 127.0.0.1 and ::1 matches expectations.
The problem happens when we start to treat localhost as being authenticated. Just because the server is running on the same machine, that doesn't mean that we can trust it more, or that we should persist information in the same way we would for a server with a stable identity. http://localhost is not an authenticated origin, and we can't treat it as one.
Giving license to treat localhost origins as being equivalent to HTTPS origins is dangerous. We can't assume that access to TCP ports is consistent over time. We should treat other applications on the same machine with appropriate caution. That means insisting that they authenticate.
You might ask how, which is a longer discussion, and I am writing this on my phone.
So I am going to say that this is good, right up to the point where it says...
If application software wishes to make security decisions based upon the assumption that localhost names resolve to loopback addresses (e.g. if it wishes to ensure that a context meets the requirements laid out in [SECURE-CONTEXTS]), then it MUST directly translate localhost names to a loopback address, and MUST NOT rely upon name resolution APIs to do so.
Not because the example given is ok, but because it implies a generalization of the concept that is dangerous.
To be clear, I think that the implications of this spec as implemented are fine, and we might be able to contain any negative consequences. However, the spec implies - strongly - a security posture that could be harmful if we don't remain sufficiently aware of the limitations.
(I realize that this is feedback that I should have given Mike a long time ago, but it only just came to me. Up until now I was mostly ok with the spec.)
p.s., The most recent spec is https://tools.ietf.org/html/draft-ietf-dnsop-let-localhost-be-localhost-02 and you can erase the "-02" for the evergreen reference.
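For concreteness, the MUST quoted above amounts to short-circuiting localhost names before any resolver call ever happens. A minimal sketch in Python (hypothetical function names; it covers both localhost and *.localhost, as the draft does):

```python
def is_localhost_name(host: str) -> bool:
    """True for 'localhost' and any '*.localhost' name, case-insensitive
    and ignoring a trailing dot, per the let-localhost-be-localhost draft."""
    name = host.lower().rstrip(".")
    return name == "localhost" or name.endswith(".localhost")

def resolve_for_security(host: str) -> list:
    """Translate localhost names directly to loopback addresses, never
    consulting the resolver (so /etc/hosts overrides cannot interfere)."""
    if is_localhost_name(host):
        return ["127.0.0.1", "::1"]
    # Anything else would go through normal name resolution.
    raise NotImplementedError("non-localhost names use the normal resolver")
```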
Fundamentally, http://localhost/foo is about like file://foo in terms of whether you can trust it, right? Both are "secure context" in the [SecureContext] sense, but I agree that actual security decisions should not be made based on this...
Arguably, http://localhost/foo is worse than file://foo, in that the ownership relationship with the file is more concrete. That's the key realization I had.
I agree, both meet the bar for [SecureContext], but they might not for something stronger. I think that the let-localhost-be-localhost spec and secure contexts have this mostly right, but the overwhelming impression they give is that http://localhost is somehow as good as https://*. I think that were this to be addressed - which should be easy - this would be a fine thing. Without that, I'd be inclined to put a harmful label on, which would be unfair.
To that point, we do use [SecureContext] plus user consent for a number of persisted decisions, e.g., persistent storage. To me what you're saying almost reads like we'd have to specially consider localhost whenever we mark something [SecureContext], which wouldn't be great (and goes counter to the reason for having it be [SecureContext] to begin with).
On the point of not being able to trust loopback: how much should we be interested in protecting against a local attacker on the user's machine? Wouldn't the risk of local software the user can't trust compromise the user-agent anyway?
If we decide to not prioritise this, should we reverse the decision to treat localhost as a SecureContext, or will that be a bigger web compat issue?
I also think we should align the code that decides what a trustworthy loopback address is for the Mixed Content blocker, Upgrade Insecure and the URL bar code.
Worst case, treat localhost as SecureContext if devtools is open? However, that might make it even more difficult to develop and test against Firefox (e.g., in CI environments), increasing web compat pain.
How much should we be interested in protecting against a local attacker on the user's machine?
Some computing platforms (iOS and Android in particular) assume mutual suspicion between applications running on the same machine. We should too.
I would suggest that the answer we're looking for here is to not persist data for http://, but that might be unpleasant. I have other ideas that I think might address the meta-point better for local development.
The other uses of the [SecureContext] logic (like the mixed content blocker and other things) should stick to https://.
Wouldn't the risk of local software the user can't trust compromise the user-agent anyway?
Note that on a multiuser system you can have an attacker who just uses ssh port forwarding to forward localhost:80 to an arbitrary location of their choice. This does not involve any untrusted local software.
@martinthomson but that basically means a parallel "secure context" system which is "are we all https://" since presumably if http://localhost embeds https://some.example the latter shouldn't be granted those permissions either for the same reason we deny them for secure contexts? Or is it enough of an edge case somehow? (Of course with third-party isolation it'll have to jump through a lot more hoops anyway.)
Mixed Content Blocker, Upgrade Insecure and the URL bar padlock aren't related to SecureContexts currently. I personally think they should all be aligned; perhaps this is the lightweight SecureContext we are looking for?
Some computing platforms (iOS and Android in particular) assume mutual suspicion between applications running on the same machine. We should too.
Could we make this decision and tighten SecureContexts later, perhaps?
Worst case, treat localhost as SecureContext if devtools is open? However, that might make it even more difficult to develop and test against Firefox (e.g., in CI environments), increasing web compat pain.
I hope we can actually standardise something WebDriver can configure here: an allow list of SecureContext domains, along with the inverse to negate SecureContext too.
Wouldn't the risk of local software the user can't trust compromise the user-agent anyway?
Note that on a multiuser system you can have an attacker who just uses ssh port forwarding to forward localhost:80 to an arbitrary location of their choice. This does not involve any untrusted local software.
Agreed, but this is also a local attacker. We rarely design for this either.
@annevk, not entirely sure where you are going with this, but the embedding example is interesting. If an unsecured localhost embeds an https origin, then we might make capabilities available if we consider localhost to have those capabilities.
I suspect that the problem here is that we have several different things we'd hoped to pin on this secure contexts thing:
capabilities - the reason we withhold these is largely only principled. Making an API available is something I think we should be prepared to delegate to localhost in the way that is currently envisaged in the secure contexts spec. @jonathanKingston's mention of upgrade insecure is worth thinking about here. I see upgrade insecure as a capability we provide the site, so it would fit into this first class.
persistence - this relates to the state we maintain. These are more dangerous because they rely on continuity of identity. I think that our ideal end state effectively grants a new principal to every http:// load, including localhost. But that would break lots of things if we applied it to things like cookies. But we could withhold access to new APIs with persistence aspects.
indicia - as @jonathanKingston observes, we need to make decisions about whether we show a padlock. Here, this relates to the browser's security posture toward the site. To that end, I don't see us engaging mixed content blocking.
That complicates things. Obviously it would be nice to have a simple hook on which we can pin a bunch of stuff. For me, that's still https://. That is why I'm a big fan of trying to make https:// work for localhost. I also want to enable development on private networks, where this doesn't help at all. To @marcoscaceres' comment about devtools, I would be OK with having some method to enable capabilities for any site in devtools provided it wasn't the default.
Now, this is a great discussion, but it sounds like we're discussing Secure Contexts, not this particular spec. Not surprising. Is anyone uncomfortable deferring this until we've had a discussion with Mike about these concerns?
@martinthomson -- I'm not sure I follow the suggestion to defer. I understand the issues you enumerate are complex, but they seem only somewhat related to the question of whether we support DNS libraries hardcoding localhost to always resolve to a loopback address (which appears to be the question on the table). I think we should be able to select either non-harmful or important, depending on how concerned we are about the attacks that can arise from other bindings. I'm largely of the opinion that the current situation is very difficult to exploit, and so would vote for non-harmful, but can understand if a stronger position is desired.
The remaining issues raised in this thread are definitely worth exploring, but they don't seem to bear on our standards position, at least not on this specific topic.
@dveditz -- do you have any thoughts here?
Good point, Adam. I don't think that this needs to be important, but non-harmful or even worth-prototyping would be fine with me.
Concretely, I propose non-harmful, with a detail of:
The proposal, to the extent it applies to browsers, is to hardcode localhost to always resolve to a loopback address instead of invoking the resolver library to perform such translation. While the problem being addressed is difficult to exploit and does not appear to be in use in the wild, the solution is benign and does what it sets out to do.
I'd like to hear from at least @bzbarsky, @dveditz, and @martinthomson on this proposal before I push it out.
WFM
I am enthusiastically in support of the browser mapping literal "localhost" to a loopback address. Mapping "*.localhost" is probably non-harmful, but I'd want to be more careful about breaking things before implementing it.
OK, I've paged this in now.
Given what I said in https://github.com/mozilla/standards-positions/issues/121#issuecomment-446697954, where we already treat localhost as [SecureContext] even without implementing the hardcoding this issue is about, I feel that doing the hardcoding may be important in at least implementation terms. In terms of the spec, I think worth prototyping is the right thing, probably.
I don't know how we currently treat "*.localhost" for secure contexts purposes and am not sure what the standards position on that should be...
For reference: Section 6.3 of RFC 6761 defines *.localhost as resolving to the loopback address, so we would have standards cover for a change there.
Of course, the relevant question would be "what does this break?" And I don't have any way to answer that short of shipping a test to lots of people to see if we ever get anything other than 127.0.0.1 or ::1.
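Such a probe could be a thin wrapper around the system resolver that reports any non-loopback answers (a sketch with hypothetical names, not actual telemetry code):

```python
import ipaddress
import socket

def filter_non_loopback(addrs):
    """Return the subset of addrs that is not a loopback address."""
    # split("%") drops any IPv6 zone id the resolver may append.
    return [a for a in addrs
            if not ipaddress.ip_address(a.split("%")[0]).is_loopback]

def probe(host: str, port: int = 80):
    """Resolve host and report any non-loopback answers. An empty list
    means either all answers were loopback or resolution failed."""
    try:
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return []  # e.g. NXDOMAIN for foo.localhost on many resolvers
    return filter_non_loopback(info[4][0] for info in infos)
```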
@bzbarsky I think you omitted a word.
@martinthomson per https://chromium-review.googlesource.com/c/chromium/src/+/598068/ Chrome has been doing this for over two years. I would expect Firefox to be fine.
(If we wanted to standardize it for web user agents we could maybe put it in Fetch given that the draft in OP has stalled, but maybe it's not needed.)
@bzbarsky I think you omitted a word.
Hmm. I don't know what happened there. Fixed.
With the new input, it looks like we're leaning towards worth prototyping. I propose a detail of:
The proposal, to the extent it applies to browsers, is to hardcode localhost to always resolve to a loopback address instead of invoking the resolver library to perform such translation. Since browsers (including Firefox) treat files hosted on localhost as more privileged than remote content, this proposal seems to be a good belt-and-suspenders approach to prevent certain exploits.
Request for Mozilla Position on an Emerging Web Specification
Other information
Post-mcmanus, this might need a re-ping to drive it forward, and it might be good to take a stance on.