martinthomson opened this issue 2 years ago
> (i.e. we only end up fetching the file from .well-known and the config URL)
Okay, so the attacker-supplied URL is always fetched?
> Okay, so the attacker-supplied URL is always fetched?
Yes.
Both the attacker-supplied URL (the configURL) and the `.well-known` file are always fetched.
Thanks Sam!
As alluded to above I can see allowing `no-cors` fetches to `.well-known` URLs, although there too I think there's a big risk: as we do more of that we might forget about the security lessons we learned elsewhere. E.g., if those fetches get attacker-controlled headers, do we ensure the header values cannot be used for attacks, or will we forget? As such, I would say that needs a much wider discussion, and perhaps it should have dedicated infrastructure in Fetch so it at least has a better chance of getting considered going forward.

Now, allowing fetches to arbitrary endpoints has those risks, but amplified, as the endpoint is even less aware of what might happen.
Those are the risks with respect to requests. With respect to responses there might also be risks, depending on the implementation strategy, and what we end up exposing over time through various side channels. (E.g., the Web Performance WG always seems keen to expose new numbers to websites and at least historically has not always respected the same-origin policy.)
From all that, sticking with the CORS sandbox seems safer.
But that introduces the problem with the accounts fetch, which is credentialed (cross-site somehow? and I guess that's okay because it's for login) and thus needs protection from random websites fetching it. One idea I had is that we use some kind of user agent origin, e.g., `about:browser`. Websites cannot spoof that. We'd reserve that origin for cases such as this, as generally using an opaque origin is preferable per the principle of least privilege.
> But that introduces the problem with the accounts fetch. Which is credentialed (cross-site somehow? and I guess that's okay because it's for login) and thus needs protection from random websites fetching it. One idea I had is that we use some kind of user agent origin, e.g., `about:browser`. Websites cannot spoof that. We'd reserve that origin for cases such as this as generally using an opaque origin is preferable per the principle of least privilege.
Ah, neat! I like this as an approach.
> As alluded to above I can see allowing `no-cors` fetches to `.well-known` URLs, although there too I think there's a big risk in that as we do more of that we might forget about the security lessons we learned elsewhere. E.g., if those fetches get attacker-controlled headers, do we ensure the header values cannot be used for attacks or will we forget? As such, I would say that needs a much wider discussion and perhaps it should have dedicated infrastructure in Fetch so it at least has a better chance of getting considered going forward. Now allowing fetches to arbitrary endpoints has those risks, but amplified, as the endpoint is even less aware as to what might happen.
Are you suggesting using `cors` mode with `origin: null` (or the `about:browser` suggestion below) for these two fetches? Is the claimed benefit that changing the fetch headers used by the API (they are not attacker-controlled) later on might mean we are introducing new attacks on servers, because the fetch would have been preflighted if we used CORS?
> Those are the risks with respect to requests. With respect to responses there might also be risks, depending on the implementation strategy, and what we end up exposing over time through various side channels. (E.g., the Web Performance WG always seems keen to expose new numbers to websites and at least historically has not always respected the same-origin policy.)
If you are thinking about Resource Timing, I don't believe these fetches would be exposed there. They need to be opaque to the site using FedCM. Since the user agent ideally needs to consume these fetches outside of the renderer process, it seems hard for Resource Timing to accidentally expose these.
> But that introduces the problem with the accounts fetch. Which is credentialed (cross-site somehow? and I guess that's okay because it's for login) and thus needs protection from random websites fetching it. One idea I had is that we use some kind of user agent origin, e.g., `about:browser`. Websites cannot spoof that. We'd reserve that origin for cases such as this as generally using an opaque origin is preferable per the principle of least privilege.
That seems OK, although it is not clear to me what benefit this introduces over `Sec-Fetch-Dest`. It might be beneficial in the spec to more easily encode how to treat these fetches, and from what I heard from @domfarolino it sounds like FedCM is not the only API trying to add user-agent-only fetches.
> Is the claimed benefit that changing the fetch headers used by the API (they are not attacker-controlled) later on might mean we are introducing new attacks on servers because the fetch would have been preflighted if we used CORS?
That's my impression of Anne's comment, but yes I'd also like clarification there.
> That seems OK, although it is not clear to me what benefit this introduces over `Sec-Fetch-Dest`.
I think with something like `about:browser` we could use CORS, and the failure mode goes from failing open to failing closed. The failure mode for `Sec-Fetch-Dest` is fail-open because the onus is on the server to opt out of the whole identity flow by checking `Sec-Fetch-Dest` and returning some bogus/null response if the value isn't `web-identity`; a server that forgets to check this is now "vulnerable". With CORS, the server has to affirmatively opt in via ACAO. That's not a good strategy for us if we have to use `null`, but if we get to use `about:browser`, then there's a benefit. But all of this only seems beneficial if we're not treating the `.well-known` file as a global opt-in for the identity flow, which seems to be the world @annevk is envisioning?
> and from what I heard from @domfarolino it sounds like FedCM is not the only API trying to add user-agent-only fetches.
Yeah, attribution reporting is currently doing something like this (https://github.com/WICG/attribution-reporting-api/pull/547) and I think ultimately we're going to have to discuss how to best use Fetch for these kinds of UA-initiated requests, and some of the trickier bits like what `client` and `window` should be, and what the full consequences of setting/not setting these things are. But that discussion should probably happen in whatwg/fetch.
> and from what I heard from @domfarolino it sounds like FedCM is not the only API trying to add user-agent-only fetches.
Somewhat orthogonal to the point that @npm1 was trying to make, but just for completeness: there is a good number of web platform APIs that use `.well-known` files. The Change Password and First-Party Sets specs come to mind (but there are probably a bunch more here). Do we expect them to also require CORS for their `.well-known` files, or do we feel there is a criterion that makes us comfortable leaving them as they are?
Perhaps more interesting than whether the `.well-known` request itself uses CORS is the fact that we're "loosening" the security characteristics of all requests that participate in the flow specified by the `.well-known` file. I'm considering:
> In that if you decide to host a resource at a .well-known location you opt-in to a different security model.
...to read as "opt-in to a different security model" (for all requests/flows related to whatever `.well-known` file we fetch). In other words, is the fact that your server has a particular `.well-known` file enough for us to say that's an opt-in to a different security model?
Regarding the actual `.well-known` fetches themselves, it's unfortunately impossible to tell what kind of requests are being made for the examples you mentioned :( The Change Password spec is pretty incomplete and doesn't ever construct a request or make a fetch, and FPS doesn't have a spec.
So I agree that using "Origin: browser" or something like that has fewer negative properties than "Origin: null", but I still don't see the advantage. I can promise you, Anne, that we will not allow the RP to specify custom headers and that we will keep the top-level domain manifest and the provider manifest as simple fetches.
I disagree that the well-known file is opting in to a different security model. It is the same model -- websites can't read responses, only the browser can.
One thing that might be interesting is adding a "safe-no-cors" mode to fetch, which does not do the CORS headers but still does the checks for safe methods/headers and fails the request when it would otherwise trigger a preflight. (On the wire it would still send "no-cors" for the mode.)
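A rough sketch of the gate such a "safe-no-cors" mode might apply, based on the Fetch spec's CORS-safelisted method and request-header rules. This is a simplification of the proposal above, not existing infrastructure: the real safelist also constrains header values more finely (length limits, byte restrictions).

```python
# Sketch of the proposed "safe-no-cors" check: allow only requests that
# could never trigger a CORS preflight, and fail everything else.
# Safelists follow the Fetch spec's CORS-safelisted definitions,
# simplified for illustration.

SAFE_METHODS = {"GET", "HEAD", "POST"}
SAFE_HEADERS = {"accept", "accept-language", "content-language", "content-type"}
SAFE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def safe_no_cors_allows(method: str, headers: dict) -> bool:
    if method.upper() not in SAFE_METHODS:
        return False
    for name, value in headers.items():
        if name.lower() not in SAFE_HEADERS:
            return False
        # Content-Type is safelisted only for three specific values.
        if name.lower() == "content-type":
            essence = value.split(";")[0].strip().lower()
            if essence not in SAFE_CONTENT_TYPES:
                return False
    return True
```

For example, `safe_no_cors_allows("GET", {"Accept": "application/json"})` passes, while a `PUT` or a custom `X-Custom` header would fail the request instead of silently proceeding without a preflight.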
> but I still don't see the advantage.
The advantage is we can use CORS and not have to figure out how this extension to the same-origin policy may or may not make sense. On top of which we'd have to support it in perpetuity and ensure it doesn't break any promises made between people in the past.
Note that "safe-no-cors" on its own is no longer enough. You also need the guarantee that the response is not exposed to the process the website runs in. Again, for perpetuity, etc. (For "no-cors" we're creating https://github.com/annevk/orb and as such it would indeed not be adequate as cross-origin JSON would be blocked at the network layer.)
Wouldn't a specific content type (like `application/json-fedcm-manifest`) for the manifest.json, enforced by the browser, completely mitigate the no-cors issue, protecting servers from receiving unexpected non-preflighted requests?
@antosart I'm not sure I understand. It wouldn't help with the manifest fetch and there's at least one other fetch that happens in parallel with the manifest fetch.
@annevk but those are uncredentialed simple requests which the RP could anyway issue via fetch without triggering CORS preflights, no?
@antosart they could, but they would hit a network error due to https://github.com/annevk/orb. In fact, the specification would also hit this network error if it did not introduce a special mode of sorts. "no-cors" isn't exactly reliable infrastructure.
So they would still hit the server, but orb would block the response? Or would orb step in and block the request too?
They would hit the server. The only risk request-wise is that specifications sometimes grow in ambition over time as already discussed above.
A few of us (@annevk, @domfarolino, @martinthomson, @bvandersloot-mozilla, @npm1, @samuelgoto, me) met today to find a consensus on this topic. This is the summary:
In terms of web-exposed changes, the only change would be that IDPs need to check the Origin header instead of Referer, and we would send a different Sec-Fetch-Mode header.
Let me know if I missed anything / got it wrong. @bvandersloot-mozilla has volunteered to draft a fetch spec PR, thank you!
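For the IDP-side change summarized above (check Origin instead of Referer), a minimal sketch of what the request check might look like. The allowlist of relying-party origins is invented for illustration; how an IDP decides which RPs to trust is up to its deployment.

```python
# Illustrative IDP-side request validation after the agreed change: trust
# the browser-set Origin header rather than Referer. The RP allowlist
# below is a made-up example, not part of any spec.

TRUSTED_RP_ORIGINS = {"https://rp.example"}  # hypothetical registered RPs

def fedcm_request_is_trusted(headers: dict) -> bool:
    # Referer can be stripped or truncated by referrer policy, whereas
    # Origin is set by the browser and cannot be spoofed by page content.
    return headers.get("Origin") in TRUSTED_RP_ORIGINS
```

A request carrying only a `Referer` header would now be rejected, which is the point of the change: the check keys off a header the page cannot control.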
That sounds right to me. Regarding preflights, I believe we concluded that FedCM fetches specifically do not require preflights.
With regards to the new mode, we should make sure that the spec will let us send first-party cookies while also sending a third-party Origin header (fyi @bvandersloot-mozilla).
I've created a first draft here: https://github.com/whatwg/fetch/pull/1533.
An update based on the breakout that happened today. We aligned on @domfarolino's proposal regarding how to treat the accounts endpoint. It will be treated as same-origin, with a null client, and using only SameSite=None cookies. We also briefly discussed the suggestion to add a new header to our CORS request and aligned against it, see the CORS issue https://github.com/fedidcg/FedCM/issues/428.
That seems like a reasonable state for this. Sorry I couldn't be at the breakout.
Sec-Fetch-Mode seems purpose-built for this sort of thing. Adding another header field doesn't really help a lot.
(A server will naturally ignore either a new Sec-Fetch-Mode value or the Sec-FedCM-CSRF thing. The value of the former is that it will compress better and it reuses an existing mechanism.)