Thanks for proposing this! If I follow correctly, this would basically allow SharedArrayBuffer-enabled documents to load cross-origin iframes as long as the iframe requests use CORS (in the spirit of X-Bikeshed-Allow-Unisolated-Embed). This would imply that the cross-origin iframe's server would acknowledge that such data would be accessible to the embedding page, and thus it's safe to include in the same process even if SharedArrayBuffers or other precise time features make Spectre attacks more likely. (Meanwhile, browsers with support for cross-process frames could load such iframes in a different process without diverging in behavior from the user's perspective.)
I'm not 100% sure ad sites (etc) would go for this "full access" approach (where the embedding page could just as easily fetch the iframe URL and read its contents), but it does seem to make explicit that the data could be read with Spectre, and thus conveys the risks in a reasonable way. I imagine it might be sufficient.
I think this would resolve my main concern from the other thread (https://github.com/whatwg/html/issues/3740#issuecomment-436086642). @arturjanc, does this sound reasonable from your perspective?
> Navigations should also never require a preflight, therefore only requiring modifications to the final resource on servers (and redirects, if any).
For my own clarity, I think this is what you're proposing:

1. The embedder (`https://embedder.site/`) sends `Cross-Origin-Frame-Policy: cors`.
2. It embeds `<iframe src="https://cross-origin.site/">`.
3. We fetch `https://cross-origin.site/`, sending an `Origin` header.
4. If `https://cross-origin.site/` responds with `Access-Control-Allow-Origin: https://embedder.site` and `Access-Control-Allow-Credentials: true`, we allow the embedding. Otherwise, we return a network error.

Is that right?
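Spelled out as a rough request/response sketch (using the placeholder hosts above; values are illustrative only):

```
GET / HTTP/1.1
Host: cross-origin.site
Origin: https://embedder.site

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://embedder.site
Access-Control-Allow-Credentials: true
```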
It's not clear to me what capabilities this would expose. Like, does the embedder get DOM access to the embedee? That seems like something we'd want to avoid. As @csreis notes, this also seems to create the risk that page contents are directly revealed via `fetch()`, which also seems like something we'd want to avoid.
CORS-RFC1918 took a different approach, forcing a preflight, but not forcing CORS access to the page itself. That might be an interesting approach here as well, as it would reduce the risk that page content would be inadvertently exposed to an embedder by making it opt-in (via the preflight response), but not directly exposing the page data to the embedder (by not requiring CORS headers on the non-preflight response itself).
> It's not clear to me what capabilities this would expose.

It mainly would allow for process reuse in process-constrained environments, as stated in the OP.

> Like, does the embedder get DOM access to the embedee?

See the last paragraph of the OP.

> As @csreis notes, this also seems to create the risk that page contents are directly revealed via `fetch()`

It's not clear to me how much it's worth trying to distinguish that case from putting something in the same process. It seems it might give a false sense of security.
For opt-in, I like @mikewest's preflight-based approach from https://wicg.github.io/cors-rfc1918/#shortlinks. This should allow the owners of cross-origin resources to respond to `OPTIONS` requests with something like `Access-Control-Allow-Embed: true` without also allowing direct reads of response contents via `fetch()`. I sympathize with the concern about giving server owners a false sense of security, but would prefer to avoid encouraging server owners to allow direct access to their responses (e.g. we don't want ad network resources to be directly readable via CORS since that would reveal interesting information about users).
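A rough sketch of what that preflight could look like (`Access-Control-Allow-Embed` is the placeholder name from this thread, not an existing header; hosts are the placeholders used above):

```
OPTIONS / HTTP/1.1
Host: cross-origin.site
Origin: https://embedder.site
Access-Control-Request-Method: GET

HTTP/1.1 204 No Content
Access-Control-Allow-Embed: true
```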
There is a possibility that Spectre-like attacks would be able to exfiltrate the contents of the frame, but they would likely be less powerful and noisy; and, importantly, they wouldn't work against browsers with OOPIFs (whereas if we require regular CORS opt-in via `Access-Control-Allow-Origin` then we would expose the contents also in browsers with OOPIFs, where introducing the leak isn't necessary).

A benefit of this approach is that it would enable identical behavior across browsers: on pages with `Cross-Origin-Frame-Policy`, all browsers would send a preflight on cross-origin iframe requests and would require the server to opt in. However, as browsers adopt OOPIFs the responses would become safe against exfiltration via speculative execution attacks, while still requiring the server to opt into embedding (which is okay because servers will have to do this in the short term anyway).
Also, stepping back a bit, do we even need the `same`/`same-site` switch in this case? I think we can allow same-origin frames by default, and for any non-same-origin framing requests we could use the CORS / preflight approach to require resource owners to opt in. This would also be safer because we would prevent `foo.example.org` from declaring itself eligible to frame `bar.example.org` while getting access to high-res timers or other dangerous APIs, allowing it to potentially exfiltrate cross-origin contents from its sibling subdomain.

If we did this, then the header could just become `Cross-Origin-Frame-Policy: 1` (or some other more descriptive name/value).
Also, to push this idea as far as possible, could we even completely fold this into the `Upgrade-No-CORS` header? Then the header would require sending all non-same-origin subresource requests as CORS and do the same for iframes as discussed above. It would also have to apply recursively to all frames, but this should be okay because, as outlined above, servers would have to opt in or otherwise the frame would not be loaded.
I think this might be conceptually simpler for developers while giving us all the security properties we need for the L2 mechanism.
`Upgrade-No-CORS` is specifically named after Fetch's "no-cors" mode. Navigations use the "navigate" mode, which is similar, but different. Also, per your proposal navigations wouldn't use "true" CORS, they'd only preflight. So I think I agree with your plan, but we need a new name for the header. Perhaps `Use-CORS`, with the explanation that for subframe navigations this means a specific kind of preflight only.
So, why not also use this approach for popups you want to cooperate with?
So while trying to explain this model to someone over lunch I realized this allows for escaping the "Use-CORS" restriction in browsers without process-isolated frames/popups.
attacker.example specifies Use-CORS and the correct opener policy. It loads collaborator.attacker.example in a frame/popup and that positively replies to the preflight. collaborator.attacker.example isn't itself restricted by CORS, however, and can load all kinds of "no-cors" resources into the process.
It seems to me we need to require that collaborator.attacker.example also specifies Use-CORS.
I talked with @arturjanc; his assumption was that Use-CORS is inherited cross-origin in these cases, which is probably acceptable as the navigation response opted into it via the preflight.
Not discussed yet: if you restrict to same-origin, what are the implications for `document.domain` (also raised at https://github.com/whatwg/html/issues/3740#issuecomment-399194225) and `SharedArrayBuffer`? It seems those would then not work for documents that are cross-origin but same-site. It's not entirely clear to me if that's desirable. (It's ideal, but...)
I'm no longer entirely convinced we need the preflight.
I think instead the model should be such that once a browsing context group has its "Use-CORS" flag set, any resource navigated to within that group needs to have the `Use-CORS` header set. And if not, the network layer will return a network error. I'm not entirely sure if we should require redirects to have this header set or not. (A redirect can be navigated to if it doesn't have a `Location` header or the value of that header cannot be parsed. In that particular case it definitely needs to have the header set, but I'm less clear on when we simply follow it to somewhere else.)
Then, there's the question of credentials. Other than with `fetch()` (which defaults to "same-origin" for credentials), "no-cors" fetches will always include credentials across origins. So we need to at least support the equivalent of HTML's `crossorigin="use-credentials"` (this made me think that `Cross-Origin: use-credentials` as a header might not be too bad). Whether we also need `crossorigin="anonymous"` is less clear, but that would allow for a less complicated CORS setup.

If we allow variance in credentials, it probably does not make sense to require it to match across documents. It's reasonable for different documents (esp. cross-origin) to have their own "no-cors" credentials policy.
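For reference, the existing markup-level equivalents for subresources look like this (the CDN host is hypothetical):

```html
<!-- CORS fetch that includes credentials cross-origin -->
<script src="https://cdn.example/lib.js" crossorigin="use-credentials"></script>

<!-- CORS fetch without credentials; the simpler server-side setup -->
<img src="https://cdn.example/photo.png" crossorigin="anonymous">
```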
For a moment I was worried about service workers and the cache API being able to introduce opaque responses into "Use-CORS" documents. However, this concern is probably unfounded. A service worker is handled by it not being able to return opaque responses to "cors" fetches (we'll upgrade before hitting the service worker). The cache API will need to be restricted from returning opaque responses in "Use-CORS" environments somehow. Currently you cannot do anything with such responses anyway in documents so maybe that's enough (assuming an implementation that leaves the bytes in the "storage process" until requested), but we'll need to be cautious going forward. An alternative is to prevent them from being returned altogether when the top-level flag is set.
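To illustrate what's at stake with opaque responses in the cache API, a small sketch of what pages can already do today (the host is hypothetical):

```js
// In a module / async context.
const url = "https://third-party.example/script.js";
const response = await fetch(url, { mode: "no-cors" });
console.log(response.type, response.status); // "opaque", 0 -- body is not readable
const cache = await caches.open("v1");
await cache.put(url, response.clone()); // opaque responses can be stored...
const stored = await cache.match(url);  // ...and handed back to the document later
```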
When does `Use-CORS`/`Cross-Origin` take effect:

- When a browsing context group is created from a top-level navigation that has `Cross-Origin-Opener-Policy` set.
- The `Cross-Origin-Opener-Policy` requirement doesn't apply to further navigated resources loaded in that browsing context group. For those only this new second header matters (except for top-level browsing context navigations with a non-matching `Cross-Origin-Opener-Policy`).

As for `document.domain`:

- One option is to keep `document.domain` and to continue to key agent clusters on sites. Same-site resources can only be attacked if they opt in via CORS or this new second header though (assuming they're navigable).
- Another option is to require `Cross-Origin-Opener-Policy: same-origin ...` when using this new second header and change the browsing context group's agent cluster keying such that the key is now origin, effectively disabling `document.domain`. Unless folks are particularly motivated to do this exercise, this seems unlikely to happen. Attempts at making `document.domain` worse in the past have largely failed and tightly coupling the worsening with important new features puts the new feature at risk.

A thing we haven't really discussed or at least written down in these threads is how `SharedArrayBuffer` is enabled. I propose that `SharedArrayBuffer` is always there, but only agent clusters with a flag set allow it to be messaged between agents. This means that the ECMAScript standard can continue to say it's always exposed and HTML (the host) will impose the limitation on usage. By not allowing it to be messaged it's effectively equivalent to and no more dangerous than `ArrayBuffer`.

Making `StructuredSerializeInternal` throw ("DataCloneError") when it's invoked in an agent cluster that wasn't created under the right circumstances should be sufficient for this, I think.
Feedback and attempts to address it:

- `Cross-Origin` would be used for all responses, so if you don't consider the possibility of being framed, someone might end up framing you. It seems somewhat reasonable, but I'm a little wary of adding this additional complexity. (Remember that currently Chrome requires none of this.) We don't have to add `allow from *` necessarily as you could echo the origin value after `allow-from` (similar to what we require with CORS). Potential issues:
  - `same-origin` is spelled `sameorigin` (case-insensitive too) and there's no `same-site` as we have elsewhere.
  - … `allow-from`.

While working on #4284 I realized that non-auxiliary top-level browsing contexts can also get assigned external state:

- The name: when `Cross-Origin-Opener-Policy` is specified we should get rid of the name. This has no negative consequences that I can think of. The site itself can still set it.
- Sandboxing flags. (If a popup with inherited sandboxing flags is navigated to a document with `Cross-Origin-Opener-Policy` set we run into this.)

So, given that inheriting sandboxing flags is such a niche case (can only happen when opening a popup from a sandboxed frame with `allow-popups`), I think we should instead network error in that case, such that when you create a new browsing context group it always starts out without external state. That seems like a much safer long term design. (If this turns out to be prohibitive for some unlikely reason we can always stop returning a network error at that point and inherit the sandbox flags after all.)
Just to clarify, are the name and sandboxing flags comments mainly about `Cross-Origin-Opener-Policy` (#3740) and not `Cross-Origin`? I didn't see a `Cross-Origin` connection to them at first glance.
I'm not opposed to clearing the name when we do a replacement. Chrome scopes names to the browsing context group anyway, and it seems reasonable not to import an old name when we create a new browsing context group.
I'm also ok with an error in the sandboxing flags case, with the potential to relax that later if it interferes with sites in practice.
Can a `Use-CORS` page contain an iframe to a cross-origin page that doesn't have `Use-CORS`? What happens if a SharedArrayBuffer is postMessaged to the iframe?

Update: BroadcastChannel and the service worker clients API create the same problem with same-origin pages.
https://gist.github.com/annevk/17f580379c45802d5c3aef5a8fd53c7d has more details on the processing model. Feedback welcome!
@jakearchibald the `iframe` case would result in a network error for the frame. `BroadcastChannel` does not pose a problem as those pages would be in different agent clusters (they'd get the messageerror event). Service workers are also in their own agent cluster.
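For the `BroadcastChannel` case, a sketch of what the receiving page would observe (the channel name is a placeholder):

```js
const channel = new BroadcastChannel("example-channel");
channel.onmessage = (e) => console.log("received", e.data);
channel.onmessageerror = () => {
  // A SharedArrayBuffer posted from a same-origin page in a different agent
  // cluster cannot be deserialized here, so messageerror fires instead.
  console.warn("message could not be deserialized");
};
```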
Edit: Updated link
I'm wondering if it would be possible to extend this with a mode which simply set all fetch requests' credentials modes to `'omit'`, without also upgrading to CORS?

`Cross-Origin: omit`

This could allow sites to adopt the `Cross-Origin` header to enforce that they do not ever request any user-specific data from third parties, but would still be able to link to anonymous public resources, cache images and scripts on CDNs, and preserve the more-or-less free embedding that the web has always had.
I had written up a proposal along those lines here a few days ago, which seems very similar in spirit to this.
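In `fetch()` terms, the mode proposed above would roughly amount to the following for every cross-origin subresource request the document makes (a sketch; the host is a placeholder):

```js
fetch("https://third-party.example/widget.js", {
  mode: "no-cors",      // no CORS upgrade, unlike the Use-CORS proposal above
  credentials: "omit",  // but never attach cookies or other credentials
});
```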
Thanks for mentioning, @clelland! I think the credential-less mode is worth considering, as it would allow sites to pull in effectively public third party subresources without needing CORS on them, and thus impose fewer restrictions without giving up much of the security value. (Presumably documents could optionally request credentialed subresources with CORS if they wanted them.) I also imagine that would be easier to eventually enable by default than CORS-only, and using it here for enabling precise timers might be a step in that direction.
The main hole is probably intranet resources, but maybe something like RFC1918 can help (cc @mikewest)?
What are others' thoughts on full CORS vs credential-less requests?
Adding @bzbarsky and @ehsan, who brought up similar ideas about a credential-less mode (or default) in the past. Requiring it to enable precise time (perhaps instead of CORS for all cross-site subresources, as we've been discussing here?) might be a nice step towards making credential-less subresources be the default, which would help cover the cases CORB misses today.
I like this idea -- it would help with adoptability on pages with many cross-origin dependencies where it's not easy to get 3p resource providers to enable CORS.
I commented in the document, but I'll repeat here:
- At the moment it doesn't address nested or auxiliary browsing contexts. That's a (big) problem for browsers that have a process-per-browsing-context-group for the foreseeable future. (I also understood this to be the case for Chrome for Android.) If you did require inheritance of sorts into those (similar to the proposed `Cross-Origin`), it's unclear how you'd deal with storage (Indexed DB, etc.).
I would expect to still require nested and auxiliary browsing contexts to opt-in to be included in a precise-time-enabled process, as we've been discussing here. We were requiring them to send the Cross-Origin header (or something like it?) to opt-in, rather than requiring them to use CORS or depend on credential-less requests. (I do agree that using a credential-less request for an iframe wouldn't protect it, since the credential-less document might load cookies or local storage into the process.)
This does seem to require inheriting the credential-less behavior throughout the browsing context group as well, just as we were doing for CORS, so that attacker1 can't iframe attacker2 and start requesting subresources with credentials.
I think the main change being proposed is to use credential-less requests for JS/CSS/images/media/etc (not iframes) rather than using CORS for them, to make it somewhat less disruptive.
And yes, intranet seems like the main tradeoff to me, so any ideas there are welcome.
Did you consider requiring HTTPS for all fetches within the browsing context group? I think that would rule out a substantial amount of "intranet"-based resources.
Requiring HTTPS for fetches is an interesting thought for dealing with intranets, since most intranet servers probably wouldn't respond. It feels like it's going a bit against the motivation of the credential-less mode, though-- that mode is attractive because it's easier to adopt (than CORS-only), letting pages pull in subresources that are easy for the attacker to get to anyway. Requiring HTTPS for all fetches would make the mode harder to adopt for any sites still using HTTP subresources (especially for public, non-credentialed HTTP requests). It seems unfortunate to disallow the most public subresources out there in order to make it hard to access intranet servers.
Then again, maybe it's worth considering as a point in the space (with a slightly different protection vs adoptability tradeoff) if there isn't a way forward on intranet ideas like https://wicg.github.io/cors-rfc1918/.
You might know something I don't, but the issue about IPv6 seems unlikely to ever get resolved and intranets use public IPv4 addresses too.
The right approach seems to depend a lot on the deployment model we have in mind for this feature.
If we're thinking of it as an opt-in to allow the use of SABs, we can likely enforce the HTTPS restriction because developers who want to use them should be able to make their applications available over HTTPS and avoid mixed content. Arguably, SABs and other powerful features we'll gate on this should require a Secure Context -- applications which want to use them are by definition under active development and getting developers to use secure transports aligns with everything we've said over the past decade :)
If we're considering enabling this by default then credential-less mode is certainly more web-compatible (though still likely to cause breakage, which we should quantify); @csreis to clarify, are you suggesting to enable credential-less mode always, or only in cases where the developer wants to enable SABs & co?
For the latter case, the HTTPS restriction should be fine adoption-wise. To roll this out always (not as something to gate SABs) we could lift this restriction since pages can already request cross-origin `no-cors` resources. This would be security-positive: it would protect cross-origin responses for all public resources, and while it wouldn't protect intranet resources, it also wouldn't provide attackers new capabilities -- since we'd still require HTTPS for SABs.
Yes, I suppose we could keep the HTTPS restriction for SAB access even if we don't use it as a potential future default (though indeed I was hoping we could phase out the extra requirements by making them defaults down the road).
On that note, I've been working on the following doc, to share some of our reasoning about why we might want to move towards new future defaults and use this sort of header as a stepping stone (depending on which requirements we settle on): Long-Term Web Browser Mitigations for Spectre
I just shared it on the isolation-policy@chromium.org list here. Hope it's helpful for thinking about what we should require here.
Assuming there's indeed RFC 1918 for IPv6 Mozilla is okay with advocating usage of that (and IPv4 equivalent) for private network devices and essentially giving up on IP-authenticated protections for HTTPS resources beyond that. (Requiring HTTPS still seems like a good defense-in-depth strategy and in line with our shared goal of moving the web to HTTPS.)
If we do go down that path, I suspect we still want "enforce CORS mode" to get your credentials back?
So we'd have:
- `No-CORS-Policy: same-origin-credentials`
- `No-CORS-Policy: cors-include-credentials`
Notes:
- … the `Cross-Origin` header proposal, so it's useful for browsers with process isolation at the browsing context group level.

This seems like a pretty reasonable deployment strategy to me, especially when we can eventually combine it with more restrictions on the web's ability to talk to whatever we consider internal.
> Assuming there's indeed RFC 1918 for IPv6
I commented in the other bug, but I'll paste it here, as more folks are probably paying attention:
""" My plan for shipping [IPv6 restrictions in CORS-RFC1918] in Chrome would be to have a default mode that defined intranet in terms of the user's locally-defined network prefix, and a set of configuration options for administrators to define more granular ranges. My hope is that the prefix-based approach would exclude the printers and routers on my local network (via their local address), which is the main risk for home users, and responsible sysadmins could take responsibility for the rest. :) """
Unfortunately nothing in the name conveys HTTPS everywhere. I think that's acceptable, but open to suggestions.
SCORS?
Mozilla would really like to drive this to a conclusion somehow. Our current thinking is still the two modes outlined in https://github.com/whatwg/html/issues/4175#issuecomment-471936918. We'd require full HTTPS. Additionally we'd block accessing private and local IP addresses and potentially implement CORS and RFC1918 in the future for the "same-origin-credentials" mode. (We would not do anything with that by default for now.)
Also, to restate: the plan for `SharedArrayBuffer` is to have it enabled by default, irrespective of these headers, but make certain methods throw for it (e.g., `postMessage()`) unless the agent cluster's high-resolution timer flag is set. This has some risk (sites assuming that if `SharedArrayBuffer` is there they can `postMessage()` it), so we might have to change strategy if it becomes a problem. Hopefully usage is not high enough yet for that to be one.
So, what do Apple and Google think about this? Does it come down to bikeshedding the name and formalizing all of this, or is there anything else that's blocking this?
In a discussion between @mystor, @naskooskov, @creis, @erik-anderson, @michaelkleber, and a few others at BlinkOn today, we tossed around a variant of this plan that's worth discussing: what if we leaned a little harder upon `Cross-Origin-Resource-Policy` instead of relying on CORS? That is, what if we enforced `Cross-Origin-Resource-Policy: same-site` by default for all subresources and frames embedded in a context that wanted access to high-resolution timers, and introduced a new `cross-site` token to enable resources to expose themselves to additional risk if they'd really like to do so?
This approach seems like it might address the same fundamental goal of ensuring that resources aren't exposed to the risks of potential inclusion into someone else's process by default, without the additional risks of over-enthusiastically opening up CORS settings. It also seems to have a fairly low deployment cost: framed documents and embeddable resources can make a high-level decision about the ways in which they can be embedded, and represent that decision via a simple, static response header. Sites that don't care to do so (including intranets) will be protected by default.
To make this work, we'd need to apply CORP to nested navigation requests in addition to `no-cors` requests, introduce a mechanism to enforce the new restriction on requests initiated from a given document and its descendants (`Cross-Origin-Resource-Policy-Policy: ...`? :) ), and create a new `cross-site` value (or origin safelist?) for the CORP header.
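Concretely, the pieces might look something like this (all names and values here are straw men, using the structured-headers boolean `?1` that comes up later in this thread as a stand-in value):

```
# On a document that wants high-resolution timers, and therefore accepts the
# restriction on everything it (and its descendants) loads:
Cross-Origin-Resource-Policy-Policy: ?1

# On a cross-site subresource or framed document that agrees to be embedded
# by such contexts:
Cross-Origin-Resource-Policy: cross-site
```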
WDYT? Does this have reasonable properties? Or does credentialless/CORS-only offer mitigations that this approach would miss out on?
That's an interesting idea. What comes to mind:

- … `cross-site`?
- … `cross-site`?
- It would leave same-site-but-cross-origin resources in the cold, but perhaps `same-origin` could be an acceptable default?
- What other APIs can be built on top of this primitive that we'd be comfortable with? (E.g., memory APIs have been mentioned, various timing APIs were hoping to use this as some kind of primitive.) And what do we advise developers about the contents of resources that are declared `cross-site`?
These are good questions, and I could imagine us resolving it either way. On the one hand, the side-channel attacks that memory APIs enable seem similar in kind to the attacks Spectre relies upon. It might be reasonable to treat the CORP header as something of a "I accept the risk of side-channel attacks for this resource." flag. On the other, CORS has a more specific origin-based opt-in and clearly demarcates a resource as explicitly legible by its loader, and maybe that granularity and explicitness is a better fit?
I think I'd be comfortable with the former "Enable side-channels" opt-in, as it would only enable attacks against those resources that explicitly allow themselves to be embedded, which seems to reduce the attack surface somewhat reasonably. I'd like to hear opinions from other folks here!
> It would leave same-site-but-cross-origin resources in the cold, but perhaps `same-origin` could be an acceptable default?

I could live with that! `same-site` matches the level of isolation that we've achieved so far in Chromium, and is probably a bit more deployable insofar as it would affect fewer resources, but it probably makes sense for us to be a little more aggressive with the default here.
> I'm a little worried about bringing navigations (child and auxiliary, FWIW)

Yes, we'd need to do this for popups as well, thanks for mentioning that.

> into the CORP mix as we somewhat explicitly decided against that. As long as we only check against the "source browsing context" (which should really be "source document") it might be okay though. I.e., no traversal of the document tree.

Especially in the context of the `Cross-Origin-Resource-Policy-Policy` declaration, checking only the source document seems reasonable (as that document would either itself be the top-level, or would have opted-into whatever level of protection allowed it to be embedded).
After talking with @mystor again earlier in the week, this still doesn't feel like a bad solution. In the hopes of solidifying our understanding of how a CORP-only mode might work, I sketched it out in a little more detail at https://github.com/whatwg/fetch/pull/893. It would require some work in HTML that I only hand-waved into existence, but the mechanism should be clear.
WDYT?
Presumably this would also still require recursive declaration of `Cross-Origin-Resource-Policy-Policy`¹? But now you'd also have to set `Cross-Origin-Resource-Policy` to an appropriate value which has some nice properties that prior proposals did not have (better distinction between embedder and embeddee).
Modulo the long term concerns about this ending up meaning the same thing as CORS minus some of the protections I rather like it. If Chrome is supportive I suspect we should go for it.
¹ Bikeshed: `High-Res-Time: T`.
> Presumably this would also still require recursive declaration of `X-Bikeshed-Whatever`?
Recursive declaration or just cascading/inheriting the setting from one's embedder/opener? We certainly need one of the two.
> Modulo the long term concerns about this ending up meaning the same thing as CORS minus some of the protections I rather like it.
I actually see more risk than protection in requiring CORS. Especially in the short term, I worry that CORS exposes more detail than we need to in order to obtain the core security properties we're interested in here (namely that victims must opt-into potential victimhood). There's a meaningful distinction between "You, vague acquaintance, may embed me." and "You, dearest friend, may read me." I'd like to maintain that aspect of the status quo.
This seems like a good first step. If it turns out that we need more steps, we have credentialless/CORS to fall back on, and we also have the ability to add origin-level specificity to CORP (which I know @arturjanc wants us to do anyway) so that embedees can make more granular decisions.
> If Chrome is supportive I suspect we should go for it.
@csreis and @naskooskov were in the discussion earlier this week and seemed on board with the general direction. Between us, I think we can say that Chrome's supportive.
> Bikeshed: `High-Res-Time: T`.
I'll defer to you on the header name¹ (but `...-Policy-Policy` makes me grin. :) ).
Nit: I think structured headers landed on `?1`/`?0` as the boolean serialization.
¹ I'll note, though, that I'm not sure that focusing on high-res timers is the right way to go, as the attacks we're all worried about are certainly possible without them. The declaration here is that each embedded resource should consent to the embedding, with all its risks and rewards. Perhaps we could frame the concept within that vocabulary? `Embedee-Consent: require`? That's not great, but maybe directionally interesting?
Given that we require CORP I'm not seeing much difference between recursive declaration or inheritance, except for B saying it's okay with A fetching it (CORP `cross-site`), but not using it as a document (no TBikeshedD). This difference might matter when:

- Loading B as a document exposes `localStorage` or other data related to B that wouldn't be otherwise exposed.
- B exposes a `postMessage()` end point in CORP-returning scenarios. (This is farfetched.)

So I'd be slightly leaning towards requiring recursive declaration to be on the safe side.
The header name suggestion wasn't so much about attacks, but more about what class of features a site obtains. Though for memory inspection it wouldn't make much sense. I don't have a strong opinion here.
(Ah, https://httpwg.org/http-extensions/draft-ietf-httpbis-header-structure.html#boolean is newer than https://tools.ietf.org/html/draft-ietf-httpbis-header-structure. Good to know.)
As a meta-note, I think the main thing we're looking for here is some opt-in header that would allow us to give a page SABs, etc, without allowing that page to steal cross-origin data with these powerful features. Developers will be compelled to make their applications compatible with this mechanism to enable functionality they need (as opposed to most other security features, which are rarely a must-have, which means we need to pay more attention to the ease of adoption). In that sense, whether we base the opt-in on CORS, CORP, or something else is probably less important than getting vendor agreement.
More concretely, I'd agree that something like `CORP-Policy: ?1` to enable this and `CORP: cross-origin` for resource opt-in could be reasonable, but there are a few things to consider.

1. We'll likely still need the page to set COOP to enable SABs. Otherwise, we'd create a class of attacks where an attacker-controlled page without `CORP-P` loads `no-cors` resources and then navigates to a same-origin document with `CORP-P`, which gives it SABs. Unless there is a process swap on same-origin navigations, this would result in the document being able to extract data in its address space, which could include cross-origin resources which didn't opt in. This is an even bigger problem for browsers without OOPIFs, and one of the reasons COOP needed to pay attention to potentially create a new browsing context group when navigating.
2. If we require COOP, then we probably don't need `CORP: cross-origin` on auxiliary contexts. COOP will make sure that cross-origin popups will be in a separate process, so we only need opt-in for iframes (whose relationship with the embedder is not affected by COOP). This could facilitate adoption because we wouldn't place additional restrictions on auxiliary browsing contexts and require their opt-in.
3. `CORP: cross-origin` seems fine as an opt-in for resources, but I worry about it applying to frames. The reason for this is that an iframed document which sets `CORP: cross-origin` will expose, for example, all resources it loads via CORS (in browsers without OOPIFs). The providers of these resources trust the domain of the framed document, but didn't agree to have their resources exposed to arbitrary external sites which embed the trusted document which requests them. For browsers without OOPIFs this feels a bit like a footgun: setting `CORP: cross-origin` on a renderable document could undermine security of unrelated resources requested by that document. OTOH trusting a given origin via CORS already assumes trust that it will be free of security bugs (e.g. XSS), or otherwise the resource may leak, so perhaps this is fine. But we should at least clearly document this.
4. Security-wise, shifting from choosing to trust the requesting origin (via CORS) to allowing everyone to access a certain resource is somewhat scary; it steers developers towards opening up global access to their resources which are fetched cross-origin, without any way to restrict who can access them.
5. Adoption-wise, this is more deployable than requiring CORS, but less deployable than credential-less mode; it requires explicit opt-in also for non-authenticated resources (e.g. loaded from CDNs), which doesn't seem necessary.

The last two points aren't as big of a deal -- this proposal addresses some concerns about CORS at the expense of introducing others, but it also gives us a path forward. If necessary, we can follow up with the extensions @mikewest mentioned in https://github.com/whatwg/html/issues/4175#issuecomment-482557335

If non-Chrome folks are okay with (3) and we require COOP, then this seems like a reasonable approach.
A naive question: what is the difference between a response with `CORP: cross-origin` and a response with proper CORS header(s)?

In terms of security: CORS allows the requester to see the contents of the response, whereas `CORP: cross-origin` would permit the resource to be embedded in a cross-origin page without allowing that page to directly see the resource's contents.
But APIs such as the memory-measurement API are blocked because they can be used to sniff the contents of no-cors resources. If we allow such APIs to be used on pages containing `CORP: cross-origin` responses, doesn't that mean we allow developers to see the contents of such resources?
With `CORP: cross-origin` the page which loads the resource (and e.g. uses the memory measurement API) can infer some information about it -- in this case, its size -- but cannot obtain the actual contents of the resource. I believe @mikewest considers direct access via CORS to be qualitatively different from side channels which give the page only limited information about the resource; see this comment.

Note that the page that wants to use the memory measurement API would only be allowed to load resources which explicitly allow being loaded by such pages -- the resource would need to set a `CORP: cross-origin` response header or otherwise the browser would block the load.
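Side by side, the two opt-ins look roughly like this (a sketch; `embedder.example` is a placeholder):

```
# CORS: the response body becomes readable by the requesting origin via fetch()
Access-Control-Allow-Origin: https://embedder.example

# CORP: the response may be embedded cross-origin, but its contents stay opaque
Cross-Origin-Resource-Policy: cross-origin
```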
I see - I'm not sure if the distinction will remain meaningful if we continue to add the same kind of APIs (or, will we need a stricter restriction for even riskier APIs?), but anyway thank you for the pointer.
> We'll likely still need the page to set COOP to enable SABs.
>
> Otherwise, we'd create a class of attacks where an attacker-controlled page without CORP-P loads no-cors resources and then navigates to a same-origin document with CORP-P, which gives it SABs.

I agree that we need to commit those two pages into separate processes. COOP might be one way of doing that, another might be considering CORPP in the process allocation logic (e.g. `{https, example.com, 443, CORPP: Require}` != `{https, example.com, 443, CORPP: None}`). I think either would be possible in Chrome's navigation code (but @creis will have more informed opinions on that specific point than I).
> If we require COOP, then we probably don't need CORP: cross-origin on auxiliary contexts. COOP will make sure that cross-origin popups will be in a separate process, so we only need opt-in for iframes (whose relationship with the embedder is not affected by COOP). This could facilitate adoption because we wouldn't place additional restrictions on auxiliary browsing contexts and require their opt-in.
If requiring COOP simplifies the logic, great.
> CORP: cross-origin seems fine as an opt-in for resources, but I worry about it applying to frames. The reason for this is that an iframed document which sets CORP: cross-origin will expose, for example, all resources it loads via CORS (in browsers without OOPIFs). The providers of these resources trust the domain of the framed document, but didn't agree to have their resources exposed to arbitrary external sites which embed the trusted document which requests them.
I'm not sure any proposal on the table satisfies this concern for browsers without OOPIFs. Whether we require CORS or CORP or any other opt-in for credentialed access to subresources, they are opting into dangerousness by allowing themselves to be embedded in a document which itself may allow embedding.
> - Security-wise, shifting from choosing to trust the requesting origin (via CORS) to allowing everyone to access a certain resource is somewhat scary; it steers developers towards opening up global access to their resources which are fetched cross-origin, without any way to restrict who can access them.
As I noted above, I worry about the inverse of this concern. I expect many developers of embeddable widgets (read: ads) to do the simplest thing possible when confronted with a requirement to do something in order to support games that themselves require tools like SABs. The simplest thing possible, of course, is just to reflect the `Origin` header (as I sincerely doubt that any advertiser is going to create a safelist of the thousands upon thousands of sites a given ad could appear in). That scenario seems more likely to me to leak user data by making personal information directly legible via JavaScript.
> - Adoption-wise, this is more deployable than requiring CORS, but less deployable than credential-less mode; it requires explicit opt-in also for non-authenticated resources (e.g. loaded from CDNs) which doesn't seem necessary.
Requiring CORP for all subresource loads has the advantage of dealing with intranets and other position-on-the-network authentication schemes.
> - We'll likely still need the page to set COOP to enable SABs.

> COOP might be one way of doing that, another might be considering CORPP in the process allocation logic.
I'd prefer to decouple resource loading restrictions from window-based restrictions. It will already be difficult enough for developers of non-trivial apps to deploy COOP and CORP individually (except for `CORP: cross-origin`, which we shouldn't encourage in general), so adding window-based restrictions to CORPP seems to conflate two classes of protections, making the mechanism harder to understand and use.
Especially given that COOP is a valuable mechanism that developers should use outside of the context of high-res timers / CORPP, I see some benefit to settling on it as the recommended way to provide browsing context isolation rather than adding similar functionality to CORPP.
> - CORP: cross-origin seems fine as an opt-in for resources, but I worry about it applying to frames.

> I'm not sure any proposal on the table satisfies this concern for browsers without OOPIFs.
You're right. I think we could make this no worse than the past proposal by saying that if you set `CORP: cross-origin` on a document that contains private data you should make sure that this document cannot be framed by untrusted sites by using X-F-O / frame-ancestors.
> Requiring CORP for all subresource loads has the advantage of dealing with intranets and other position-on-the-network authentication schemes.
I thought we had (mostly) agreed that the credential-less mode was also okay in the context of local network resources if we made it require HTTPS. This would be a problem for switching to credential-less mode as the global web default (since it would break cross-origin fetches over HTTP), but for the opt-in "give me SABs" feature it would be acceptable -- developers of such applications would have to avoid mixed content, which is what we want anyway.
@arturjanc: Is the following a reasonable summary?
If so, great. Ship it. (Also, I'm about to disappear for ~2 weeks, so please don't block this conversation on me. :) )
> I'd prefer to decouple resource loading restrictions from window-based restrictions.
That seems like a reasonable desire, and I can certainly see COOP as being a good solution for auxiliary browsing contexts. I'm less convinced that it's effective for framed documents as it seems to require OOPIF, but there's value in consistency, and if we collectively believe that it's better for frames to be consistent with popups than with subresources, great! I'm happy either way.
> It will already be difficult enough for developers of non-trivial apps to deploy COOP and CORP individually (except for CORP: cross-origin which we shouldn't encourage in general)
I think we should be honest here: `CORP: cross-origin` will be the single most common variant of the header, because resources that don't expect to be embedded won't send the header at all, and resources that do expect to be embedded will send `cross-origin` both because it's the simplest thing to do (see above), and because we aren't providing any more granular options (see https://github.com/whatwg/fetch/issues/760).
As you noted above, that explicit opt-in is the high-order bit. CORP exists, and slots into the opt-in role pretty cleanly IMO.
> I think we could make this no worse than the past proposal by saying that if you set CORP: cross-origin on a document that contains private data you should make sure that this document cannot be framed by untrusted sites by using X-F-O / frame-ancestors.
Until we have more granular options in CORP, I agree that we should recommend combining it with `frame-ancestors` (as XFO doesn't have origin-level granularity in all browsers).
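For example, a framed document with private data might pair the two like this (the allowed embedder host is a placeholder):

```
Cross-Origin-Resource-Policy: cross-origin
Content-Security-Policy: frame-ancestors https://trusted-embedder.example
```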
> I thought we had (mostly) agreed that the credential-less mode was also okay in the context of local network resources if we made it require HTTPS.
I got the impression from @mystor that they weren't happy with that solution generally, and I got feedback from folks like @ericlaw1979 that he'd be deeply unhappy with the result that HTTPS would be riskier for an intranet deployment than HTTP, as that creates weird incentives for encryption.
I like that CORP makes this choice explicit (and that because it requires explicit declaration, 99% of resources won't do it, and will therefore be protected).
> This would be a problem for switching to credential-less mode as the global web default
This is certainly the default I'd like us to get to over time. I think it's the right direction for the web to move. But I don't think we should block features like SABs on getting that mechanism right and figuring out a deployment story.
I don't think COOP is sufficient to handle popups, unlike what I initially thought when Artur asked me. At least, the latest iterations of the idea always required both COOP and TBD. And TBD always needed explicit opt-in for frames and popups. And the reason TBD needs that is because folks wanted to use COOP's unsafe-allow-outgoing as well as TBD. If we require unsafe-allow-outgoing to not be specified it works, but that might prevent some use cases we initially wanted to cater to.
https://github.com/whatwg/html/issues/3740#issuecomment-433945551 sketches out v1 for the various headers needed to enable `SharedArrayBuffer` and friends.

At Mozilla we think we'll quickly need to address a need @arturjanc and @csreis hinted at: being able to have cross-origin frames, either in the same process (e.g., because it's a process-constrained environment), or in a different process.

Our idea around this would be to add a new keyword (`cors`) to the `Cross-Origin-Frame-Policy` header. If the `cors` keyword isn't set, the v1 semantics apply, and cross-origin/site navigations result in a network error. If the `cors` keyword is set, the CORS protocol semantics apply to frame navigations. Judging from https://wicg.github.io/cors-rfc1918/ this could mostly be done through modifications to Fetch, which makes this less difficult than I initially anticipated. Navigations should also never require a preflight, therefore only requiring modifications to the final resource on servers (and redirects, if any).

A risk here for the embedder is that the embedded document could redirect or navigate to an attacker. https://w3c.github.io/webappsec-cspee/ and sandboxing can be used to mitigate this, similar to how you'd combat XSS in your own document.

The short term advantage is that we could have something that works in all browsers more quickly; the long term advantage would be potentially saving on resource usage. And more speculatively this kind of trust relationship might also be beneficial to other APIs.

We'd like to implement this shortly after or together with v1.

Note that none of this has an effect on the `WindowProxy`/`Location` same origin-domain check. That will continue to consider such frames as being in a different origin.

cc @whatwg/security @rniwa @tomrittervg