/app/ could just embed an
(As for reasoning about URL paths, per the URL standard a URL path is a sequence, so we have that underlying concept already.)
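For concreteness, a minimal sketch of that segment view of a path, using the standard URL API (the example URL is just illustrative):

```js
// Per the URL Standard, a URL's path is modelled as a list of segments.
// The JS API exposes it as a string, but the segment view is easy to recover:
const url = new URL('https://example.com/app/index.html');
url.pathname;                            // "/app/index.html"
url.pathname.split('/').filter(Boolean); // ["app", "index.html"]
```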
googledrive.com would benefit from something like this… or they should just do what everyone else does and give subdomains to each user.
We'd have to change the meaning of the default scope from "*" to "the current path minus any terminal location + '/*'".
If we did something like this, wouldn't it be better to enforce scope equal to, or a subset of, "*" resolved against the service worker script URL?
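As a rough sketch of what enforcing that would mean at the register() call site (the sw.js path, the wildcard scope syntax, and the rejection behaviour are assumptions following this thread's examples):

```js
// Sketch only: scopes resolved against (and restricted by) the service
// worker script URL rather than the page URL. "/app/sw.js" is a made-up path.

// Default scope derived from the script's location:
navigator.serviceWorker.register('/app/sw.js');

// Explicit scope at or below the script's directory: would be permitted.
navigator.serviceWorker.register('/app/sw.js', { scope: '/app/inner/*' });

// Explicit scope above the script's directory: would be rejected under
// the restriction being floated here.
navigator.serviceWorker.register('/app/sw.js', { scope: '/*' })
  .catch(err => console.log('rejected:', err));
```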
Restricting scopes based on the URL of the current page can be trivially bypassed, either by using history.pushState to change the URL of the page (within the same origin), or by opening a new window/iframe at the desired (same origin) URL and then injecting JS to that frame.
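A minimal sketch of the pushState half of that bypass (all paths are hypothetical):

```js
// Page was actually loaded from, say, https://example.com/other/page.html.
// pushState can rewrite the visible URL to any same-origin path without a
// navigation, so a check keyed off the page URL at registration time sees
// whatever path the script chooses:
history.pushState({}, '', '/app/index.html');
navigator.serviceWorker.register('/app/sw.js', { scope: '/app/*' });
```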
It would however be possible to restrict scopes based on the URL of the Service Worker script (that's what I was suggesting on issue 224).
This wouldn't be trivially bypassable. The original URL of the loaded document would be remembered and checked by the algorithm. Unsure why you assumed otherwise?
That would stop you using history.pushState, but it wouldn't stop you opening a new window/iframe at the desired URL and then injecting JS to that frame, unless you're proposing that we follow the frame tree and window.opener hierarchy all the way back to the first page loaded from that origin?
It has always been possible for pushState() misuse to create requests that fall outside a given registration. Don't Do That (TM). The only fix would be to only allow origin-wide registrations, and we're not going to do that.
I think you said it best when you created this issue: "at some level, this is security theater, but is it still meaningful"
It would be nicer if we could get a better sense of what use cases path-based restrictions achieve today. For example, in the Google Drive case, the user isn't allowed to upload arbitrary JS. If the user is allowed to upload and inject arbitrary JS, the attacks John points out are already ridiculously powerful. I guess the SW allows you to make a longer-lived attack, but this is already a problem for applications like Google Drive (e.g., http://devd.me/papers/w2sp10-primitives.pdf).
More than security, I am worried about this restriction unnecessarily making service workers onerous to use. This restriction assumes that slashes in URIs have some meaning, but this isn't the case: slashes in URIs don't need to correspond to actual folders and might be query parameters for all we know. Now, if a user visits (via a direct link) example.com/param1/param2/, that page can't register a SW for example.com/param3/param4/ even though they are all served by example.com/index.php. This seems wrong.
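To make that concrete, a hedged sketch of the failing case (the script path and the rejection behaviour are assumptions, following the example above):

```js
// Visited via a direct link: https://example.com/param1/param2/
// Everything is really served by example.com/index.php, but under a
// page-path-based restriction this registration would be refused because
// the requested scope isn't "under" the current page's path:
navigator.serviceWorker.register('/sw.js', { scope: '/param3/param4/*' })
  .catch(err => console.log('registration rejected:', err));
```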
If we are really really worried about this, we can allow the website to opt-in to this behavior via some HTTP header based mechanism. But, I think, that should be solved as part of the broader question of how/what CSP policies would allow a security team to reason about and control service worker registrations.
I agree, both about path restrictions being onerous and about them presenting a strange model that doesn't necessarily match any other restriction we impose anywhere in the platform.
@jakearchibald suggested a clever middle ground for winnowing down the set of unintended service worker installations: require a valid script MIME type. Combined with a header (CSP extension?) that allows us to disable/prevent SW registrations, I think this covers a great deal of the risk. In the case of a site like googleusercontent or github.io, they should blanket-send the SW-disabling CSP rule. On less multi-user origins, the MIME-type restriction can prevent transient content (a comment stream or a JSON/image document) from being abused to create longer-term pwnage of a site that legitimately wants to enable Service Workers but might worry about how they might be used to pry open the window of vulnerability.
WDYT?
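As a sketch of the MIME-type half of that proposal (not spec text; the allowed-type list and the check itself are assumptions), a UA-side check of the fetched SW script might look like:

```js
// Hypothetical UA-side check: refuse to register unless the candidate
// service worker script was served with a JavaScript MIME type.
function checkServiceWorkerMimeType(response) {
  const allowed = ['text/javascript', 'application/javascript', 'application/x-javascript'];
  const contentType = (response.headers.get('Content-Type') || '')
    .split(';')[0].trim().toLowerCase();
  if (!allowed.includes(contentType)) {
    throw new TypeError(`Refusing service worker registration: MIME type "${contentType}"`);
  }
}
```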
sounds good to me.
Heck, regardless of whether we believe this is sufficient for fixing this issue (#253), I believe the spec should insist on the right MIME type for a SW script. The origin restriction doesn't amount to much without it.
Appcache may limit the scope of FALLBACK by the location of the manifest: https://www.w3.org/Bugs/Public/show_bug.cgi?id=25699
@jakearchibald Seems like the same argument against a path-based mechanism that applied to SW applies to Appcache too.
We shipped path-based restrictions
Pace the discussion in #224, there is renewed interest in figuring out some way to make it less onerous for sites that host many users' content on a single origin to avoid URL-space bun-fights and malicious "takeover" using SWs.
At some level this is theater. Many other resources can be poisoned. We don't have any concept of "sub-origin" today and the SW design is the wrong place to construct such a thing.
So, noting the above, is it still meaningful to restrict scopes? E.g., the restriction might be that a visit to https://example.com/app/index.html would only allow registration for https://example.com/app/* but not https://example.com/* or https://example.com/otherstuff/*.

This restriction would be in addition to the other restrictions currently placed on SW registrations, namely: SSL-only, same-origin-hosted SW scripts, and a 24-hour max cache lifetime before update pings are sent to the server. It would likewise be additive to proposed mitigations against compromise (e.g., #224).
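In code, the proposed restriction would look roughly like this (the sw.js path is illustrative; the scope syntax follows the wildcard style used in this issue):

```js
// From a page at https://example.com/app/index.html:

// Within the proposed limit: scope confined to /app/*
navigator.serviceWorker.register('/app/sw.js', { scope: '/app/*' });

// Outside the proposed limit: these would be rejected
navigator.serviceWorker.register('/app/sw.js', { scope: '/*' });
navigator.serviceWorker.register('/app/sw.js', { scope: '/otherstuff/*' });
```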
From the spec side, this has some issues: we'd have to change the meaning of the default scope from "*" to "the current path minus any terminal location + '/*'".
"/cc @phuu @jakearchibald @devd @sicking @shinypb @mikewest @annevk @abarth