Closed: tofumatt closed this issue 10 years ago
Service Workers are planned to only cache resources loaded from HTTPS URLs by default
You can cache HTTP resources (although you may get mixed content warnings, depending on how you use them), but your origin must be served over HTTPS, and your service worker script must be on that origin.
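For example (app.example is a made-up host):

```js
// Served from https://app.example/ -- the page, its origin, and the worker
// script are all same-origin over HTTPS, so this registration is allowed.
navigator.serviceWorker.register('/sw.js').then(function (registration) {
  console.log('registered with scope', registration.scope);
});
// A worker script on another origin, or a page served over plain HTTP,
// would be rejected.
```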
it means that any developer of a web app that uses them needs to buy an SSL cert
https://www.startssl.com/ offers free certs
It's particularly important in the context of Open Web Apps... Open Web Apps allow only one app to be installed per-origin
Meh, I'm not really interested in packaged apps using web technologies; ServiceWorker and other specs extending it bring a lot of this functionality to the real web.
I would like to see SW work on HTTP too, but wanting it isn't enough, we need to find a way to overcome the security issues. Eg, with ServiceWorker right now, if you connect to my café's wifi network and visit any HTTP page, I can MITM you and throw in iframes/redirects to make you request any other urls. If those urls are also HTTP, I can hijack those and install serviceworkers. I now own those origins, even once you're back on "safe" wifi.
HTTPS prevents all that.
Some of the above is possible with appcache already, which is pretty scary, and it's not a problem we want to make worse.
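To make the café-wifi attack concrete, the injected frame only needs to run something like this (a sketch; victim-site.example is made up):

```js
// Script the MITM serves from http://victim-site.example/ inside a hidden
// iframe it injected into whatever HTTP page you were actually reading.
navigator.serviceWorker.register('/evil-sw.js', { scope: '/' }).then(function () {
  // evil-sw.js now sits in front of every request to victim-site.example,
  // and keeps doing so even after you leave the hostile network.
});
```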
At the moment, the only way I can see it working is:

- Treat any non-200 response as an immediate SW unregistration

This means a captive portal (which tends to redirect HTTP requests) will unregister your HTTP serviceworker. We're not really left with anything reliable or useful :(
with ServiceWorker right now, if you connect to my café's wifi network and visit any HTTP page, I can MITM you and throw in iframes/redirects to make you request any other urls. If those urls are also HTTP, I can hijack those and install serviceworkers. I now own those origins, even once you're back on "safe" wifi.
Isn't that also true of AppCache?
If so, doesn't that argument fall flat unless we require the same thing for AppCache (and change the spec and implementations accordingly).
Edit: @jakearchibald, you kind of address that in the above comment. Sorry for the noise.
Yeah, it's true of Appcache, and though I'm all for building a better, more secure web, this again limits the ability of a user to, for instance, use a free and quite scalable service like GitHub Pages to host their app. If we can load HTTP resources from an HTTPS page (even with a warning), why can't we simply have an HTTP origin with a warning?
I'm not talking about "packaged" apps in the strictest sense (which frankly I'm not interested in either) -- the web apps I'm interested in load as web apps in a tab, an iOS Homescreen app, or with a web app manifest. I am concerned about the last part, but I concede it's a point that's relevant only to Mozilla and Firefox Today™.
Treat any non-200 response as an immediate SW unregistration

This worries me, simply because it's the state of appcache today, and is a pretty major bummer.
The scenario of the bad wifi is a really great example, but I wonder if some developer validation of the contents would help here? For things like an image file fetched with service workers, simply unregistering that worker and saying to the user "This data looks bad, fetch it again" seems fine--even just based on the content-type.
But for registering JS files--a common use case I'd think--can the developer not attempt to run some code using it, then roll back to the previous worker (if one exists)?
This is a problem with the web, I agree. HTTP sucks. But that seems like a transport layer problem we shouldn't be trying to solve with browser technology. It effectively limits the deployment of Service Workers without actually fixing the underlying nature of the web--which still sucks. I feel like with HTTPS-only-origins + Service Workers, we now have two problems.
why can't we simply have an HTTP origin with a warning?
What would that warning be? "Everything you're looking at may be compromised by a MITM because we've allowed a ServiceWorker to be installed"?
The scenario of the bad wifi is a really great example, but I wonder if some developer validation of the contents would help here?
How would this work?
But for registering JS files--a common use case I'd think--can the developer not attempt to run some code using it, then roll back to the previous worker (if one exists)?
How can we tell the difference between the developer and the MITM?
Hello Jake, our paths cross again... cue evil laughter in the background
I am not totally sure I understand the security implications past the MITM possibility either, but I am worried that you are trying to solve things on behalf of the developer, limiting what they can do to a narrow range of options, which, as @tofumatt mentions, ends up with web apps not getting parity with native apps.
I too would vote for dropping the https requirement, and I have no clue how--that's the job of a standardista. My job as a developer is giving feedback on this API, and as it is presented so far, I find it hard to adopt. While AppCache is flawed in its own douchebaggy particular way, SW is flawed in its own awful elitist https-only club way too :-(
Fair point: no, there would be no warning, but that’s just as it is today on HTTP. Again: the web being inherently insecure is neither something Service Workers alone can fix nor something new to them. Eschewing HTTP doesn’t improve the web, it limits what users can experience on it.
Developer validation is more the developer trying to run some code, and if it’s not working right from the get-go (because the service worker downloaded a wifi HTML welcome page, say), they roll back to the last registered version. This part is me blue-skying.
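Something like this, maybe (pure sketch, resource names made up):

```js
// Inside the new worker: sanity-check a key resource during install, and
// bail out if it looks like a captive-portal page was served instead.
// Rejecting the install promise leaves the previously registered worker
// (if any) in control.
self.addEventListener('install', function (event) {
  event.waitUntil(
    fetch('/app.js').then(function (response) {
      var type = response.headers.get('Content-Type') || '';
      if (!response.ok || type.indexOf('javascript') === -1) {
        throw new Error('/app.js looks wrong -- roll back to the old worker');
      }
      return caches.open('v1').then(function (cache) {
        return cache.put('/app.js', response);
      });
    })
  );
});
```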
This does not protect against a malicious man-in-the-middle; only against things like wifi hotspot portals.
HTTP can’t protect against man-in-the-middle attacks. I know this. But that’s a flaw in the web, and further limiting the web platform because of it seems idealistic instead of practical. It severely limits the usefulness of Service Workers to users, and requires significant developer investment to use. The target audience of service workers is front-end web developers who want to develop for the web.
If HTTPS were already deployed everywhere, this would be a solid idea. But it’s not deployed everywhere, so I feel the idea isn’t solid either. In this particular usability-security trade-off, I feel usability wins. That’s my stance here.
Everyone commenting here should read #199 where the decision was made in the first place.
Secure-only is the right place to start. Once Service Workers are available in the wild and constrained to HTTPS, other groups will take that into account when deciding how easy to make HTTPS support. For example, Mozilla's likely to prioritize fixing their bug to allow multiple OWAs under a single origin.
After it has time to bake, if the decision to restrict SWs to secure transport proves to be a disaster for adoption, it'll get reconsidered. Personally, I don't expect that to happen.
My concern there is that Service Workers are many months away from usage in regular users' browsers. After what length of time would we consider HTTPS-only Service Workers to have failed?
Once they're available and developers find it's hard to use them via HTTPS-only-origins, we are stuck waiting for browsers to implement an HTTP-capable version, then, given Chrome and Firefox's release cycles, many more months for it to land in regular users' hands.
In the worst-case scenario, which I apparently see as more likely than others, that's a few releases of iOS where we don't have usable Service Workers and iOS has an NSLaunchRocketShip API.
But: I've clearly said my piece. I'm not sure I have anything new to add here--I just felt it was important to comment somewhere that I feel HTTPS-only-origins are a big hurdle for easily-deployed web apps.
I have to agree with @tofumatt and @sole here. @tofumatt put it best when he said:
In this particular usability-security trade-off, I feel usability wins.
In my view, limiting ServiceWorker to HTTPS only would be a major hit on the usability of this API. Unfortunately, I don't have the answer to @jakearchibald's question of how this would be implemented, but feel that this is an issue that we, as web developers, need to be able to mitigate ourselves rather than having the ServiceWorker API do it for us.
Most people I've heard feedback from about ServiceWorker have decried the HTTPS-only limitations of this spec, so not responding to this feedback is, in my opinion, setting ourselves up for a fall.
Once Service Workers are available in the wild and constrained to HTTPS, other groups will take that into account when deciding how easy to make HTTPS support. For example, Mozilla's likely to prioritize fixing their bug to allow multiple OWAs under a single origin.
After it has time to bake, if the decision to restrict SWs to secure transport proves to be a disaster for adoption, it'll get reconsidered. Personally, I don't expect that to happen.
These seem like very dangerous assumptions to make given that some vendors are hardly quick to implement these kinds of changes. It doesn't seem right that we're relying on vendors/other groups to "make HTTPS support easy" when there are no guarantees that they will do that. If we get this wrong now, how long will we have to wait until we have an improved version everywhere?
So @yoavweiss initially made a proposal here (http://lists.w3.org/Archives/Public/public-webappsec/2014May/0012.html) about how ServiceWorker could be enabled over non-TLS connections. I replied with the following, which I post here for completeness (and at the urging of @brucelawson):
I have a concern about this orthodoxy of always putting service worker apps under TLS. Since the STRINT workshop, and also in light of the coming move to http/2, I've been talking to a lot of web developers about moving to https. I've heard a lot of concerns with this, even from large, established web sites. Developers' concerns generally fall into the following categories:
- TLS itself is a pain to administer - the logistics of the certificates, installing them, making sure they remain valid, ensuring they cover all the needed domains, keeping up to date with best practice, etc.
- https sites require beefier hardware to serve
- https sites are more difficult to load balance
- serving over https makes it much more difficult to use third party content (scripts, images, videos, ad networks, whatever) in your webapp
A head of advertising for a major UK web site told me "moving to https means we will lose money."
I'm not saying these concerns aren't addressable in the long term, but I wonder, specifically looking at service worker, and considering that adopting service worker will already mean a big learning curve for web developers, whether enforcing TLS-only for this burgeoning technology is the right approach.
In this light, I think Yoav's proposal deserves some additional consideration.
We will not support non-secure connections. Closing.
Can localhost be treated as a special case for testing purposes, and/or can exceptions be allowed when there are problems using a certificate (for example, self-signed ones)?
Can localhost be treated as a special case for testing purposes, and/or can exceptions be allowed when there are problems using a certificate (for example, self-signed ones)?
The goal is arguably "secure origins", rather than strictly HTTPS.
In that regard, the Chromium Security team has the following policy on what constitutes "secure origins" - http://www.chromium.org/Home/chromium-security/security-faq#TOC-Which-origins-are-secure- - which includes localhost (by name and IP), wss://, https://, file://, and chrome-extension:// (since extensions are signed)
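Roughly, that policy amounts to a predicate like this (an illustrative sketch, not Chromium's actual code):

```js
function isSecureOrigin(urlString) {
  var url = new URL(urlString);
  if (['https:', 'wss:', 'file:', 'chrome-extension:'].indexOf(url.protocol) !== -1) {
    return true;
  }
  // localhost by name or loopback IP also counts as secure.
  return ['localhost', '127.0.0.1', '[::1]'].indexOf(url.hostname) !== -1;
}

isSecureOrigin('https://example.com/');   // true
isSecureOrigin('http://localhost:8080/'); // true
isSecureOrigin('http://example.com/');    // false
```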
Self-signed certificates - and allowing their interaction with SW - is something that UAs will have to consider in light of whatever existing treatment and policies they apply. Chromium, for example, will not allow any resources to be cached when a user-bypassed certificate error has occurred. One would presume this policy would apply to Service Workers as well - prohibiting their installation or update.
That is very unfortunate. It is insufficient to point at a single company (startssl) selling individual certs to mitigate the significant barrier to entry this imposes on developers. Keep in mind that if you run more than one site (more than one app, for example), you'll need to serve them not only from different origins but also from different dedicated IP addresses. It is inherent to the TLS protocol that you have to check the cert before getting HTTP headers into the server's hands, so you can't run more than one site on the same IP and cert.
I will respect the decision if it remains HTTPS-only, but know that you're building a much less impactful technology for developers. I would hope that, long-term, decisions like these will result in the market treating HTTPS as a commodity (so that together with IPv6, for instance, you pay a small amount to a provider and you get a turnkey HTTPS-capable server for your projects).
But that's a marathon to run, and I am not convinced that this particular issue is making us get to the finish line faster, or if it is in fact tying our shoelaces together.
It is inherent to the TLS protocol that you have to check the cert before getting HTTP headers into the server's hands, so you can't run more than one site on the same IP and cert.
This is not true. See http://tools.ietf.org/html/rfc6066#section-3 (TLS Server Name Indication). It is not an inherent requirement of TLS that you are limited to one cert/IP pair.
See also http://en.wikipedia.org/wiki/Server_Name_Indication
You can reasonably be assured that any UA implementing Service Workers already supports TLS SNI today.
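For illustration, here's the rough shape of SNI in a Node.js server (hostnames and cert paths are made up): one IP, one listener, different certs picked per requested hostname.

```js
var https = require('https');
var tls = require('tls');
var fs = require('fs');

var server = https.createServer({
  // Called with the hostname the client sent in its TLS ClientHello;
  // we pick the matching cert before any HTTP bytes are exchanged.
  SNICallback: function (servername, cb) {
    var dir = servername === 'app-two.example' ? 'app-two' : 'app-one';
    cb(null, tls.createSecureContext({
      key: fs.readFileSync('/etc/ssl/' + dir + '/key.pem'),
      cert: fs.readFileSync('/etc/ssl/' + dir + '/cert.pem'),
    }));
  },
}, function (req, res) {
  res.end('hello over TLS\n');
});

server.listen(443);
```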
Very well. I stand corrected, thank you. I suppose I'll have to see whether I can use startssl for multiple domains on the same server w/ SNI.
It doesn't deeply affect my central point though -- I am doubtful that the added barrier to entry will, in the short term, result in an uptick of SSL on the "offline-capable" Web, and would not be surprised to instead see limited adoption of serviceworkers.
I think you've misunderstood the goal here.
If allowed over HTTP, Service Workers would open significant security vulnerabilities. These vulnerabilities would affect not just the sites that opt in to Service Workers, but ALL sites. Decisions with such far-reaching impact MUST be secure by default.
Conceptually, one can imagine the negative reaction if some new Web Platform feature allowed for uXSS. No matter how enticing that feature would be for authors, the risk it poses to all users is too great to ever support.
Likewise, with Service Workers, the requirement of HTTPS exists to mitigate the significant risks posed by the insecure web - in particular, a level of permanence and persistence that could extend well beyond what one might 'reasonably' expect.
Unfortunately, such concerns are not academic - we see them regularly being exploited, for fun and profit. From bemused "script kiddies" running Firesheep, to ISPs wanting to inject and rewrite content on sites to leverage additional revenue for the ISP (by injecting ads or marketing their users' private information), to governments looking to persistently track "persons of interest".
TL;DR: secure connections are the future and they make it simpler to reason about Service Workers. They are better for users (which is enough reason to enforce the restriction) and developers (thanks to the painful, hard-to-work-with mitigations we don't have to add to the spec).
The long version: it's worse than even what @sleevi outlines: if we enable SW over HTTP we now have to add a series of mitigations, not only the hash checking, and those mitigations will all impose developer burdens. E.g., if we go with SW over HTTP, we need to start treating all non-200 responses as potential threats, perhaps indicating that a bogus SW was previously installed and should now be removed. This dramatically complicates multi-homed rollouts of new versions of software. This is just one example. There are many others.
Further, Service Workers are an integration point for a series of new persistent capabilities which also shouldn't be available without some assurance to users about the actors actually using them; e.g., Background Sync: https://github.com/slightlyoff/BackgroundSync
We'd end up with much SW usage being SSL-gated regardless, and in a world where SW's aren't gated on secure connections, we'd then have to explain which APIs are available in which mode. With SW's gated on secure connections we only need to gate one feature, not many. It's simpler to think about, which is good for everyone.
Just to pile on, FWIW - in the HTTP/2 world, we've gone back and forth on whether to require HTTPS to use the new protocol, etc. and there are good arguments on both sides (whatever @sleevi says :)
This is not that situation. SW is a hugely powerful mechanism - effectively, it's a fully scriptable proxy / cache baked right into your browser, and once an attacker has access to it, they can basically do anything.
So, personally I very much agree with the stance that @slightlyoff and @sleevi are taking here. This is not just HTTPS-boosterism.
Thanks for your info @sleevi, I didn't know Chrome's requirements :-)
@sleevi, @slightlyoff et al, thank you for taking the time to explain. I am still not happy about the barrier to entry, conceptually, but I accept that, to enable a technology as powerful as SW, required TLS is the price (in $$ and effort) we (as a webdev community) have to pay to get there. I certainly never meant to suggest users are better served by an insecure Web; instead it was (and is) the barrier to entry that worries me. Alas, I'd rather climb that barrier than have no SW at all.
A quick look through the spec and jakearchibald's cool blog post tells me that Service Workers are planned to only cache resources loaded from HTTPS URLs by default. This worries me, as it means that any developer of a web app that uses them needs to buy an SSL cert (and likely run their own server or pay a hosting provider), even if they're running a web app that runs only client-side code.
I understand this can be overridden locally for development (which is awesome to see), but as the developer of apps like a client-side Foursquare app or an HTML5 podcasts app I'd love to see support for HTTP URLs. These apps are entirely client-side and simply load themselves into appcache, but I would like to be able to use service workers for said tasks in the future. Without HTTP support, I can't.
It's particularly important in the context of Open Web Apps. While Jake's post gave the nice example that services like github.io offer HTTPS support, this is only true for user subdomains like sole.github.io -- not CNAME domains. This matters because Open Web Apps allow only one app to be installed per-origin, so developers of multiple apps need to use CNAMEs to host multiple apps (not to mention the likelihood that they'd want their own domain).
Forcing developers to have a domain per app already means they need at least one domain of their own, but forcing them to have an SSL cert per app gets quite costly and hard to set up -- in addition to making excellent deployment platforms for web apps like GitHub Pages useless.
I'd really like to see HTTP support on by default; without it, I think the usefulness of Service Workers is severely limited. I do understand the security repercussions, but keeping web APIs locked down to appease security concerns may also lead to them forever lacking parity with "native" APIs.