Also, there's a discussion going on in the /r/webvr sub-Reddit. Hopefully, we can consolidate everything into one place (preferably here).
Since Chrome 47 (Dec 2015), the `navigator.getUserMedia` API can no longer be called from an insecure origin (e.g., http://example.com) but only from a secure-context origin (e.g., https://example.com/).
Besides the set-up cost of running your own personal server (and potentially being careful about which name servers and CAs you trust) or a shared server, yes, you sacrifice some peace of mind and principles by choosing who is in your circle of trust. Still, it's clear that users benefit indisputably from better security, protection, and peace of mind. Florian, Jonathan, James, et al. have outlined faults of TLS and its developer ergonomics.
For those of you who maintain your own servers: how are you ensuring that your infrastructure and third-party libs are secure and properly (and quickly) patched? IMVHO, it's a massive liability not to defer this to folks who do it for their customers for a living. Otherwise, we're looking at custom server maintenance, security, deployment, troubleshooting, logging, version control, file management, tooling, etc.
FWIW, the discussion (especially from and within the Chromium and Firefox teams) has been an ongoing one for several years now.
The API landscape today is different than it was in the '90s and '00s. Because of the grave dangers of a successful MITM attack or packet sniffing, Service Workers, Geolocation, Push, Camera, Microphone, Bluetooth, etc. can totally own a non-HTTPS site. Which raises the question: what alternatives besides TLS do folks recommend? It's easy to think of HTTPS as security theatre on the Web, but we're comparing HTTP vs. HTTPS, not HTTPS on its own.
The policy browser vendors have committed to adopting (mostly in response to the threat model of Service Workers, as mentioned above) is that any "powerful" APIs introduced to the browser ought to be restricted by default to a secure origin on the Internet. (Again, this excludes http://localhost:*, http://127.*.*.*:*, etc.)
I understand that folks want to deploy to any server (free or otherwise). I also understand that folks don't like the idea of HTTPS being forced at all today (perhaps the timing or rationale does not seem substantiated to developers today).
I'm a web developer of 20 years, not a platform dev - and definitely not a corporate drone at a browser vendor ;) (Bear in mind the following is not necessarily representative of Mozilla Security's position; please be patient, as this discussion wrt WebVR hadn't been given much thought until this week. We are committed and listening.)
Here are a few recommendations of mine:
- `<iframe allowvr>`, to allow parent windows to enable particular `<iframe>`s to (automatically) enter/exit VR as they see fit (see filed issue #25); a one-line sketch follows this list.
- `webvr`: once `webvr` is added to the Permission Registry enum, we can start using the Permissions API.
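As a sketch of the first recommendation (hypothetical markup; `allowvr` was only a proposed attribute at the time):

```html
<!-- Hypothetical: a parent page opting a cross-origin iframe into VR.
     The allowvr attribute was a proposal (issue #25), not a shipped feature. -->
<iframe src="https://example.com/vr-scene/" allowvr></iframe>
```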
Read some proposed sample (pseudo-)code I wrote here in this other issue.
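In the same spirit, here's a minimal sketch of what querying such a permission might look like, assuming a hypothetical `webvr` entry in the Permission Registry (the `navigator.permissions.query()` call is real; the `'webvr'` name is not a registered permission):

```js
// Hypothetical: 'webvr' is not (yet) a registered permission name.
navigator.permissions.query({ name: 'webvr' }).then(status => {
  if (status.state === 'granted') {
    // The user has already granted VR access; enter VR without a prompt.
  } else if (status.state === 'prompt') {
    // Calling into WebVR will surface a permission prompt to the user.
  } else {
    // 'denied': fall back to a non-VR (e.g., magic-window) experience.
  }
});
```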
(While the Permissions API is a relatively new API, it is in fact available in both Firefox 46 [April 2016] and Chrome 45 [September 2015].)

Disruptive, disrespectful antics have no place here. If you want to engage in that conversation, I kindly ask that you restrain yourself a bit and channel your frustration into talking about solutions. Let me squash any fears some of you may have: we are all on the same team. The browser-platform stewards of WebVR exist and are working extremely hard to future-proof this platform in the most democratic, exciting, low-friction way possible. (Remember, this is something that native VR is only now starting to think about.)
All I ask is spare the conspiracy theories, personal attacks, and dystopian narratives.
Back to the actual "deployment" issues at hand…
If you're not using a free, secure hosting solution such as https://pages.github.com, https://surge.sh, https://cloudflare.com, https://www.squarespace.com/, or https://www.dropbox.com/, which method are you using instead? And which deployment method do you expect or hope Web(VR) developers to use to deploy web sites today?
As I mentioned a few emails ago, if you are looking to dev with HTTPS and you want to test your WebVR scene/page by using its local address (e.g., https://10.0.1.6:9000 – not http://localhost:*, http://127.*.*.*:*, etc.), here's an easy way I do it, using a self-signed cert with only one npm script command in my package.json (you could even automate this with Bash, ZSH, etc. too):
https://github.com/cvan/webvr-holodeck#local-development https://github.com/cvan/webvr-holodeck/blob/6581527/package.json#L7-L9
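For a rough idea of what that looks like without digging into the repo, here's a hedged sketch along the same lines (the filenames, CN, and port are arbitrary choices for this example, not the linked repo's exact script):

```sh
# Generate a throwaway self-signed cert (openssl ships with most systems).
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout key.pem -out cert.pem -subj "/CN=localhost"

# Serve the current directory over HTTPS with the http-server npm package.
npx http-server -S -C cert.pem -K key.pem -p 9000
```

You'll still have to click through the browser's self-signed-cert warning the first time, but after that the page is served over HTTPS.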
Excited to hear what y'all have to share.
I'd love to throw in my two cents on this from the perspective of someone who guides new developers into emerging tech like VR — adding a requirement for HTTPS can be a major stumbling block that prevents developers from giving WebVR a go. While developer ease of use (and/or beginner laziness to some extent) is not really a reason to keep a system insecure, I don't think that forcing HTTPS is necessary for WebVR.
The `navigator.getUserMedia` change has resulted in a lot of developers messaging me, confused and believing that the web just couldn't fulfill the promises that native apps could, when their beginner apps would result in black screens and such due to being hosted over non-HTTPS. That change, however, made sense to me from a security perspective, as the data from `navigator.getUserMedia` can be pretty sensitive. I was able to guide developers and help them get their work running, but I worry about how many might have just given up and moved on.
WebVR, on the other hand, is not necessarily going to involve any sensitive data or functionality that would require HTTPS, so this seems like an unnecessary stumbling block. Many developers start out with a basic http://localhost setup, or a free host that does not provide HTTPS. It seems a bit unnecessary to require them to find a new server setup to tinker with HTTPS early on.
Personally, I'd be able to develop for WebVR with this limitation, as I've got Let's Encrypt running on my public facing server for Dev Diner and could potentially set up HTTPS on my localhost (thanks @cvan for your npm script, that could actually come incredibly handy in future even for non-WebVR builds!)
I'd really prefer to keep WebVR away from the HTTPS requirement. It'd be fine to have it as a recommended best practice, but I do not see the reasoning behind why HTTPS is a must-have for WebVR. With Chrome WebVR already requiring this, developers will need to implement it anyway. However, I hold out hope that maybe, if it is not standard within WebVR's spec, the Chrome team might change their mind ;)
My main question would be — what is the functionality within WebVR which is bringing about the concern and subsequent requirement for HTTPS? Knowing that might clarify the developer community position if there are clear consequences for not having HTTPS as a requirement.
> My main question would be — what is the functionality within WebVR which is bringing about the concern and subsequent requirement for HTTPS?
Putting aside all the general reasoning about tampering and surveillance, here are two reasons why it should only be available over HTTPS:
Also, this has been said a bunch of times but is worth reiterating: local development does not and will not require HTTPS. `localhost` is marked as a secure origin by both Chrome and Firefox, and Chrome has a flag that can be used to mark an arbitrary origin as secure for development purposes.
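For reference, a sketch of invoking that Chrome flag from the command line (the origin and profile directory are example values; the flag only takes effect alongside a dedicated `--user-data-dir`):

```sh
# Launch Chrome treating an arbitrary HTTP origin as secure, for dev only.
chrome --user-data-dir=/tmp/webvr-dev \
  --unsafely-treat-insecure-origin-as-secure="http://192.168.1.2:8000"
```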
(Updated after thinking about it all further... I'm once again not pro-HTTPS!)
Those were all pretty fair reasons. In that case, it made sense to require HTTPS for long-term safety and security. I saw the value in that... but does it need to be a hard and fast requirement?
Security-wise, we are likely headed towards a mostly HTTPS web (for good reasons), but should WebVR be forcing things along like that? Is it a requirement that is specific to VR — or something that should be a best-practice standard defined elsewhere? We can still visibly show people that their connections are not secure in WebVR, without requiring that every single instance of WebVR be HTTPS-only. That would be especially useful if we clearly tell people that their VR headset's sensors/cameras could be tracked or intercepted.
VR is going to be actively using sensors and cameras to detect your position and so on, that could potentially be vulnerable to snooping if vulnerabilities are found and taken advantage of. That could get scary, especially if down the track we're using onboard cameras that we wear all the time that switch to VR only temporarily (in that case, you could theoretically see everything someone was seeing). However, despite all that, if developers have instances where they know they do not need HTTPS for the use case they're building for, it makes more sense to leave it open to them.
If we go down the route of HTTPS only, we need to ensure people are taught how to deploy over HTTPS in a variety of ways within their WebVR learning.
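For example, one of the paths worth teaching is Let's Encrypt; here's a hedged sketch using its official certbot client (the domain and web root are placeholders):

```sh
# Obtain a free certificate for an existing site served from /var/www/mysite.
sudo certbot certonly --webroot -w /var/www/mysite -d mysite.example
```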
Okay, I've edited my position after a lot more pondering... the more I think about this, the more it makes sense to leave this as an option rather than making it a requirement. We can still clearly show that it isn't secure so users have the option of switching to HTTPS or leaving the site.
Overall, the web is moving towards HTTPS and that's fantastic :) WebVR can travel along with the trend and move to HTTPS at the same rate — developers can choose when to adopt it and when it isn't required. That'd be ideal.
My only hesitation with not going HTTPS only is the potential dangers of open vulnerabilities (like those mentioned for WebGL). My level of knowledge here makes it harder to truly judge the ramifications. Would it be possible to set HTTPS as a requirement for specific implementations which are more troublesome? Or would they all have a similar level of vulnerability? (e.g. does connecting to a Vive's hardware introduce more vulnerabilities than a Google Cardboard style smartphone VR connection? Would it make less sense to require HTTPS on a smartphone's WebVR experience, while making lots of sense to require it for external VR headset connections?)
> However, despite all that, if developers have instances where they know they do not need HTTPS for the use case they're building for, it makes more sense to leave it open to them.
Except it isn't the developers who are at risk -- it's their users. So it only makes sense to allow users to accept the risk of using WebVR over HTTP. By allowing a command line flag, Chrome allows power users -- the only kind of user who is even capable of evaluating that kind of risk -- to accept the risk if they want to. Yes, that means that very few people will have the knowledge and ability to use WebVR over plain HTTP, but that's proportionate to the dangers involved.
I understand it doesn't feel good to have things forced on you, and that learning new things is annoying, but I think people are seriously overstating how much of a barrier HTTPS will be to adoption of WebVR. If you want to work on something in development, http://localhost or a free https://whatever.herokuapp.com domain will do you just fine. It is difficult to imagine that getting a free or cheap TLS certificate is going to be the barrier that stops developers from moving into production.
Finally, consider the opportunity cost here. If WebVR allows HTTP out of the gate, it's going to be very difficult to reel that back in and force HTTPS later and risk breaking existing stuff. Requiring HTTPS for new features is a necessary way to avoid incurring more technical debt, so that the only legacy features browsers have to worry about corralling towards HTTPS are the ones that exist today. If the web is going to move to HTTPS by default, constraining new features is a practical necessity.
Please stop talking about Heroku, AWS, and today's online services when considering standards. It is the same as trying to define HTTP by referring to GeoCities, Myspace, and other long-gone and forgotten services. WebVR will be around for 10 years; even Google may have disappeared by then, and TLS may no longer be used. The flaw driving this push towards HTTPS is at the network layer, and it needs to be managed in the same way mobile phones grant apps access to the camera, location, etc. Forcing HTTPS onto a "powerful" feature is piecemeal and doesn't achieve security, as multiple files (e.g., images via WebGL) are still available over HTTP. This proposal also ignores the numerous large private networks which are completely secure at the network layer, and therefore do not need (or want) imposed security which prevents better approaches like monitoring network traffic for unapproved transfers.
This proposal is based on three old, fading ideas: "the cloud" (dubious merit), "TLS" (easily compromised by governments), and "browser security" (servers can be hacked). The entire exchange has been pro-TLS people pushing for what they want with no suggested alternatives, and they're going to implement it regardless of how many people object. The standard answers have been "it's free!", "use Heroku!", "Let's Encrypt!" - but all this ties the browser to services that are here today and possibly gone tomorrow.
Alternatives need to be offered, possible options need to be listed, choices need to be made as a community... not Google and Mozilla deciding and then the world obeying.
> Forcing HTTPS onto a "powerful" feature is piecemeal and doesn't achieve security, as multiple files (e.g., images via WebGL) are still available over HTTP.
That's a great example of why it's important to stop allowing more HTTP connections -- browsers are still trying to rein in mixed content, and are having to come up with all sort of novel approaches to make that process as graceful as possible (upgrade-insecure-requests, HSTS priming, CSP reporting...).
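For context, `upgrade-insecure-requests` mentioned above is a real, one-line CSP directive a page can opt into:

```html
<!-- Ask the browser to upgrade this page's HTTP subresource requests
     (images, scripts, etc.) to HTTPS before fetching them. -->
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">
```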
> This proposal also ignores the numerous large private networks which are completely secure at the network layer, and therefore do not need (or want) imposed security which prevents better approaches like monitoring network traffic for unapproved transfers.
The idea of large private networks that are completely secure at the network layer -- relying on the secure perimeter -- is an outdated concept. Endpoints should behave as if the network is malicious, even private networks.
> This proposal is based on three old, fading ideas: "the cloud" (dubious merit), "TLS" (easily compromised by governments), and "browser security" (servers can be hacked).
The cloud seems to be doing fine.
TLS, properly implemented, is not easily compromised by governments. Even China's Great Cannon relied on unencrypted traffic, and China tends to block HTTPS services they don't like (like Google), rather than spend the considerable political risk involved in compromising a CA to do it. Chrome and Firefox's implementation of key pinning lets site owners significantly raise the bar to compromising TLS -- but even without key pinning, compromising TLS tends to be visible and costly. In addition, government actors are not the only threats out there -- protecting users from local (e.g. coffee shop) attackers, and ISPs that are either malicious or just have business models poorly aligned with consumers, is also extremely important. Even backbone ISPs engage in traffic attacks on unencrypted traffic.
That servers can be hacked doesn't mean network security is unimportant. Hacking a server, like compromising a TLS connection, takes targeted effort. Attacking unencrypted network traffic can be done cheaply, undetectably, and in bulk. That's a corrosive threat to the web and to its users, and that's what browsers are responding to.
> Alternatives need to be offered, possible options need to be listed, choices need to be made as a community... not Google and Mozilla deciding and then the world obeying.
Google and Mozilla both listen to users -- they operate far more openly than Microsoft or Apple do, and have real pre-decisional conversations with outside users on public forums -- but they don't decide on their features and choices by referendum.
Developers have a certain vantage point, and tend to prioritize the concerns that affect themselves, which is natural and fine. Browsers need to factor those in, but browsers' concerns are global, and they need to consider their users' safety too. Right now, user safety means establishing some sort of baseline of security at the network level. Since that wasn't happening naturally, it requires making decisions that could feel surprising and unnatural -- but this is temporary, and the ecosystem is adapting to make HTTPS as easy as it has to be to support having it as a baseline.
This has been resolved for a while, so closing. For the record, on Chrome's side (and I think other browsers are doing similar things), we're adding a persistent overlay to the presented scene that indicates that the page is not secure. This doesn't prevent WebVR's use wholesale, but it strongly encourages developers to use HTTPS for a better user experience.
Not sure where else to ask. I'm totally for this HTTPS requirement, but how can I bypass it for dev?
Note: this is no longer true:

> For the record, on Chrome's side (and I think other browsers are doing similar things), we're adding a persistent overlay to the presented scene that indicates that the page is not secure.

WebXR is fully blocked in both Firefox and Chrome if the page is not served over HTTPS.
I'm on a laptop at a cafe trying to dev WebXR. I have an Android device. I serve the page from my laptop and try to connect at http://192.168.1.123:8080, and of course the Android device says "no https, sucks to be you".
As a dev, is there a way to turn off the check in the browser? AFAICT, getting HTTPS to work is a huge pita. The fact that localhost works is useless; I'm not serving the page on the Android device. Setting up public certs sounds really painful: it requires me to buy a domain, since Let's Encrypt can only make certs for domains it can verify I control. Do I really need to pay for a public domain just to do local dev? Worse, how would I get the Android device to see mydomain.com as pointing to https://192.168.some.internet.cafes.ip.address? Every time I move cafes, I'd have to update DNS records or edit /system/etc/hosts on my Android device, which is a huge pain in the ass.
Otherwise, I have to figure out how to make private certs and install them on the devices. That's also a huge pita. Is there a simple solution, or is WebXR basically super hostile to develop for?
Note: I looked into ngrok, but it has two issues. (1) It requires an internet connection, something I don't always have. (2) It requires data to make a round trip from my computer to ngrok's servers and back to my computer. For a small webpage that's probably fine, but for a WebXR app that might download 10-100 meg of data, that sucks - especially since, while doing dev, I probably have the browser's cache turned off, which means every single time I run the page, 200 meg of data is sent over the net and I get to wait several seconds for it to download again. On a slow connection like at a coffee shop, it's intolerable.
It seems like some other solutions are needed.
I usually use port forwarding via Chrome's USB remote inspector: https://developers.google.com/web/tools/chrome-devtools/remote-debugging. This also works without an Internet connection. If you want to be untethered, you can use `adb tcpip` to redirect the debugging connection (including port forwarding).
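A sketch of both variants with stock adb commands (the port numbers and device IP are examples):

```sh
# Tethered: reverse-forward the phone's localhost:8080 to the laptop's
# localhost:8080, so http://localhost:8080 on the phone is a secure origin.
adb reverse tcp:8080 tcp:8080

# Untethered: switch adb to TCP/IP mode while plugged in, then reconnect
# over Wi-Fi and set up the same reverse forward.
adb tcpip 5555
adb connect 192.168.1.50:5555
adb reverse tcp:8080 tcp:8080
```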
Search for the setting Insecure origins treated as secure in chrome://flags/
It provides a text box where you can specify origins that should be treated as secure for development/testing purposes.
> Search for the setting Insecure origins treated as secure in chrome://flags/
I'm not sure how that helps. I tried adding http://192.168.1.43, which is the address of my laptop. I added it in Chrome on Android, but that failed. Apparently I would need a domain name? Which means I either need to edit /system/etc/hosts on my Android device or else set up a DNS server somewhere and point my Android device to it? Or am I missing how this is supposed to work?
The port forward method @klausw posted worked.
Any known solutions for Firefox?
I usually use a dev server that has HTTPS support and will generate a self-signed cert for me. Both webpack-dev-server and browser-sync have a simple command-line flag for this. You can also use the `selfsigned` node package to generate a cert that will work with most other node.js servers.
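A minimal sketch of that last option, using the `selfsigned` package's API with Node's built-in `https` module (the attribute values and port are arbitrary choices):

```js
const https = require('https');
const selfsigned = require('selfsigned');

// Generate a short-lived self-signed certificate in memory.
const pems = selfsigned.generate(
  [{ name: 'commonName', value: 'localhost' }],
  { days: 30 }
);

// Serve a trivial response over HTTPS using the generated key/cert.
https
  .createServer({ key: pems.private, cert: pems.cert }, (req, res) => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello over HTTPS\n');
  })
  .listen(9000, () => console.log('Listening on https://localhost:9000'));
```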
Firefox on Android does not have native support for WebVR or WebXR.
Thanks, everyone. I updated Servez (the app version) and servez (the command-line version) to use the selfsigned solution that @brianpeiris recommended. It seems to work for my needs.
@greggman the URL you add to chrome://flags/ needs to include the port number (if your URL has a port number, which in your example it does).
Here is a complete guide to running the WebXR samples yourself:
0) Connect your phone and computer to the same Wi-Fi network.

On your computer:

1) `git clone git@github.com:immersive-web/webxr-samples.git`
2) `cd webxr-samples`
3) `python3 -m http.server` - this will host your `webxr-samples/` directory on port 8000
4) Find your internal IP with `hostname -I` on Linux or `ipconfig getifaddr en0` on macOS; for example, mine was `192.168.1.2`.

On your Android phone:

5) Open `192.168.1.2:8000` in Chrome.
6) Copy the URL `http://192.168.1.2:8000/`.
7) Open `chrome://flags`, search for "Insecure origins treated as secure", and paste the copied URL `http://192.168.1.2:8000/` there.
8) Press the blue "Relaunch" button in the bottom right (opening and closing Chrome from your list of apps doesn't seem to work).
9) Refresh the `http://192.168.1.2:8000/` page and it should work.
You can also then connect your phone over USB to your laptop to use remote debugging in Chrome to view the JS console.
Chrome WebVR will be made available only on secure origins, so we should consider making this normative in the WebVR spec, unless someone has concerns.
The Secure Contexts spec gives practical advice on how to guard sensitive APIs with checks against secure contexts.
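On the page side, there is also a standard property for this check; a minimal sketch (`window.isSecureContext` is a real property defined alongside the Secure Contexts work):

```js
// Detect whether this page runs in a secure context before relying on
// APIs that are gated on one (Service Workers, getUserMedia, WebVR, etc.).
if (window.isSecureContext) {
  // Safe to feature-detect and use secure-context-only APIs here.
} else {
  console.warn('Not a secure context; secure-context-only APIs are unavailable.');
}
```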