glowing-bear / glowing-bear

A web client for WeeChat
https://www.glowing-bear.org
GNU General Public License v3.0

Please enable redirect to https and hsts #962

Open ehuggett opened 7 years ago

ehuggett commented 7 years ago

Hi,

It appears (www.)glowing-bear.org points to Cloudflare and https is enabled, so it would be really great if insecure requests could be redirected to https and an HSTS (HTTP Strict Transport Security) policy was added, as Cloudflare supports both.

(I don't use Cloudflare, but the following might help with both:) https://blog.cloudflare.com/how-to-make-your-site-https-only/ https://blog.cloudflare.com/enforce-web-policy-with-hypertext-strict-transport-security-hsts/

The first link also outlines how to ensure that the requests Cloudflare makes to the origin (GitHub, in this case?) to retrieve the content it caches and serves are only made over https, which is also really important.

N.B. This is a change that CANNOT be (easily) undone, as browsers will cache the HSTS policy until it expires and, while it is valid, will REFUSE to connect insecurely, silently upgrading all requests to https even if the user explicitly types http://
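
To make this concrete, the request is essentially for the following two behaviours: an http-to-https redirect, plus an HSTS header on https responses (the status code, URL and max-age here are just placeholders):

HTTP/1.1 301 Moved Permanently
Location: https://www.glowing-bear.org/

Strict-Transport-Security: max-age=31536000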

jornane commented 7 years ago

This will prevent users from connecting to unsecured WeeChat instances; only WeeChat with TLS will be supported then. This is because a web browser will refuse to connect to an insecure websocket from a secured page.
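
For example (with example.net standing in for a relay host; port and path are only placeholders):

// On a page loaded from https://www.glowing-bear.org/ the browser treats a
// plain ws:// connection as mixed content and blocks it (the constructor
// throws or the connection fails, depending on the browser):
var insecure = new WebSocket("ws://example.net:9001/weechat");

// A websocket to a TLS-enabled relay is allowed:
var secure = new WebSocket("wss://example.net:9001/weechat");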

ehuggett commented 7 years ago

My apologies for an ill-conceived request; preventing the use of an insecure relay was not intentional.

But I cannot honestly say I would object to it either, given that relay access is effectively shell access (there's even a plugin for it!) and there is apparently no safe way to retrieve Glowing Bear over the internet and have it connect to an unsecured relay, even if that relay is bound to localhost (i.e. injected JavaScript could install plugins, send a reverse shell, etc.).

On the assumption that my initial request cannot be accepted due to the issue with websockets, I would propose the following.

As README.md strongly encourages the use of an encrypted relay, would it be possible to use a separate subdomain for plain http? This would allow redirection from http to https, but would let the user opt out of "secure by default" by using unsecured.glowing-bear.org from that point on (or perhaps insecure/unsafe, etc., to reinforce the message).

Starting with a very low max-age for the HSTS policy would allow it to be reversed reasonably quickly if any further unintended consequences arise. A very cautious approach would be to increase it in small but regular intervals rather than all at once: initially valid for only a few minutes, then hours, days, weeks, etc. (a lot of work for whoever manages the Cloudflare account, perhaps).

Also ensure the preload directive is not present (initially).

It would obviously not be possible to use the includeSubDomains directive either if the chosen location for an http version was a subdomain of glowing-bear.org; in fact, care must be taken to ensure it is never used while that version is required.
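
In other words, the header could be ramped up over time, along these lines (the values are purely illustrative, and none of the steps carry includeSubDomains or preload):

Strict-Transport-Security: max-age=300
Strict-Transport-Security: max-age=3600
Strict-Transport-Security: max-age=86400
Strict-Transport-Security: max-age=604800
Strict-Transport-Security: max-age=31536000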

lorenzhs commented 7 years ago

Hi @ehuggett,

I very much agree with your sentiment, and I'm as bummed out about the current situation as you are. As noted, some people use Glowing Bear on their local networks, and not only is TLS unnecessary in one's home network, it's also a tremendous hassle to set up. That said, we should definitely push more people onto secure relays.

Your comment on injected scripts is a different attack. Active manipulation of http connections is much more involved than simply recording the password in a relay connection. That makes it a bit of a false equivalence. Of course it would be preferable to retrieve GB over a TLS connection, but our threat model is not the NSA as much as it is the dude two tables over at Starbucks.

We've thought about using a different subdomain before. I would suggest "local.glowing-bear.org" that only accepts IP (v4/v6) addresses but not hostnames. If someone puts in their server's IP then so be it, you can lead a horse to water but you can't make it drink.

HSTS can come at a later stage when we're sure that things work. Deploying https and HSTS at the same time sounds like a recipe for disaster.

ehuggett commented 7 years ago

[sorry, I lost track of time while writing this response. a little too in-depth perhaps]

Active manipulation of http connections is much more involved than simply recording the password in a relay connection. That makes it a bit of a false equivalence.

That's not really fair? Two likely methods for observing traffic flows on home/public networks:

1) Wireless sniffing: Assuming a WPA2-PSK WLAN and knowledge of the PSK (password), you place a wireless interface into monitor mode and start capturing packets, then either force clients to reconnect to the AP or wait for them to do so naturally. As each one reconnects you can observe the handshake that establishes the session key used to encrypt that client's (and only that client's) connection to the AP, and once you have obtained this key it can be used to decrypt the captured traffic.

2) ARP poisoning: Assuming knowledge of the PSK, connect to the AP (or plug a cable in). Once connected to the network, simple graphical tools can continuously send gratuitous ARP packets informing every host on the subnet that every other host on the subnet is 'at' your MAC address. End result: every connection from any host to any other host (including internet traffic to/from the router) is redirected through your computer.

Method 1 can be completely passive, but it can only observe the WLAN traffic.

Method 2 can be used to capture traffic between any two hosts on a LAN/WLAN; normally only the router's IP address is of interest. This method puts you in a perfect position to perform "active manipulation of http connections" with other easy-to-use graphical tools... which brings us to the next topic.

our threat model is not the NSA as much as it is the dude two tables over at Starbucks.

You brought them up first 😄 (is there a variation of Godwin's law for this?). My suggestions are only suitable to counter the kind of threat "the guy at Starbucks" can present (threats from the local network), but stop short of the level where someone unauthorised is capable of obtaining a certificate for glowing-bear.org from a widely trusted certificate authority.

The CA system is of course not perfect, but it's the best we currently have. Further policies such as HPKP can reduce those kinds of risks, but they are also much "riskier" to use: a mistake at any point in time could make the entire domain totally unusable until the cached policy expires if it's not set up AND maintained very carefully (unlike HSTS, which "only" forces you to use https / makes the site unusable if you require http on the apex or the subdomains the policy covers).

If a GB user thinks they might reasonably require these kinds of risks to be mitigated, then I suspect their "threat model" should not allow them to trust the server they are fetching the 'securer' client from anyway (so they should host it themselves, or simply use the console via ssh, etc.).

HSTS is mainly an effective measure to prevent the user/developer/etc from making the mistake of specifying http when https is available. It would also stop my browser from offering the http url as a suggestion for auto-completion in the address bar when I type "glow" (I deleted the http url from the history of course, but it will probably happen to me again at least while I'm involved with this issue)

We've thought about using a different subdomain before. I would suggest "local.glowing-bear.org" that only accepts IP (v4/v6) addresses but not hostnames. If someone puts in their server's IP then so be it, you can lead a horse to water but you can't make it drink.

Sounds like a good way to strongly discourage using it that way, if that's what you want to do. It would be a painful transition for users to go directly to that from the current situation.

Does it make sense to allow the "Encryption" option to be set by the user if GB is loaded over http? I would think it would nearly always be an oversight/mistake (they meant to load it over https), so despite it not doing any "technical harm" it might give them some false confidence.

HSTS can come at a later stage when we're sure that things work. Deploying https and HSTS at the same time sounds like a recipe for disaster.

Sure, I don't think it's urgent (just a best practice). I would want to test it on a separate domain if I had not used it before with Cloudflare (which I haven't, only via webserver config files).

But it should be safe enough with an initial max-age of 300 seconds, as that's the TTL of the DNS records for the GB domain (effectively an "already accepted" delay for these sorts of changes).

One last suggestion, which I would actually suggest you don't respond to in public, would be to ensure that two-factor authentication (such as TOTP or HOTP; using FreeOTP on Android would be a great start) is enabled on any account which could potentially be (ab)used to change the content served by glowing-bear.org (that would be DNS/Gandi and hosting/GitHub + Cloudflare). It would be highly embarrassing, and rather disastrous for the affected users, if someone were able to get control of any of them (as they could change the behaviour of the hosted GB, e.g. submitting all hostnames/passwords to a third party and then getting shells on all of them). You might also want to have a look at the combination of the WHOIS data and https://haveibeenpwned.com/

lorenzhs commented 7 years ago

Look, this is getting a bit off topic. I'm aware of sniffing, ARP poisoning, etc.; there's no need to discuss these things here. I'm trying to make the defaults as safe as reasonably possible without pushing people to host their own http version of GB that never receives patches. If that allows people to be stupid after disregarding advice, then so be it. Can't fix stupid. And in any case, the change should be incremental. We could add a note that "connecting to IP addresses may become unsupported at any time" and "you should really get a (sub-)domain and certificate, it's free and will make you much safer" or something. What I really want to avoid is people self-hosting and never updating because our version suddenly doesn't work for them any more.

The encryption checkbox makes sense for development: I regularly use GB from localhost to connect to my remote relay, and that's a totally valid use case. It is unnecessary in the hosted version, though. Maybe we should make a deploy branch, so that releasing means merging master and then deploy into gh-pages. That branch could contain such fixes. We already have a few (such as the one informing users of the https version), but this would make them more transparent and easier to maintain.
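
For example, the release step could look something like this (just a sketch of the workflow, using the branch names above):

git checkout gh-pages
git merge master   # pick up the latest development state
git merge deploy   # apply the hosted-version-only fixes on top
git push origin gh-pages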

Let's just say I'm a big fan of 2FA and use it (and enforce its use) wherever possible.

lorenzhs commented 7 years ago

Oh, and a sad stat: 57% of requests were TLS in the last month according to CloudFlare. That number has been slowly growing but it's still waaay too low.

Lastly, browsers are going to do the pushing to https really soon, too (marking any http page with inputs as insecure etc) so we won't be the only ones bugging users about it :) Might be taken more seriously if it's the browser vendor, too.

lorenzhs commented 7 years ago

See https://github.com/glowing-bear/glowing-bear/tree/deploy for some progress. That branch will be available for testing at https://latest.glowing-bear.org/deploy/ in a few minutes.

jornane commented 7 years ago

I think the error message the user gets when connecting to a hostname is useful. I also think that refusing to connect is too strict, and it's not user-friendly that the message only appears after the user has already entered their password.

Would you consider showing the message when the hostname field loses focus and its content is not acceptable, but allowing users to connect anyway? There are legitimate uses for an unencrypted relay with a hostname. Examples include .local addresses and public hostnames pointing to IPv6 addresses which are firewalled off from the rest of the internet.

lorenzhs commented 7 years ago

Good idea to show the message immediately (and next to the host field).

Maybe local.glowing-bear.org should start off with a warning and still connect as you suggested. But long-term, unencrypted relays will probably be a real pain. Who knows how long unencrypted websockets will be supported by browsers.
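
Purely as an illustration of the idea (not actual Glowing Bear code; the element IDs are made up):

// Warn, but don't block, when the host field loses focus and its value
// doesn't look like a local host.
var hostField = document.getElementById("host");
var warning = document.getElementById("host-warning");

hostField.addEventListener("blur", function () {
    var value = hostField.value.trim();
    // Very rough heuristic: single-label names, things that look like IP
    // literals, and a few "local" pseudo-TLDs are treated as local.
    var looksLocal = value.indexOf(".") === -1 ||
        /^[0-9a-fA-F:.\[\]]+$/.test(value) ||
        /\.(local(host)?|example|invalid|test)$/.test(value);
    warning.style.display = looksLocal ? "none" : "block";
});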

ehuggett commented 7 years ago

(context: if/when/after support for remote unencrypted relays is removed)

I can't see any way to support only DNS/host names which resolve to local addresses, as I can't find a method (that doesn't involve asking a remote service) to apply the whitelist to the result of the DNS query the browser will make. I also considered

which leaves us with

  1. Whitelist a few common hostnames, such as localhost and localhost6, but generally require an IP address to be specified
  2. Whitelist all hostnames, accepting that they might resolve to non-local addresses (having to edit the hosts file on every machine you want to use it on would be inconvenient anyway)
  3. Compile a whitelist of common "local" TLDs such as .local, accepting that they might resolve to non-local addresses
  4. Compile and maintain a blacklist of all global DNS roots, accepting that non-blacklisted TLDs may still resolve to non-local addresses (and new global TLDs would become usable if the blacklist is not updated before they go live). https://data.iana.org/TLD/tlds-alpha-by-domain.txt (the XN-- prefixed domains can also be specified by the user with non-Latin characters, i.e. XN--YGBI2AMMX == .فلسطين)
  5. Something else?

http://latest.glowing-bear.org does not appear to prevent the user from connecting to any IPv4 address specified in the common dotted-quad format, but it does not accept uncommonly formatted IPv4 addresses.

I'm trying to get familiar with the specifics of JavaScript at the moment, so I thought I might as well try to do something useful in the process. Here is a gist that (hopefully) implements 1, 2 & 3

I think it would be easy to restrict IPv4/IPv6 addresses to private ranges (using the binary values), but public addresses can of course be used on local networks with IPv4, as is common with IPv6. It's less of an issue for IPv6 users, as they are very likely to also have an alternative link-local address they can use (whereas in v4 it's quite likely to be the only IP address for the host).
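
For illustration (not part of the gist, function name made up), such a binary check for the common private/loopback/link-local IPv4 ranges could look like this:

// Check whether a dotted-quad IPv4 address falls in a private, loopback
// or link-local range (RFC 1918, 127.0.0.0/8, 169.254.0.0/16).
function isPrivateIPv4(host) {
    "use strict";
    var parts = host.split(".");
    if (parts.length !== 4) { return false; }
    var octets = parts.map(function (p) { return parseInt(p, 10); });
    if (octets.some(function (o) { return isNaN(o) || o < 0 || o > 255; })) { return false; }
    var value = ((octets[0] * 256 + octets[1]) * 256 + octets[2]) * 256 + octets[3];
    var ranges = [
        [0x0A000000, 0x0AFFFFFF], // 10.0.0.0/8
        [0xAC100000, 0xAC1FFFFF], // 172.16.0.0/12
        [0xC0A80000, 0xC0A8FFFF], // 192.168.0.0/16
        [0x7F000000, 0x7FFFFFFF], // 127.0.0.0/8
        [0xA9FE0000, 0xA9FEFFFF]  // 169.254.0.0/16
    ];
    return ranges.some(function (r) { return value >= r[0] && value <= r[1]; });
}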

lorenzhs commented 7 years ago

Browsers can't resolve hostnames from JavaScript; that's why I suggested filtering on the hostname string itself.

You're thinking way too complicated. We don't need to create a 100% solution; it just has to be good enough to push people onto TLS relays. If someone really wants to open an unencrypted relay, it's trivial to patch the code. We shouldn't aim for 100%; it would make the logic way too complex.

GB just passes the entered hostname to the browser. We don't do any validation on it. What happens to 127.0.00.001 is up to the browser, not us.

There's a working regex for IPv4 and IPv6 addresses in my commit above.

jornane commented 7 years ago

If browser vendors reject insecure websockets, that does impact Glowing Bear users, but all Glowing Bear can do is warn users about it. Even when insecure websockets are blocked, there may still be a way around it for users who really want it, for example through about:config. It would be unfortunate if Glowing Bear put up another roadblock for these users. Don't make Glowing Bear do the browser's job.

I propose that we guess based on a simple(r) regex whether the address is local or not, and if it's not local then the user is presented with a dismissable warning. Since the warning can be dismissed, it's not a big issue if it is a false positive. It doesn't impede the user anyway.

lorenzhs commented 7 years ago

Security is not the browser's job, it's everyone's. I'm not proposing to prevent people from connecting to insecure relays at all costs; as I've noted above, it's trivial to remove such a check anyway. What I'm suggesting is pushing people onto secure connections.

There's no excuse for using an insecure connection anymore. You can get a free subdomain and a certificate for it in a couple of minutes. It's not complicated.

ehuggett commented 7 years ago

I think we need to make it at least complicated enough not to reject valid input...

Any comment(s) on the current regex failing to match

My solution was the simplest approach I could think of that would hinder unencrypted connections to hosts outside the local network that are not specified with an IP address. It also makes no attempt to prevent external connections with an IPv4 address, IPv6 address, hostname or FQDN (under "local", "localhost", "example", "invalid", "test") that resolves to a non-local address. This was my best interpretation of the intent of this issue?

The function I wrote could be simplified a lot further, but it was meant to be easy to read/modify. This should be equivalent for most input:

function whitelistCheck(host) {
    "use strict";
    // Pseudo-TLDs that are assumed to stay on the local network.
    const whitelist_tld = ["local", "localhost", "example", "invalid", "test"];
    if (host.indexOf(".") > -1) {
        const parts = host.split(".");
        // Every dot-separated part parses (via parseInt) as a number <= 255:
        // treat the input as an IPv4 address and allow it.
        if (parts.every(function (v) {return parseInt(v, 10) <= 255;})) {return true;}
        // Otherwise it is a hostname/FQDN: reject it unless its last label
        // (the TLD) is on the whitelist.
        if (whitelist_tld.indexOf(parts[parts.length - 1]) === -1) {return false;}
    }
    // Single-label hosts (e.g. "localhost") and whitelisted TLDs are allowed.
    return true;
}
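
A few example calls, based on the behaviour above:

whitelistCheck("localhost");          // true  (no dot)
whitelistCheck("192.168.1.10");       // true  (looks like an IPv4 address)
whitelistCheck("weechat.local");      // true  (whitelisted TLD)
whitelistCheck("relay.example.com");  // false (TLD "com" not whitelisted)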

Off-topic but related: should we include ssh (/PuTTY) tunnelling instructions in the README for trying out GB without setting up a secure relay? (Too complicated? I've been using it for too long to know.)
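
For what it's worth, a sketch of what such an instruction might look like (port and hostname are placeholders, assuming the relay only listens on localhost on the server):

ssh -L 9001:localhost:9001 user@relay-server.example

Glowing Bear would then be pointed at host localhost, port 9001, with the ssh tunnel providing the transport encryption across the network.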

lorenzhs commented 7 years ago

If it's a dismissable warning, it doesn't need to be perfect!

The test could be simplified using a regex, something like \.(local(host)?|example|invalid|test)$.
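
A sketch of what that simplification might look like (illustrative only, keeping the no-dot and IPv4 cases from the function above):

function whitelistCheck(host) {
    "use strict";
    // Single-label hosts and dotted-quad IPv4 addresses are allowed;
    // otherwise the last label has to match the whitelist regex.
    if (host.indexOf(".") === -1) { return true; }
    if (/^(\d{1,3}\.){3}\d{1,3}$/.test(host)) { return true; }
    return /\.(local(host)?|example|invalid|test)$/.test(host);
}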