moxie0 / Convergence

An agile, distributed, and secure alternative to the Certificate Authority system.
http://convergence.io

Convergence needs the ability to operate with "internal" SSL sites. #7

Open raylance opened 13 years ago

raylance commented 13 years ago

My servers' Webmin interfaces all use self-signed certificates. How do I get the "I understand the risks / Add Exception" dance to work? It doesn't work right now.


axtl commented 13 years ago

Not sure what you need. Once you switch to Convergence, whether a certificate is self-signed no longer matters. Make sure the error message isn't in fact showing up because Convergence says the page is not secure (check what the notaries reported).

Otherwise, perhaps submit steps to reproduce?

moxie0 commented 13 years ago

@totolici is correct, a certificate's signature should not matter to Convergence. However, the way things are currently implemented, your certificates do need to be correctly named. So if your self-signed certificate is for www.foo.com, the CN or SAN field of your certificate needs to say www.foo.com.

If you have any examples that you could share which reproduce the problem, it'd help us understand what you're running into.

dgcgh commented 13 years ago

I can't tell whether it's the same issue @rlance is seeing, but I get undesirable behavior when accessing private-network servers with self-signed certs, that is, servers not accessible to the notary.

I get:

{servername} uses an invalid security certificate.

The certificate is only valid for Invalid Certificate

(Error code: ssl_error_bad_cert_domain)

and when I go to add an exception and look at the certificate, I see the notary's certificate instead of the self-signed one.

The CN of the self-signed cert matches the hostname.

moxie0 commented 13 years ago

@dgc-wh, correct, you won't be able to access servers that aren't visible to the notaries. Right now the only workaround is to single-click the Convergence icon to toggle it off. Convergence also doesn't support adding certificate exceptions, since Convergence has to locally MITM all outgoing HTTPS traffic in order to manage authenticity.

dgcgh commented 13 years ago

I've managed a workaround by setting up my own local notary, but this isn't ideal because I have to relax the restrictions quite a bit. I have to turn off anonymization (because my notary isn't accessible from other notaries) and I have to make do with "Require only one notary to agree".

It would be nice to be able to specify which notaries should be referenced for which domains, I would love to set up a notary which validated hosts from the context of my workplace (firewall policies and all).

JackOfMostTrades commented 13 years ago

I've created a fork (https://github.com/melknin/Convergence) which includes a "cache manager" tab in the options. You can use this to manually add a trusted certificate to the cache, so if you have caching enabled this effectively allows you to whitelist internal sites.
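The cache-manager approach described above amounts to local certificate pinning. A minimal sketch of the idea, assuming a per-host fingerprint store (the names `PinCache`, `add`, and `is_trusted` are illustrative, not Convergence's actual API):

```python
# Hypothetical sketch of whitelisting an internal site by pinning its
# certificate fingerprint locally, checked before asking any notary.
import hashlib

class PinCache:
    def __init__(self):
        self._pins = {}  # host -> hex SHA-256 of the DER-encoded certificate

    def add(self, host, der_cert_bytes):
        # the user manually trusts this certificate for this host
        self._pins[host] = hashlib.sha256(der_cert_bytes).hexdigest()

    def is_trusted(self, host, der_cert_bytes):
        # trusted only if a pin exists and the presented cert matches it
        pinned = self._pins.get(host)
        return pinned == hashlib.sha256(der_cert_bytes).hexdigest()

cache = PinCache()
cache.add("router.internal", b"fake-der-bytes")
print(cache.is_trusted("router.internal", b"fake-der-bytes"))  # True
print(cache.is_trusted("router.internal", b"other-cert"))      # False
```

Because the pin is keyed by host and exact certificate, a different certificate presented for the same host (e.g. by a MITM) is rejected.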

nwp90 commented 13 years ago

The bad-cert-domain error will happen if you are accessing a local site while relying on the DNS resolver to append your local domain to the hostname, e.g. using "https://myserver/foo" rather than "https://myserver.local.domain/foo". Type the full domain and it will work (assuming the notary can reach the site).

cless commented 13 years ago

The client should probably recognize private IP ranges and handle them as a separate network entirely. If no notaries are available in that IP range it should fall back to plain old CA verification. I'm not sure whether a Firefox addon has access to its own IP/netmask, though.
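Detecting a private (RFC 1918) destination is the easy half of this proposal; a sketch using Python's standard-library `ipaddress` module:

```python
# Sketch: classify a destination IP as private before choosing a
# verification path (notary perspective vs. some local fallback).
import ipaddress

def is_private_destination(ip_string):
    # covers 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, loopback, etc.
    return ipaddress.ip_address(ip_string).is_private

print(is_private_destination("10.1.2.3"))     # True
print(is_private_destination("192.168.0.5"))  # True
print(is_private_destination("8.8.8.8"))      # False
```

The hard half, as the following comments discuss, is deciding what verification to apply once a destination is known to be private.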

moxie0 commented 13 years ago

@melknin, I think the cache manager is great. The timestamps appear incorrectly for me, is that my bug or yours? Also, topic branches for these changes would make integrating them easier.

@cless, that sounds like an interesting possibility, but it could be tricky. Using network perspective on a LAN, where the notaries, the client, the server, and a potential attacker are all on the same LAN, probably won't resist attacks very effectively. The "internal" notary would have to use some other mechanism, possibly PKI or pinning, to validate the destinations.

I'm also not sure that we can just fall back to CA signatures when we see traffic to RFC1918 destinations, because then a local DNS attack could effectively turn off Convergence for the targets in question.

cless commented 13 years ago

@moxie0 You're right. The CA fallback problem can be solved by making it obvious to the user when a domain resolves to a local ip address but that opens a whole other can of worms related to user education. Additionally it means that there will be no authenticity on unfamiliar local networks (hotspots, etc) if we manage to get rid of the CA system entirely.

On the other hand, most private networks that require SSL authenticity are probably companies with an IT department. They could set up their own database of all valid certs on the network, or set up their own CA and use a notary that validates everything correctly without using network perspective. I believe many already do this by distributing their own root CA into the browser. Assuming there are no insanely large private networks, this shouldn't require any extraordinary effort.

I agree that the approach has a lot of problems, though, and I know of no real fix that would make everything work without effort from users or local network admins.

JackOfMostTrades commented 13 years ago

@moxie0, the timestamps appeared incorrectly for me as well. When I was looking at the ctypes documentation earlier, I remember seeing mention of int64 types requiring special handling. Despite trying to follow the documentation, using sqlite3_bind_int64 was only saving the bottom 32 bits. I finally gave up and just used sqlite3_bind_text and cast the timestamp to a string. It looks like that change gets the timestamps to persist correctly. (Commit pushed in my fork).
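The workaround described above (binding the 64-bit timestamp as TEXT so it round-trips intact) can be illustrated with a small standalone example; the schema and values here are made up, and Python's own sqlite3 bindings don't have the js-ctypes truncation bug, so only the storage strategy is shown:

```python
# Rough illustration of the fix: store a millisecond timestamp as TEXT,
# since the int64 bind path (via js-ctypes in the addon) kept only the
# bottom 32 bits of the value.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache (host TEXT, ts TEXT)")

ts = 1316000000000  # milliseconds; larger than 2**31, so truncation corrupts it
conn.execute("INSERT INTO cache VALUES (?, ?)", ("example.com", str(ts)))

(stored,) = conn.execute(
    "SELECT ts FROM cache WHERE host = ?", ("example.com",)
).fetchone()
print(int(stored) == ts)             # True: the full value survives as text
print((ts & 0xFFFFFFFF) == ts)       # False: the bottom 32 bits alone differ
```

Casting back with `int()` on read restores the numeric value, at the cost of losing SQLite's integer comparison semantics on that column.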

nwp90 commented 13 years ago

@cless - no authenticity on unfamiliar networks may actually be the correct result. Can you think of an example of a service on the local network that you would want/should be able to trust in that sort of situation? I can't think of any for which you wouldn't be able to have e.g. prior knowledge of its certificate.

So either you trust the network operator or you don't. If you do, we just need some way to tell convergence. With a suitable warning...

ewanm89 commented 13 years ago

@nwp90 I'd say the internet as a whole is technically an unfamiliar network :P.

As for this situation, I don't think there is any clear solution. A corporation's own CA can probably be trusted to validate against, and that is closer to how the CA system should have been run from the start (though it still has logistical problems). But it doesn't cover self-signed sites like admin config systems. I think @melknin's method, which basically pins the cert locally by hand, is the best solution from a security standpoint, but not from a usability standpoint: I wouldn't want to have to train a corporation's staff to pin the corporation's certs every time I deploy some new system on the network.

nwp90 commented 13 years ago

@ewanm89 yes, but there are notaries on that network that you can choose to trust. Not so likely on a smaller unfamiliar network.

torynet commented 13 years ago

How much of a problem would accessing internal SSL sites on a small unfamiliar network be, though? If the user had the option to override individually in those edge cases, that wouldn't be so bad, would it? That's basically what happens with self-signed sites now.

ewanm89 commented 13 years ago

I think the problem isn't the inherent insecurity, but the lack of user education about when it is valid to add an exception. For example, I teach my family never to add an exception for a self-signed site just because it's easier.

cless commented 13 years ago

@nwp90: I can think of some situations where authenticity is useful on these unfamiliar networks. My ISP has wireless hotspots all over the country and you have to log into them before you can use them. These hotspots are small unfamiliar networks and they probably rely on the CA system currently. Without authenticity on these networks it would be easy to set up rogue hotspots and steal user credentials. I don't think this is an edge case, so it can't simply be ignored. As @ewanm89 said relying on the user to manually add exceptions is not a very good solution either. We all know what happens when you expect the user to get security right.

ghost commented 13 years ago

What if there were varying levels of trust for notaries? In a simple implementation, I could have one (or more) notaries which are used internally to respond for sites on a private network or within a corporation. Additionally, a separate group of notaries could be used for non private sites. While full use of the private notary could be problematic, explicitly whitelisting domains for that notary as opposed to certs would reduce the amount of user education necessary and provide an additional layer of security for the people concerned with their organizations spying on them.

nwp90 commented 13 years ago

@cless - yes, but there is no reason why you should not have prior knowledge of the signing certificate(s) used in that kind of situation - and so should use that rather than trusting either the current system or an arbitrary notary. Example - your ISP could run a notary which dealt only with those certificates, which you could set up your client to use in advance.

@botsdots - I don't think varying levels of trust is a good idea as such (too complex for users), but I would suggest that it be possible to configure the client to use certain notaries as only whitelists or only blacklists, before moving on to standard notaries that function as at present. That way, it's easy to blacklist known bad certs, and it's easy to deal with local networks and the sort of situation @cless described above.

Just to clarify - I'm not suggesting any change to the notary, just to the client. For a notary set up as a whitelist, "no" answers would be ignored but a single "yes" would be sufficient to trust a cert, and for a blacklist, "yes" answers would be ignored but a single "no" would be sufficient to distrust a cert and block access to a site (no user override possible short of removing the notary).
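The client-side policy described above can be sketched as a small aggregation function; all names and the vote representation are illustrative assumptions, not Convergence's actual code:

```python
# Sketch of nwp90's proposal: blacklist notaries veto with one "no",
# whitelist notaries approve with one "yes", and only then do the
# standard notaries' votes get counted against the usual threshold.
def verdict(whitelist_votes, blacklist_votes, standard_votes, threshold):
    # blacklist: a single "no" distrusts the cert, no user override
    if any(v == "no" for v in blacklist_votes):
        return "distrusted"
    # whitelist: a single "yes" suffices; "no" answers are ignored
    if any(v == "yes" for v in whitelist_votes):
        return "trusted"
    # otherwise fall through to the normal agreement threshold
    yes = sum(1 for v in standard_votes if v == "yes")
    return "trusted" if yes >= threshold else "distrusted"

print(verdict([], ["yes", "no"], ["yes", "yes", "yes"], 2))  # distrusted
print(verdict(["no", "yes"], [], [], 2))                     # trusted
print(verdict([], [], ["yes", "yes", "no"], 2))              # trusted
```

Note the ordering matters: the blacklist is consulted first, so a known-bad cert is blocked even if a whitelist notary would have vouched for it.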

ghost commented 13 years ago

@nwp90 - I like that solution. Effectively that's what I was going for, you have different classes of trust (whitelist, blacklist, other), just a more straightforward implementation.

torynet commented 13 years ago

@ewanm89: I totally agree that not having to have users ever add manual exceptions would be ideal. That's a difficult task that I'm not sure anyone has figured out how to do yet.

It's a very worthy challenge that needs some serious thought. That said, you don't want the perfect to become the enemy of the good either. So if you can find a solution that requires roughly as many manual exceptions as the current system but without the CAs, that's definitely a start and might be better than what we have now.

@cless: I definitely hear what you're saying. That's similar in scope to the captive-portal issue (meaning large). It seems to me a very valid issue: if the user can't verify the cert on your ISP's internal site, it would be all too easy to spoof the hotspot, its portal, and its notary. You may trust the network operator, but are you certain you're speaking to the network operator you think you are? Hey, what do you know, it's about authenticity.

@nwp90: That sounds like an excellent start. In the end though, the user has to have an ultimate alternative. Not every network will have implemented the best solution. Right now it's shutting off Convergence but that's far from ideal.

nwp90 commented 13 years ago

Oh, should probably mention that whitelist notaries would probably need to have limited scope - not sure how you could work that though.

ewanm89 commented 13 years ago

The only thing I can think of is a priority table mapping domains to lists of notaries: if a domain is found in the table, the client uses those notaries for the check; otherwise it uses the default notaries. But this is not very friendly for a non-geek user who needs to add a domain to the list.
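The priority-table idea could look like a longest-suffix lookup; a minimal sketch, where every hostname and notary name is made up for illustration:

```python
# Sketch of a domain -> notaries routing table with most-specific-suffix
# matching, falling back to the default notary set.
DEFAULT_NOTARIES = ["notary1.example.org", "notary2.example.org"]
DOMAIN_TABLE = {
    "corp.example.com": ["internal-notary.corp.example.com"],
}

def notaries_for(host):
    parts = host.split(".")
    # try progressively shorter suffixes: a.b.c -> a.b.c, b.c, c
    for i in range(len(parts)):
        suffix = ".".join(parts[i:])
        if suffix in DOMAIN_TABLE:
            return DOMAIN_TABLE[suffix]
    return DEFAULT_NOTARIES

print(notaries_for("wiki.corp.example.com"))  # ['internal-notary.corp.example.com']
print(notaries_for("www.mozilla.org"))        # the default notary list
```

Matching on suffixes rather than exact hostnames means one table entry covers a whole internal domain, which is about the least configuration a non-geek user could get away with.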

Shaparak commented 12 years ago

@moxie0 An example of a website that uses a self-signed certificate with a wrong name: edu.sharif.edu. Without Convergence, adding an exception is enough; with Convergence I couldn't connect to it at all. This is of course a problem with the website, but I wonder how many sites like this are out there.

ewanm89 commented 12 years ago

@Shaparak, for Convergence to work we need to generate certificates that Firefox will never accept (for the true failure case), and one part of making sure a certificate is invalid is giving it an invalid Common Name field.

Now, I have a couple of routers with self-signed certificates whose Common Name fields are invalid (they are generated before the router is configured, and the router can get its upstream IP from DHCP, so an IP-based cert wouldn't work either). As such, I can't use these with Convergence.

In most cases, though, if a server administrator can't manage to create correctly named SSL certificates they should probably be fired, but I'm not their manager.