rubygems-trust / rubygems.org

Volunteer-run (or commercially funded) CA for RubyGems #8

Open tarcieri opened 11 years ago

tarcieri commented 11 years ago

Proposal: organize a group of volunteers who will review requests for certificates for individual projects. The due diligence performed when reviewing each certificate application could resemble that of the Freenode Groups System.

Before I even start, the Freenode Groups System has had some troubles maintaining enough volunteers to continue operating. They provide a bit of a post-mortem about what went wrong, and do call into question whether such a system is sustainable using unpaid volunteers alone.

Perhaps a more realistic model would be to seek funding for this system in some form or another, e.g. from large companies (such as the one I work for) who depend on RubyGems for critical infrastructure. With some money, "volunteers" could be paid to work part-time or full-time to review projects and issue certificates.

Problems Solved

Here are the problems having a real CA system would solve which are not addressed by the other proposals. First, user experience: without a trust root, an SSH-style trust-on-first-use policy turns installing something like Rails into a wall of prompts:

WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing rake (10.0.3) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing i18n (0.6.1) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing multi_json (1.5.0) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing activesupport (3.2.11) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing builder (3.0.4) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing activemodel (3.2.11) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing erubis (2.7.0) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing journey (1.0.4) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing rack (1.4.4) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing rack-cache (1.2) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing rack-test (0.6.2) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing hike (1.2.1) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing tilt (1.3.3) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing sprockets (2.2.2) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing actionpack (3.2.11) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing mime-types (1.19) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing polyglot (0.3.3) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing treetop (1.4.12) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing mail (2.4.4) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing actionmailer (3.2.11) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing arel (3.0.2) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing tzinfo (0.3.35) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing activerecord (3.2.11) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing activeresource (3.2.11) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing bundler (1.2.3) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing coffee-script-source (1.4.0) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing execjs (1.4.0) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing coffee-script (2.2.0) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing rack-ssl (1.3.3) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing json (1.7.6) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing rdoc (3.12) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing thor (0.17.0) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing railties (3.2.11) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing coffee-rails (3.2.2) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing jquery-rails (2.2.0) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing rails (3.2.11) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing sass (3.2.5) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing sass-rails (3.2.6) 
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing sqlite3 (1.3.7) with native extensions
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing uglifier (1.3.0)

I'd call this... obtrusive. I think any signature system should provide a mostly transparent user experience, and that is not what we're seeing here. Having a CA review each gem ahead of time centralizes this work at the CA level.

As we can see, the SSH-style model doesn't really "scale" well to a system like RubyGems. Where SSH prompts you at the time you connect to a single server, and perhaps you might actually do some due diligence verifying the key thumbprint (doubt it), RubyGems will present you with a wall of questions about whether you trust the certificate for each and every gem you ever install. In my opinion, this is a terrible user experience, and one that will offer no real security, as users are unlikely to ascertain the validity of each of these key fingerprints.

All an attacker needs to do to bypass the "security" provided by this system is trick you, just once, into accepting a certificate used to sign a malicious gem.

Some have argued that a way to improve this situation is to have trusted companies (e.g. 37Signals) publish their keyrings somewhere, and end users can pick and choose which of these public keyrings they want to import. This has two problems: configuration over convention (users are expected to figure out where to get keyrings on their own, as opposed to shipping a root cert with RubyGems), and it punts on a real trust root in lieu of an ad hoc trust system. This is problematic for a couple of reasons.

Without a real trust root, an attacker could compromise the servers where 37Signals stores their keyring and add a malicious cert. Anyone who downloads 37Signals' keyring after that would implicitly trust the malicious cert. Worse, since the system is ad hoc, there'd be no revocation model for this malicious cert, which leads us to our next problem...

CAs are run by humans. Humans make mistakes. Just as it's inevitable that RubyGems.org will get hacked again, a real CA is likely to approve a malicious certificate at some point. However, this is why pencils have erasers...

A real, trusted CA can publish a certificate revocation list, which should be updated prior to authenticating any gems (e.g. on gem install, or prior to bundling). Once the CRL has been updated, RubyGems can automatically validate whether any gems have been installed using tainted certificates.
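
A hedged sketch of what that check could look like using nothing but Ruby's OpenSSL bindings; the file names, and the idea of matching revoked serial numbers against a gem's signing certificate, are assumptions for illustration rather than an existing RubyGems feature:

  require 'openssl'

  # Assumed locations for the shipped root cert and a freshly downloaded CRL.
  root_cert = OpenSSL::X509::Certificate.new(File.read('rubygems-root.pem'))
  crl       = OpenSSL::X509::CRL.new(File.read('rubygems.crl'))

  # Refuse to use a CRL that was not signed by the root certificate.
  abort 'CRL signature invalid, refusing to trust it' unless crl.verify(root_cert.public_key)
  abort 'CRL is stale' if crl.next_update && crl.next_update < Time.now

  revoked_serials = crl.revoked.map(&:serial)

  # The certificate that signed the gem being installed.
  gem_cert = OpenSSL::X509::Certificate.new(File.read('gem-public_cert.pem'))

  if revoked_serials.include?(gem_cert.serial)
    abort "certificate for #{gem_cert.subject} has been revoked, aborting install"
  end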

RubyGems already has a built-in certificate system. Unfortunately this system lacks a trust root. A real CA could provide one. Since the RubyGems software is pure Ruby and depends on only the OpenSSL library for cryptography, leveraging RubyGems to do the certificate checking provides the least obtrusive, portable solution for validating gems.
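
For reference, the pieces RubyGems already ships look roughly like this; the gemspec fields and the gem cert / -P flags below are existing functionality, and the only assumption is that cert_chain would eventually contain a CA-issued chain instead of a lone self-signed cert:

  # One-time setup with existing RubyGems tooling:
  #   gem cert --build you@example.com
  # which produces gem-private_key.pem and gem-public_cert.pem.

  Gem::Specification.new do |spec|
    spec.name    = 'example'
    spec.version = '1.0.0'
    spec.summary = 'Illustration of the existing RubyGems signing hooks'
    spec.authors = ['You']
    spec.files   = Dir['lib/**/*.rb']

    # Existing signing hooks; with a CA, the chain would lead back to the root cert.
    spec.signing_key = 'gem-private_key.pem'
    spec.cert_chain  = ['gem-public_cert.pem']
  end

  # Users opt in to verification at install time, e.g.:
  #   gem install example -P HighSecurity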

Operation

I don't want to dive into the specifics of the operation or how volunteers coordinate. It could be as simple as a form you fill out and submit to an email list of volunteers. It could be a web site where people fill out a form. These implementation details are irrelevant.

The basic operation of the system is as follows:

  1. Developers who wish to publish a gem fill out a form: Developers would make a certificate signing request which includes information about themselves, about the project (e.g. title, description, etc), where its source code is hosted, and so on. Developers can also include aspects of their online identity, e.g. their Twitter, Facebook, blog, etc.
  2. Volunteers review the certificate signing requests: Volunteers then look over the certificate signing requests and validate them. This could include visiting the provided links to the project and the people involved and concluding "seems legit"; integrating with a system like Github OAuth and ensuring people are who they claim they are, at least according to Github; and/or asking people to commit a large random number to a specific file in the project to prove they control it.
  3. Certificates are issued to those who are deemed legit: Ideally each project would be double-checked by at least two volunteers. Once the requisite number of volunteers have signed off on a project, it can be added to a list of projects to be signed by the root certificate (a rough sketch of the signing step follows this list). Ideally this process happens completely offline and is therefore immune to active attackers. There are many ways to keep the private key of the CA secure, such as using systems like Shamir's Secret Sharing and requiring a group of people to meet in person in order to accomplish any certificate signing. Or the private key can be entrusted to a single individual. Regardless, I would recommend keeping it offline lest it be compromised.
  4. Mistakenly issued certificates are revoked: Let's say someone tricks our volunteers by submitting a project which looks legit, then immediately turns around and releases malicious code. In this case, the certificate should be added to the CA's revocation list. This revocation list should be checked each time users update/install gems, so malicious certificates are useless once revoked. The revocation list should also be signed with the root certificate, so that an attacker cannot DoS the system by maliciously altering the revocation list to include legitimate certificates.
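
As a rough illustration of step 3, the actual signing operation the (offline) CA would perform could be as small as the following sketch, using only Ruby's OpenSSL bindings; all file names, the serial-number scheme, and the two-year validity are assumptions, not a prescription for how the real CA would operate:

  require 'openssl'

  csr       = OpenSSL::X509::Request.new(File.read('approved.csr'))
  root_key  = OpenSSL::PKey::RSA.new(File.read('root-key.pem'), ENV['ROOT_KEY_PASSPHRASE'])
  root_cert = OpenSSL::X509::Certificate.new(File.read('root-cert.pem'))

  abort 'CSR signature does not match its own public key' unless csr.verify(csr.public_key)

  cert = OpenSSL::X509::Certificate.new
  cert.version    = 2                        # X.509v3
  cert.serial     = OpenSSL::BN.rand(64)     # unique serial so revocation can target it
  cert.subject    = csr.subject
  cert.issuer     = root_cert.subject
  cert.public_key = csr.public_key
  cert.not_before = Time.now
  cert.not_after  = Time.now + 2 * 365 * 24 * 3600

  cert.sign(root_key, OpenSSL::Digest::SHA256.new)
  File.write('issued-cert.pem', cert.to_pem)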

Attack surface

  1. Root key compromise: if the private key of the root signing certificate is compromised, all is lost. The entire system must be re-keyed from scratch. Extreme care must be taken with the root key. As mentioned earlier, it should probably be kept offline, somewhere safe (e.g. literally in a safe), and possibly shared among multiple parties using a secret sharing scheme (a toy sketch of such a scheme follows this list).
  2. Failure to detect malicious parties: a malicious person may still make it through the certificate signing process, via social engineering or other human failures of the process itself. In this case, a malicious person is issued a certificate which they then use to sign malicious gems. The countermeasure this system provides for such an occurrence is the revocation model. That said, the main threat is as follows...
  3. Legitimate certificate holders can still publish malicious gems: this system does not provide any review process for the gems which are published, only for the people publishing them. Once someone has obtained a certificate from the CA, they can use it to publish a malicious gem. The only defense the system has is to revoke their certificate once it's been determined that a particular certificate holder has used it maliciously. In the meantime, the system is still vulnerable.
  4. Compromised revocation list: someone who can DoS the revocation list prevents system users from learning about malicious certificates. The best course of action here is debatable: you can refuse to let the user install gems, effectively DoSing RubyGems, or you can allow them to install gems knowing the revocation list couldn't be checked, in which case they may be installing malicious gems. At the very least, it should be possible to prevent malicious manipulation of the revocation list (short of a root key compromise) by signing the CRL with the root certificate.
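
To make the secret-sharing idea in item 1 concrete, here is a toy k-of-n Shamir split over a prime field in plain Ruby (requires Ruby >= 2.4 for Integer#pow with a modulus). It only sketches the math; a real deployment would use a vetted implementation, a secure random source, and a field sized to the actual key material:

  PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

  def mod_inverse(a, p)
    a.pow(p - 2, p)  # Fermat's little theorem
  end

  # Split an integer secret (< PRIME) into n shares; any k of them recover it.
  def split(secret, k, n)
    coeffs = [secret] + Array.new(k - 1) { rand(1...PRIME) }  # use SecureRandom for real keys
    (1..n).map do |x|
      [x, coeffs.each_with_index.sum { |c, i| c * x**i } % PRIME]
    end
  end

  # Lagrange interpolation at x = 0 recovers the secret from any k shares.
  def reconstruct(shares)
    shares.sum do |xj, yj|
      basis = shares.reject { |xm, _| xm == xj }.reduce(1) do |acc, (xm, _)|
        acc * xm % PRIME * mod_inverse((xm - xj) % PRIME, PRIME) % PRIME
      end
      yj * basis
    end % PRIME
  end

  shares = split(42, 3, 5)            # 3-of-5 sharing of the "secret" 42
  puts reconstruct(shares.sample(3))  # => 42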

Practical feasibility

This is a system that I don't think a single person could operate. It would require many people, dedicating a nontrivial amount of time, to review each certificate request, do their due diligence on verifying each request, and either granting or denying certificates.

I personally have doubts that there are enough people with long-term interest in RubyGems security who would be interested in reviewing certificate requests for the foreseeable future.

If the network of volunteers were to collapse, the result would be a large backlog of projects to review and no one to review them. Unless there are enough people to continue to review these requests, the system will break down.

There are approximately 50,000 gems to date. That's a lot of certificate requests, and the system wouldn't really be useful until every gem has been signed.

Is this system practical? It would be if there were a large number of volunteers, or if funding could be secured to pay people to work on this full time. Short of that, I personally have many doubts that such a system is sustainable.

havenwood commented 11 years ago

I volunteer to help vet certificate requests.

Wish I could volunteer assistance with CA, but alas I have no expertise.

cheald commented 11 years ago

I guess the question is, what problem do we want to solve? I'm under the impression that the primary problem to solve is to ensure gem authenticity (like the Google Play store; you know who the author is), not to guarantee against malicious gems (like the Apple App store; you know who the author is, and the project has been human-vetted). As long as gems are guaranteed to be authentic, then they are revocable through a CRL.

I hesitate to +1 a measure like this, because in the event that the volunteer base breaks down, it brings the whole system to a screeching halt. Adding a human auditing component also increases friction in the system, which would likely cause many people to take whatever path lets them avoid signing their gems, since it would get them to publication faster.

You can still get end-to-end security and no SSH-style wall-of-accepts with an automated cert signing system like in my proposal, but the key difference is that Joe Badguy can more easily obtain a cert and use it to publish his gems. That is, the presence of a signed gem on Rubygems.org does not verify that it's a good gem, just that it was in fact published by Joe Badguy. This does make it easily revocable, though. There is no guarantee that Joe can't get a cert in a human-verified system either, and I would argue that the sheer volume of esoteric gems already in circulation practically guarantees that this would happen due to human error, so I'm not sure there's a particularly huge gain there.

The other major concern is that the private key would at some point need to live on an internet-connected machine, which, as you pointed out, is a potential problem because if the machine holding the private key is pwned, then you have to rekey from the top down. This is a doomsday scenario, but it's at least straightforward. I think that a dead-drop system with an extremely small attack surface is an acceptable compromise, personally, but I'll happily accept that it isn't acceptable to others. If you wanted a human component, you could have volunteers who shuttle a thumb drive between an internet-connected machine and a non-connected signing machine on a regular basis, but this again introduces friction and the obvious volunteer point-of-failure.

mattconnolly commented 11 years ago

I like this idea, in the main.

However, I wouldn't expect my signing certificate to be issued by the root CA directly. In order to keep the root private key safe, the root could issue several intermediate issuing certificates (which the root could revoke), and those would issue the actual signing certificates. Each issuing certificate holder would be responsible for maintaining the revocation list for the certificates they issue.
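
A sketch of how a client could verify such a chain (root, then issuing CA, then developer cert) with Ruby's OpenSSL store; the file names are placeholders:

  require 'openssl'

  root_cert    = OpenSSL::X509::Certificate.new(File.read('rubygems-root.pem'))
  issuing_cert = OpenSSL::X509::Certificate.new(File.read('issuing-ca.pem'))
  dev_cert     = OpenSSL::X509::Certificate.new(File.read('developer.pem'))

  store = OpenSSL::X509::Store.new
  store.add_cert(root_cert)  # the only certificate trusted a priori

  # Verify the developer cert, supplying the issuing cert as an untrusted intermediate.
  if store.verify(dev_cert, [issuing_cert])
    puts "chain OK: #{dev_cert.subject} chains back to #{root_cert.subject}"
  else
    abort "chain rejected: #{store.error_string}"
  end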

What about multiple CAs? This would mitigate the "all is lost" problem if a private key were compromised. Companies could also run their own CA and have gems signed by the developer at publishing time with multiple certificates, although this is more work than managing a single CA. A gem would only be installed on a client system if its signature was valid for all signing certificates used. The user would need to accept trust for the additional root CAs.

As far as DoS goes, the revocation lists could be mirrored, and of course only downloaded over HTTPS.

yorickpeterse commented 11 years ago

Regardless of whether there's going to be one CA or multiple ones (effectively creating a web of trust) it's going to be a lot of work in both cases. While I love Ruby and wouldn't hesitate to sign up for a group such as this I highly doubt any of us can run this unless we have at least 20 or so people that commit to the project for at least a day a week.

Thinking about it, a distributed approach (where you have multiple CAs) would probably decrease the amount of work required from the "core" group. Instead of a group of 20-30 people having to go through thousands and thousands of Gems, they now only have to bother with sub-groups (37signals, Github, etc). In turn this means there's less control over who gets signed and who doesn't. The root group won't know about something being signed 5 layers down the chain until something goes wrong.

In both cases I think it's best if we create a very small proof of concept before we decide what route to take. We can sit here and discuss all the various aspects for weeks but I think we'll get a much better picture if we actually write some code. Start with one CA, 3 developers (people that write Gems), a few users (people that install Gems) and one attacker. See how things work out when an attacker gains access to a developer's credentials (or the root credentials), etc.

If everybody is OK with it I'd like to propose the date for this to be February 9th (Saturday) or the 10th. This gives people some time to think and prepare before we start playing with "My Little Certificate Authority".

tarcieri commented 11 years ago

@YorickPeterse:

While I love Ruby and wouldn't hesitate to sign up for a group such as this I highly doubt any of us can run this unless we have at least 20 or so people that commit to the project for at least a day a week.

+1. It would require a large group of volunteers or commercial funding to pay people to work on this part-time/full-time.

I think a lot of the review process can be automated by asking people to authenticate against multiple factors (Github, RubyGems, Twitter, etc) which figure into the overall identity verification process. However, I also worry that too much automation makes the system more susceptible to attack.

Finding the happy medium between quick approval and due diligence on authentication will be hard.

tarcieri commented 11 years ago

@cheald:

I'm under the impression that the primary problem to solve is to ensure gem authenticity (like the Google Play store; you know who the author is), not to guarantee against malicious gems

This is exactly what this system would do. Someone, once approved by this system, could use their certificate to publish a malicious gem, so this system would not prevent people who are legitimate certificate holders from publishing malicious gems. Defense against malicious gems would come in the form of revoking certificates for malicious gem publishers retroactively.

So yes, the goal of this system would be end-to-end gem authenticity, even in the event of (another) RubyGems.org compromise. It can not, in and of itself, prevent legitimate certificate holders from publishing malicious gems.

Perhaps I should add that to the attack surface section. (Edit: added #3 under Attack Surface covering malicious gems)

jstr commented 11 years ago

I think we should leave the vetting of gem publishers out of this proposal (i.e. part 2 of the Operation section).

  1. This will be very labour intensive
  2. It will be difficult to implement. How will we vet new publishers? If someone doesn't have an established public developer persona, will they be able to publish? If not, why not, and who are we to judge?
  3. The inevitable false-positives will lead to members of the community becoming disgruntled
  4. Malicious gem publishers will easily be able to fool the system by creating developer profiles that look legit
  5. The community will probably reject being vetted by an arbitrary group of people before being able to distribute source code through a community system like RubyGems.org

I also think it's a bit of a red herring, because new gems from new publishers are unlikely to present much of a security hazard at any scale. It's not likely they'll get much of an install base without their code being viewed by others in the community.

In saying that, I'm all for a certificate authority based approach. But I think it would be in our best interests to limit the scope to being able to prove where a gem came from (its author) and that it hasn't been altered.

nyarly commented 11 years ago

One point of confusion: are certificates issued per gem, or per release?

I would expect that even with a fully-staffed CA corps, we'd see turnaround times on a signature of ~24 hours. I remember how long it took to start a new Rubyforge project, and they weren't doing much beyond an anti-spam check (that I know of). Also, witness the ongoing frustration with the Apple App store, which seems to be doing a similar amount of review.

It seems like the promise being made by the proposed CA is that only benign gems get signed. If that's the assertion being made, then the only way to do that is to re-sign every new version. And that would mean ~24 hours to fix typos...

I can see addressing this using a chain of trust, i.e. my organization erects a local CA that defers back to a higher level and so on to the Rubygems root, so that e.g. we can sign our local gems for trust within the organization until they're accepted at the root. (I say chain because a loop would admit a sort of human-mediated sybil attack: "Gee, many of our child CAs trust this gem, it must be okay")

pietro commented 11 years ago

http://www.cacert.org is a volunteer-run distributed CA. We could either use it or a similar infrastructure. All their code is open source. In order to be a CAcert assurer you need to reach 100 assurance points and pass the Assurer Challenge. We could replace the assurance-points requirement with being known/trusted by the rubygems.org volunteers.

havenwood commented 11 years ago

@nyarly Certificates issued per-release would be a lot more work than per-gem. Even per-gem seems extremely taxing, unless there is an actual funded staff of people working on it. Would be a challenge just to vet certificates per-author on a volunteer basis.

Typically it seems that each author has many gems and each gem has many releases. I don't know the averages on gems-per-author but I'm guessing well over 1.0?

nyarly commented 11 years ago

@havenwood, that's my primary problem with this proposal (after the problem of protecting the root key): even a fully-staffed team would be hard pressed to do the work required. And if there's a backlog, the whole thing breaks down.

IMO a WoT approach (while it has other issues) can emulate a CA (the simple approach is: new deployers trust a centralized key) with the fallback of the rest of the web if the CA falls behind or gets compromised.

yfeldblum commented 11 years ago

How do you prevent literal denial-of-service by the vetting crew? Suppose you're writing a gem that competes with one of their favorites or with one of their own gems, and they hold a grudge. Or suppose you're not well-liked among the vetting crew.

tarcieri commented 11 years ago

@yfeldblum I can't say that won't happen, but it would be very silly for a CA to deny certs based on personal grudges

tarcieri commented 11 years ago

@nyarly no, a WoT cannot "emulate" a central trust authority. Without a central trust authority no keys in the WoT are held in higher regard than others, so it's possible to game the system via compromising keyrings or via Sybil attacks

nyarly commented 11 years ago

@tarcieri Emulation: by convention, all participants sign the root key when they join. The root key signs no other keys but does verify gems.

Conversely, the only thing that makes the root key the root key is the general convention that it's trusted.

The distinction being that in a CA system, no one can gainsay the root, whereas in a WoT that can happen very easily. The benefits of one or the other are debatable - e.g. sybil vs. root key compromise.

tarcieri commented 11 years ago

@nyarly a CA provides a real trust model. We trust the CA by convention, and all authority is centralized there.

A WoT is confused: there's no clear trust relationship between participants, and no real way to authenticate keyrings, since you're using keyrings to authenticate other keyrings. What makes an actual 37Signals keyring different from a Sybil 37Signals keyring, especially when there can be other Sybil keyrings that claim the Sybil 37Signals keyring is authentic?

Let's be clear: WoT is punting on a real trust model, and expects to replace it with ad hoc relationships that can easily be gamed. Without a clear trust root there is confusion about whom to trust, and IMO confusion creates insecurity.

matt-glover commented 11 years ago

Let's be clear: WoT is punting on a real trust model, and expects to replace it with ad hoc relationships that can easily be gamed.

To be fair, it is a trust model driven by end-user choice. The end user decides who is in their core trust group initially. If they make poor choices then they will end up trusting bad actors. In the CA model the end user elects to trust a CA. The CA they select may be better at managing whom to trust than the individual is when working in tandem with others they immediately trust.

Certainly in a WoT case where a user delegates trust poorly things can quickly devolve into ad-hoc relationships. That does not mean it is punting on a real trust model, it means it is difficult to safely grow a web of trust.

N.B. This assumes we are using a layman's definition of trust and not using a technical definition of "real trust model" that has not been explicitly defined in this thread, such as computational trust.

pietro commented 11 years ago

Doing a cert per developer could be mostly automated, as it would only verify that the developer owns the email used to create the rubygems.org account. We can keep the cert validity period short, 4 or 6 months. We can add more verification steps (ID, etc.) as the number of volunteers and donations to rubygems.org allow.
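
A sketch of what that mostly automated path could look like once the confirmation link has been clicked, mirroring the root-signing sketch earlier in the thread but with the verified account email as the subject and a deliberately short lifetime; the method name and arguments are hypothetical:

  require 'openssl'

  SIX_MONTHS = 6 * 30 * 24 * 3600

  def issue_developer_cert(csr_pem, verified_email, ca_key, ca_cert)
    csr = OpenSSL::X509::Request.new(csr_pem)
    raise 'CSR self-signature invalid' unless csr.verify(csr.public_key)

    cert = OpenSSL::X509::Certificate.new
    cert.version    = 2
    cert.serial     = OpenSSL::BN.rand(64)
    cert.subject    = OpenSSL::X509::Name.new([['CN', verified_email]])
    cert.issuer     = ca_cert.subject
    cert.public_key = csr.public_key
    cert.not_before = Time.now
    cert.not_after  = Time.now + SIX_MONTHS  # short lifetime limits the damage window
    cert.sign(ca_key, OpenSSL::Digest::SHA256.new)
    cert
  end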

tarcieri commented 11 years ago

@matt-glover when I say "real trust model" I quite simply mean it has an authority, a neck to wring, and someone specifically tasked with doing some due diligence to ensure the security of the system. This is quite different from a pseudo-authority model where we might try to cross-correlate keys among multiple published keyrings, never mind the fact that if another vulnerability like the recent YAML one were to happen, someone could simply hack into the most popular pseudo-authorities who are serving keyrings and update them with a malicious one.

With a central authority, the cert could ship with rubygems and subsequent rubygems upgrades could be validated with it. Anyone curious if their copy of the cert was compromised could check with multiple people who have rubygems installed and ensure they have the same key fingerprint.
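
Computing the fingerprint to compare out-of-band is a one-liner; the path below is a placeholder for wherever RubyGems would ship the CA cert:

  require 'openssl'

  cert = OpenSSL::X509::Certificate.new(File.read('rubygems-ca.pem'))
  puts OpenSSL::Digest::SHA256.hexdigest(cert.to_der)  # compare this value with other installs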

All that said, if you want a technical term to be used in lieu of "real trust model", how about sybilproof? Attackers who can compromise public keyring servers can concoct as many fake identities as they want and they will all cross-correlate. The web of trust will appear to be intact when it is in fact compromised. This is because there's no one person from whom you can divine any truth.

Compare to a CA, where to accomplish the same thing you would have to compromise the CA's private key. Unlike public web servers (run by people who probably don't care much about security), the CA's private key should ideally be kept offline (like, in a safe) or split among multiple parties who must come together to do signing.

@pietro incrementally expanding the verification process might be a more practical approach than starting with a more comprehensive one right away, but it carries with it the problem that there will be poorly-vetted entities in the system already, so the actual value of doing a more diligent verification later is decreased. The real value a CA like this brings to the table is the scarcity of certificates and the increased difficulty of tricking the CA into issuing ones to malicious parties (as opposed to a WoT, where an attacker can generate unbounded malicious sybils)

This would also necessitate putting the private key online, making it susceptible to active attackers (as opposed to passive ones who might try to social engineer the CA). In large part this defeats the purpose of the type of system I'm proposing.

That said, I think the type of system you're describing might be covered in some existing proposals? If not, I'd definitely suggest making that proposal as a more practical (albeit less secure) alternative to mine.

Geal commented 11 years ago

Let's dive a bit into the operational part, with certificate issuance and CRL publication. Keeping the private key offline is the best idea, but it can be quite impractical for day-to-day operations. Shamir's secret sharing is nice, but not really usable for distributed teams, unless you provide people with a common server where they can enter their part of the key. And then the whole scheme is only as secure as that server. Last but not least, keeping the private key in an HSM or a hardened VM could be nice to limit access to the signing system, but there are possible attacks there too.

The WoT model is not incompatible with keeping private keys offline. If you have a set of organizations (ha, notaries again) publishing a signed list of gems, I don't see how we could be in the case of

public web servers (run by people who probably don't care much about security)

Also, as @nyarly said in #10, a WoT doesn't necessarily mean trust in the PGP sense. There is no possible sybil attack if the set of people/organizations you delegate your trust to is very limited.

The biggest issue of a CA is that it introduces a single point of failure. If the CA fails to do its work (for various reasons: lost key, too many CSRs, volunteer exhaustion, malicious admins, compromised admins, distributions replacing the CA with their own, etc), there's no way to recover.

That said, I largely prefer this proposal over #3, because it uses peopleware instead of an automated system. It can handle and fix errors. That means the possibility of revoking and reissuing a certificate in the case of a compromise or a lost private key, or of transferring the ownership of a project to a new developer if needed. How would you handle developers wanting to revoke a signature they made on a gem (in the case of a mistake, CI compromise, etc.)? There should be an aggregation of signature revocations alongside the CRL.

yorickpeterse commented 11 years ago

Having read through the comments, one thing that really scares me is that people seem to put too much trust in a single authority to ensure the safety of Gems and their signatures. Not only because putting all trust in a single organization is a recipe for disaster, but also because it's a single point of failure.

If the CA gets hacked everything can be compromised. One might say "Keep the private key safe!" but we should know better than to expect it to actually be kept safe. People make mistakes and that's fine; however, mistakes regarding private keys will wreak havoc on the system. Simply put, I do not expect anyone (including myself) to properly keep it safe.

Another downside of the whole private key management issue is that it puts the burden on a single person. Since you need the private key to sign things, it would mean that ultimately this one person is responsible for it. This would be a full-time job (a very boring one too) and I don't see anybody doing that any time soon.

Also looking at the various comments I think we're doing a great job at discussing things, though at the same time I feel we're focusing too much on the exact steps required in order to run a CA, keep everything secure and so on before even having tried to see if our solutions work out code wise.

What I personally suggest is to ditch the whole idea of a CA, at least as a core requirement for somebody's Gem to be accepted. Instead, developers sign their own certificates but distribute them via RubyGems so that people can still very easily verify them. Yes, self-signed certificates are "less" secure, but it saves a group of volunteers a lot of work.

Another thing that people seem to misunderstand (maybe that's just the way I see it though) is that the primary aim of the feature we're discussing is to validate the state of a Gem (whether or not it has been tampered with) instead of validating whether the author is really who he/she says he/she is.

For those reasons I'd propose that we focus on improving the current signing system to allow the following instead of spending too much time messing around with setting up CAs:

Once these issues have been tackled we can always mess around with a CA. If these will be around I'd prefer to see them more like SSL CAs: multiple organizations that can sign developer certificates. This doesn't put all the pressure on a single organization, nor does it introduce a single point of failure since in the end the end developers are still responsible for things.

Note: in case my message reads a bit confusing just ask. I've been trying to write this comment for the day now but kept getting interrupted and as a result may have lost track of things along the line.

yorickpeterse commented 11 years ago

To clarify, the image attached more or less shows what I have in mind. The bold text indicates a CA, the arrows indicate who signed whose certificates. [1]

Basically it would be a Web of Trust based system with maybe a few CAs that can sign their developer's certificates. The latter would make it easier to install Gems from these authors as they'll be considered "trusted developers". The certificates for these people could be installed/updated by running something like gem install rubygems-keyring.

[image: x509_web_of_trust diagram]

[1]: I'm not sure if this is possible with X509, feel free to correct me on this if this is the case.

Geal commented 11 years ago

It is possible with X509, but a bit hackish :wink:

yorickpeterse commented 11 years ago

Is there another way to more or less emulate this in a way that's less hackish? I'm personally very, very uncomfortable with a single CA being responsible for everything. I'd rather see RubyGems support multiple ways of doing it (e.g. using before_install hooks of some sort) than a single authority having total control over who gets to push Gems and who doesn't.

yfeldblum commented 11 years ago

+1 @YorickPeterse.

Are the rubygems-trust volunteers going to have stricter practices than DigiNotar? Are they going to be better trained, and trained more often?

A PKI run by volunteers removes one class of problems but adds another class of problems.

PKI may work very well for an organization when done within that organization. E.g., setting up openvpn with the server trusting 10,000 client certificates by their common signer. But it has problems when it is imposed as a centralized security solution onto an inherently distributed network.

Just looking at the problems with the CA system as it relates to SSL websites on the Internet (the multiple recent attacks on less-reputable CAs, the likely collusion between government-linked CAs and repressive regimes across the world, and the security researchers trying to find solutions to the CA problem, such as @moxie0's http://convergence.io and Google's cert pinning), we can see that there is something wrong with putting all our trust eggs into a single CA basket.

Note that I am not suggesting a specific alternative here, such as PGP's web of trust. I am suggesting that the solution with the most support here, the single volunteer-run CA, is problematic, and that we should be looking at the full range of alternative solutions.

nyarly commented 11 years ago

@YorickPeterse - that's basically what I had in mind when I said "a WoT emulating a CA". I'm likewise uncomfortable with the CA solution.

I also want to bring up a flaw in "we distribute the CA cert with Rubygems" - OS distros include ruby and rubygems. So the Rubygems team isn't the sole source for the CA cert. It's not insurmountable, but any solution that distributes any certs needs a way to verify the certs automatically. I don't think you can on the one hand say "everyday users will be fooled by a sybil cluster" and also "everyday users will know how to verify the CA certs in their rubygems." Because they'll turn to the person next to them (with the same OS) and compare keys, maybe?

nyarly commented 11 years ago

Oh, last thing: regarding a Rubygems CA and a "neck to wring". If Verisign lost control of their root key, we imagine that maybe we could sue them. If @tarcieri loses control of the root key, we should... come find him and punch him? I mean really, how are you going to back up that responsibility?

Conversely, the volunteers in this CA are volunteering for what kind of liability in the case of a root compromise? Facepunchings? Lawsuits? Hands up who wants to risk destitution for the benefit of the rest of the Ruby crowd.

tarcieri commented 11 years ago

In a purely volunteer setup, I would expect them to shrug off all legal responsibility. It's hard to talk about what would happen in the event of a root key disclosure (from a purely technical perspective, you'd rekey and start over from scratch) since that's sort of the nuclear holocaust of running a CA.

I vote for facepunches ;)

nyarly commented 11 years ago

That's kind of the root (no pun intended) of my problem with a CA based system: we all have to buy into a system where the failure mode is a nuclear holocaust. WoT failures affect everyone who was tricked by the attacker - granted the implementation should try to make that sort of trickery difficult, but the person harmed by a bad security decision is the person who makes it, whereas in a centralized system, those entities are very different.

Proposing a CA system where the acting authority bears no real liability (in this case "a proxy for harm") makes that problem much worse, in my opinion.

yfeldblum commented 11 years ago

Facepunching someone for dropping the ball in his volunteer-CA non-required duties-in-name-only may not work.

So I wouldn't bank on facepunching as the solution.

matt-glover commented 11 years ago

A scaled-down version would be more practical to maintain on the CA side. Build a limited fully signed repo in addition to the public repo that holds everything else. Get a group that does this verification just for the X most popular libraries by total downloads or some other usage metric (probably including maintainer willingness to put up with the CA process).

With some fine-grained configuration options in the rubygems client you could then require verification for gems available in the fully signed repo.

Not really sure if the (still large) effort is worth it for the limited improvement in security though. Probably depends a lot on how many of the most widely used libraries a CA could keep up with. At best a limited CA is probably a partial solution to mix in with some other proposal(s).

tarcieri commented 11 years ago

@nyarly yes, another way of phrasing it is WoT punts on the problem and writes it off as an end user concern. No one else in the system can make mistakes because you're ultimately the arbiter of your own trust relationships.

That said, such a system isn't particularly usable by beginners or even the non-extremely-diligent. A CA centralizes that diligence in one place and lets everyone else reap the CA's expertise. If the CA does a good job the service is useful. If not, well...

nyarly commented 11 years ago

And I guess my primary point is that a system built on WoT trust could have an initial mode where the default "trusted cert" was a root CA. Any user would be able to update that list. And if you designate "trusted to trust" keys, then part of verification would be to check with keyservers for additional "trusted to sign" keys - presumably we'd need something along those lines to let volunteers work the CA in the first place (I can't imagine you're suggesting ~20 volunteers working directly with the root key all the time.)

Built that way, any user can take their own security in their hands and start establishing their own trusts. The best tooling could do at that point would be to warn them of the dangers they're taking on (or advise them to contact Tony if they wanted a lecture on sybil attacks).

But the gain is: if there's a meltdown of the root keys, some users would already have alternatives. It'd be possible to have a multiple-root system, so a single compromise wouldn't be such a nuclear event. Folks who want or need to be on the bleeding edge might be able to sign gems and use them immediately (with the concomitant risks entailed).

And I don't know about anyone else, but frankly I want to be responsible for my own security more than I want to ship it out to someone else. I certainly want to have the choice to do so.

yorickpeterse commented 11 years ago

I reckon we should (at least for now) implement the following: developers can generate certificate details (e.g. gem cert generate or something like that) and have them signed automatically by RubyGems. The public key would then be distributed via RubyGems as well. The signing process would require an email verification link to be clicked in order for it to be completed.

The next time somebody installs a Gem from this author, RubyGems will check whether the Gem is signed and, if so, whether the signature is valid. Since the public key is distributed via RubyGems there shouldn't be any manual work required unless the details of the Gem are invalid (e.g. they've been tampered with).
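
A minimal sketch of that check, assuming the author's certificate has already been fetched through RubyGems and a detached signature travels alongside the .gem file; this is generic OpenSSL usage for illustration, not the exact format RubyGems uses internally:

  require 'openssl'

  author_cert = OpenSSL::X509::Certificate.new(File.read('author-cert.pem'))
  gem_data    = File.binread('example-1.0.0.gem')
  signature   = File.binread('example-1.0.0.gem.sig')

  if author_cert.public_key.verify(OpenSSL::Digest::SHA256.new, signature, gem_data)
    puts 'gem contents match the author signature'
  else
    abort 'signature mismatch, the gem may have been tampered with'
  end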

This approach only guarantees the integrity of Gems; it does not validate whether or not an author is legit. Given the number of people using RubyGems versus the number of people who actually maintain it, the only way of doing this without paying 20-30 people for a full-time job is to use a Web of Trust model, and this just doesn't work with X509. This isn't a huge issue, however, since the primary purpose of the proposed changes is not to verify that "yorickpeterse" is really "yorickpeterse" but that the Gem is what it says it is.

I'd like to at least make a start on this this weekend. Since it will take a while for a feature this big to be implemented it's best if we start as early as possible instead of trying to decide what kind of crazy distributed CA model we want to go with.

mattconnolly commented 11 years ago

I think it's fine to get started on gem signing and verification. As you say, at least you can verify the gem hasn't been modified since the author published it.

The trust rules and decision of whether or not to trust the authenticity of the author's certificate is something that can still be developed....

cheald commented 11 years ago

@YorickPeterse I've already locally prototyped the cert signing concept, as well as integrated it with the existing Rubygems cert chain to get my local install to trust root-signed gems (for a very local value of "root" :)). It's extremely simple conceptually; the big work will be in deciding what to implement it with, so as to diversify the project infrastructure and protect it against a single class of attack being used to compromise multiple pieces of the platform.

The big steps necessary in my view are:

  1. Get Rubygems' existing cert generation to accept more information about the developer. We could even just have it generate CSRs.
  2. Make Rubygems' existing cert generation require a passphrase for private keys by default (a sketch of both steps follows this list)
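
A sketch of what both steps could boil down to in plain OpenSSL terms; the subject fields, file names, and environment variable are illustrative, not what gem cert currently emits:

  require 'openssl'

  # Step 2: a passphrase-protected private key instead of a bare one.
  key        = OpenSSL::PKey::RSA.new(3072)
  passphrase = ENV.fetch('GEM_KEY_PASSPHRASE')
  File.write('gem-private_key.pem',
             key.export(OpenSSL::Cipher.new('aes-256-cbc'), passphrase))
  File.chmod(0o600, 'gem-private_key.pem')

  # Step 1: a CSR carrying richer developer information for review.
  csr = OpenSSL::X509::Request.new
  csr.version    = 0
  csr.subject    = OpenSSL::X509::Name.new([
    ['CN',           'Jane Developer'],
    ['emailAddress', 'jane@example.com'],
    ['O',            'example-gem project']
  ])
  csr.public_key = key.public_key
  csr.sign(key, OpenSSL::Digest::SHA256.new)
  File.write('example-gem.csr', csr.to_pem)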

I think that in order for this to actually be verifiable, you do have to have some kind of knowledge of the history of signatures on a project, because otherwise an attacker can generate a cert, have rubygems sign it, upload his backdoored and validly-signed code, and nobody is the wiser because the signature still checks out. This history is the really critical piece, IMO, since it is what establishes the chain of trust not just back to the root, but across multiple releases in a project.
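
One way to make that history concrete on the client is a small pin file: remember which certificate fingerprint was first seen for a gem and refuse (or at least warn) when a later release is signed by something else. Purely illustrative; the file location and helper name are invented and nothing like this exists in RubyGems today:

  require 'openssl'
  require 'yaml'
  require 'fileutils'

  PIN_FILE = File.expand_path('~/.gem/signature-pins.yml')

  def check_signature_history(gem_name, signer_cert)
    pins        = File.exist?(PIN_FILE) ? (YAML.load_file(PIN_FILE) || {}) : {}
    fingerprint = OpenSSL::Digest::SHA256.hexdigest(signer_cert.to_der)

    if pins.key?(gem_name) && pins[gem_name] != fingerprint
      abort "#{gem_name}: signing certificate changed since the last release " \
            "(#{pins[gem_name]} -> #{fingerprint}), refusing to install"
    end

    pins[gem_name] = fingerprint  # first sighting: trust on first use
    FileUtils.mkdir_p(File.dirname(PIN_FILE))
    File.write(PIN_FILE, pins.to_yaml)
  end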

I'm travelling this week, but I'd love to collaborate on getting a proof of concept up and running this weekend.

nyarly commented 11 years ago

@yorickpeterse - not sure what you mean "WoT doesn't work with X509" http://tools.ietf.org/html/rfc4158#page-10 There is a challenge in determining chains of trust efficiently, though. And the current rubygems security policies assume that chain can be determined at gem push time. (I think, for a number of reasons, that model is broken.)

tarcieri commented 11 years ago

@YorickPeterse your diagram has a problem. This problem (Zooko's triangle):

[image: Zooko's triangle diagram]

(Again, if you haven't already, please read my blog post on this matter)

Who is Github? Who is 37Signals? Who is RubyGems? Per Zooko's triangle, in absence of a central authority, they're numbers. How do we know we have the right numbers?

I can answer that question for this proposal: the "right numbers" are shipped directly with RubyGems, and compromising the RubyGems distribution itself, upon initial install, is the only attack surface for this system in that regard. Once we have the right numbers, we can authenticate all RubyGems upgrades with the right numbers.

Can you explain how we get the right numbers for multiple semi-trusted sources?

Geal commented 11 years ago

I think you are making quite a shortcut there. The "memorable" part of Zooko's triangle doesn't mean that you won't be able to remember some specific trust points.

That doesn't mean that anyone would be able to impersonate Github, 37signals or Rubygems, because they already benefit from broad recognition. In a notary system (and even in a WoT model), anyone attempting a sybil attack would be easily found out.

Getting the right "numbers", or simpler, "names" for the multiple trusted sources would be as easy as distributing them through rubygems (by Linux distributions) or by downloading them from websites like Github's, which we already trust enough to host this whole discussion without tampering with it.

yorickpeterse commented 11 years ago

@tarcieri As mentioned for the time being I feel that we should wait for the specific requirements the RubyGems people have (as mentioned in IRC). I personally also don't see how a central CA would solve anything in terms of credibility, but since I'm more of a WoT guy that's not surprising.

tarcieri commented 11 years ago

@Geal strongly disagree with your opinion of "memorable" in the context of Zooko's triangle (also, FWIW, I actually know the guy, and if you really want clarification I can ask him directly)

What Zooko means by memorable is "Github", "37Signals", or "RubyGems" (i.e. human-meaningful names) versus non-memorable: "txga6qsakmupaibzbout62qpsk647qwdn22fo73wte52pzgvvsya", "s2tceoarr7f7kblwqezfn43doacgrljayi4a6vz5gjc5mq4v5yva", "fvh5bth64dlx3ut2hxcozt4hhfpa4xo7cs6lvv3mhowzu2apc3da" (i.e. Zooko-style Base32 encoding ;)

You're arguing about remembering keyrings. I'm talking about what keyrings you trust in the first place, and how you derive that trust relationship.

Are you specifically tasking 37signals, Github, and RubyGems with securely distributing their keyrings? If so, how does that work, and how do you make sure they're authentic? Must users pick and choose which sources to get their initial keyrings from, all configuration over convention style, and if so, how do they get them securely?

For this proposal, everything is authenticated against the CA's public key, which would be hardcoded directly into the RubyGems distribution. Again, the attack surface is compromising an initial RubyGems install.

The CA would specifically be tasked with security of their private key, which would be their #1 priority. Again, ideally it's kept offline somewhere secure where it can't be stolen, digitally or physically. For the truly paranoid there's Shamir for distributing the key among multiple parties.

Geal commented 11 years ago

@tarcieri Yup, I strongly disagree with what I wrote too, that's why I edited it right after writing it, but it seems github doesn't send the edited version by email.

And to clarify my thoughts on the subject and on Zooko's triangle: with a public-key-based system, we are already in the "secure, global, non-memorable" case. The non-memorable identifiers you are talking about are the keys. The "Rubygems" and "Github" names are just there to create a petname system.

We are not in the case of a naming system here. End users do not care about who wrote what gem. The only important thing is the signature.

About key distribution, I'll answer in #16.

About CA handling, I agree that the CA root should be kept offline, but then you need an intermediate CA for day-to-day operations (i.e. creating certs for developers and publishing revocations). Shamir's scheme is totally impractical for distributed teams, because it would require people to enter their key part into some internet-accessible server. That totally changes its trust model.

pietro commented 11 years ago

Going back to the volunteer CA. I was suggesting that we verify that whoever signed up at rubygems.org with the email 'foo@example.com' owns that email, because that's how we currently "identify" people. Verification of VCS repository ownership, government-issued photo ID, etc. can be added later on if deemed necessary.

pietro commented 11 years ago

BTW, rubygems already distributes the website certs in lib/rubygems/ssl_certs.

postmodern commented 11 years ago

Sorry for coming late into this discussion. A number of high-profile commercial organizations have been compromised and their code-signing certificates used to sign malware (e.g. Adobe and Bit9). Additionally, a number of regional CAs have also been compromised and used to create fraudulent certificates for *.google.com, etc. How would we protect the CA and the developers' signing certs? I don't feel having an isolated CA server would be enough; additional hardening would need to be done (custom kernel with all non-essential drivers disabled? GRSecurity RBAC + PaX?). Developer signing certs would also need password protection (or some other piece of information), or maybe we should encourage developers to use smartcards?

matt-glover commented 11 years ago

Security breaches are inevitable. If the type of attackers you referenced are interested in a ruby gem CA they will almost certainly get in. Focusing on increasingly complex systems and processes makes this solution less feasible to build and maintain. It is probably more valuable to focus on detection and recovery strategies to mitigate those risks.

cheald commented 11 years ago

"Security breaches are inevitable" is defeatist. If we assume nothing can be secured, why are we even here?

We can make such a system reasonably secure by keeping the attack surface of the high-priority targets (cert signing servers) as small and tightly controlled as possible, and we can mitigate the fallout of a successful attack by practicing the principle of least responsibility. The Adobe and Bit9 breaches were carried out because of massive attack surfaces that were not properly controlled; there are no large organizations to control here, no teams of people that need to have signing certificates, and nobody is going to be running a web browser with an outdated Java plugin on the machines that hold the signing certs here.

Regional CAs have not been compromised recently. The only full CA compromise in recent history was DigiNotar; don't confuse the recent problems with CAs issuing bad intermediate certs with an actual breach of the CA itself. That is an entirely different problem and is wholly unrelated to the issue at hand here since we aren't talking about the need for any kind of intermediate CAs. The problem with the SSL CA system is its breadth - an issue which is not even a little bit of a concern here.

Developer keys should be secured in the same way that any personal private key is secured. Password protect it, put it under restrictive permissions, and follow good computing practices. No amount of planning can save stupid from itself.

If we aren't going to trust that it's possible to actually build practically secure systems, then we should just all pack up and go home now. Maybe it's just my naivete, but I don't think that things are anywhere near that bleak.

tarcieri commented 11 years ago

@matt-glover due diligence is hard. let's go shopping... for potentially compromised keyrings on potentially compromised public servers

By the way, that's the entire point of this system. RubyGems.org can be completely compromised and the CA will still detect invalid certificates (although someone can DoS the revocation list)

postmodern commented 11 years ago

Due diligence will raise the bar for attackers. Access-Control (RBAC) and Mitigation (PaX) will raise the bar further; although it's not a silver bullet against all potential vulnerabilities. Detection will reduce the amount of time an attacker can operate against us with detectable actions.

I do not agree with the common logic that you cannot prevent users from being stupid, so why do anything at all. We can surely encourage users not to do stupid things by writing security guides, give warnings for insecure file permissions, ensure that generated private keys are password protected, etc.

@cheald Off the top of my head, an RA (an affiliate of Comodo) also had a server that performed certificate verification compromised. My point was that CAs have to retain a secret while providing a service that relies on that secret. Being well-funded helps, but that alone will not prevent you from being compromised.

pietro commented 11 years ago

https://cacert.org did an audit a few years ago to try to get included in the Firefox CA list. It failed, but the process was documented and is accessible: https://wiki.cacert.org/Audit
Words from the auditor: http://www.iang.org/audit/
The criteria they used: http://rossde.com/CA_review/