Proposal: organize a group of volunteers who review certificate requests for individual projects. The due diligence performed on each certificate application could resemble that of the Freenode Groups System.
Before I even start: the Freenode Groups System has had trouble maintaining enough volunteers to continue operating. They provide a bit of a post-mortem on what went wrong, and it calls into question whether such a system is sustainable with unpaid volunteers alone.
Perhaps a more realistic model would be to seek funding for this system in some form or another, e.g. from large companies (such as the one I work for) who depend on RubyGems for critical infrastructure. With some money, "volunteers" could be paid to work part-time or full-time to review projects and issue certificates.
Problems Solved
Here are the problems a real CA system would solve which are not addressed by the other proposals:
End-to-end integrity of gems with least authority: in a system like this, RubyGems.org is not trusted at all to maintain the integrity of gems. RubyGems.org can be completely compromised, and the system will still automatically detect whether gems have been maliciously modified.
Users do not need to review the key fingerprints of individual gems: In systems like the SSH-style approach, when a user bundles for the first time (on any given machine, unless they're copying their keyrings around) they will be prompted again and again to verify the thumbprints of various gems. Imagine bundling Rails for the first time... a UX strawman:
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing rake (10.0.3)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing i18n (0.6.1)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing multi_json (1.5.0)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing activesupport (3.2.11)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing builder (3.0.4)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing activemodel (3.2.11)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing erubis (2.7.0)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing journey (1.0.4)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing rack (1.4.4)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing rack-cache (1.2)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing rack-test (0.6.2)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing hike (1.2.1)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing tilt (1.3.3)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing sprockets (2.2.2)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing actionpack (3.2.11)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing mime-types (1.19)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing polyglot (0.3.3)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing treetop (1.4.12)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing mail (2.4.4)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing actionmailer (3.2.11)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing arel (3.0.2)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing tzinfo (0.3.35)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing activerecord (3.2.11)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing activeresource (3.2.11)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing bundler (1.2.3)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing coffee-script-source (1.4.0)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing execjs (1.4.0)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing coffee-script (2.2.0)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing rack-ssl (1.3.3)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing json (1.7.6)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing rdoc (3.12)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing thor (0.17.0)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing railties (3.2.11)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing coffee-rails (3.2.2)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing jquery-rails (2.2.0)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing rails (3.2.11)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing sass (3.2.5)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing sass-rails (3.2.6)
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing sqlite3 (1.3.7) with native extensions
WARNING: Key fingerprint (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) for "X" cannot be verified. Do you want to (i)nstall this certificate or (a)bort? [a] i
Installing uglifier (1.3.0)
I'd call this... obtrusive. I think any signature system should provide a mostly transparent user experience, and that is not what we're seeing here. Having a CA review each gem ahead of time centralizes this work at the CA level.
As we can see, the SSH-style model doesn't really "scale" well to a system like RubyGems. Where SSH prompts you once, at the time you connect to a single server, and perhaps you might actually do some due diligence verifying the key thumbprint (doubt it), RubyGems will present you with a wall of questions about whether you trust the certificate for every gem you ever install. In my opinion, this is a terrible user experience, and one that will offer no real security, as users are unlikely to ascertain the validity of each of these key fingerprints.
All an attacker needs to do to bypass the "security" provided by this system is trick you, just once, into accepting a certificate used to sign a malicious gem.
Some have argued that a way to improve this situation is to have trusted companies (e.g. 37Signals) publish their keyrings somewhere, and end users can pick and choose which of these public keyrings they want to import. This has two problems: configuration over convention (users are expected to figure out where to get keyrings on their own, as opposed to shipping a root cert with RubyGems), and we punt on a real trust root in favor of an ad hoc trust system.
Without a real trust root, an attacker could compromise the servers where 37Signals stores its keyring and add a malicious cert. Anyone who downloads the 37Signals keyring after that would implicitly trust the malicious cert. Worse, since the system is ad hoc, there'd be no revocation model for this malicious cert, which leads us to our next problem...
A real revocation model in the event a malicious cert is mistakenly issued
CAs are run by humans. Humans make mistakes. Just as it's inevitable that RubyGems.org will get hacked again, a real CA is likely to approve a malicious certificate at some point. However, this is why pencils have erasers...
A real, trusted CA can publish a certificate revocation list, which should be updated prior to authenticating any gems (e.g. on gem install, or prior to bundling). Once the CRL has been updated, RubyGems can automatically validate whether any gems have been installed using tainted certificates.
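As a rough illustration, here is a minimal sketch of that check using Ruby's standard OpenSSL library. The file names and the single-level chain (gem certificate signed directly by the root) are assumptions made purely for the example; the real logic would live inside RubyGems itself.

    require "openssl"

    # Hypothetical file names; in practice RubyGems would fetch the CRL from the
    # CA and ship the root certificate with the gem client itself.
    root_cert = OpenSSL::X509::Certificate.new(File.read("rubygems-ca-root.pem"))
    crl       = OpenSSL::X509::CRL.new(File.read("rubygems-ca.crl"))
    gem_cert  = OpenSSL::X509::Certificate.new(File.read("gem-signing-cert.pem"))

    # The CRL itself must be signed by the root; otherwise anyone who can tamper
    # with it could hide revocations (or forge new ones).
    abort "CRL signature invalid"        unless crl.verify(root_cert.public_key)

    # The gem's signing certificate must chain to the root (a single-level chain
    # is assumed here)...
    abort "Certificate not issued by CA" unless gem_cert.verify(root_cert.public_key)

    # ...and must not appear on the revocation list.
    if crl.revoked.any? { |r| r.serial == gem_cert.serial }
      abort "Certificate #{gem_cert.serial} has been revoked"
    end

    puts "Certificate OK -- now verify the gem's signature against it"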
Portable, with few (if any) changes to the RubyGems software
RubyGems already has a built-in certificate system. Unfortunately this system lacks a trust root. A real CA could provide one. Since the RubyGems software is pure Ruby and depends on only the OpenSSL library for cryptography, leveraging RubyGems to do the certificate checking provides the least obtrusive, portable solution for validating gems.
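For reference, this is roughly how the existing machinery is wired up today. The gemspec fields and gem cert commands below are the standard RubyGems signing hooks; the certificate and key paths follow common convention (as produced by gem cert --build) and are shown only for illustration.

    # Gem authors already sign their gems today by pointing the gemspec at a
    # certificate chain and private key:
    Gem::Specification.new do |s|
      s.name        = "example"
      s.version     = "1.0.0"
      s.summary     = "Illustrates the existing RubyGems signing hooks"
      s.authors     = ["Example Author"]
      s.files       = ["lib/example.rb"]
      s.cert_chain  = ["certs/gem-public_cert.pem"]
      s.signing_key = File.expand_path("~/.gem/gem-private_key.pem") if $0 =~ /gem\z/
    end

    # End users opt in by trusting certificates and raising the security policy:
    #   gem cert --add gem-public_cert.pem
    #   gem install example -P HighSecurity
    #
    # The missing piece is a trust root: today each user must decide which
    # certificates to `gem cert --add`; a CA-signed chain would let RubyGems
    # ship with a single root certificate instead.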
Operation
I don't want to dive into the specifics of the operation or how volunteers coordinate. It could be as simple as a form submitted to a mailing list of volunteers, or a web site where people fill it out. These implementation details are irrelevant.
The basic operation of the system is as follows:
Developers who wish to publish a gem fill out a form: Developers would submit a certificate signing request which includes information about themselves, about the project (e.g. title, description, etc.), where its source code is hosted, and so on. Developers can also include pieces of their online identity, e.g. their Twitter, Facebook, blog, etc.
Volunteers review the certificate signing requests: Volunteers then look over the certificate signing requests and validate them. This could include visiting the provided links to the project and the people involved and deciding it "seems legit", integrating with a system like GitHub OAuth to ensure people are who they claim to be (at least according to GitHub), and/or asking applicants to commit a large random number to a specific file in the project to demonstrate they control it.
Certificates are issued to those who are deemed legit: Ideally each project would be double-checked by at least two volunteers. Once the requisite number of volunteers have signed off on a project, it can be added to a list of projects to be signed by the root certificate. This signing process should happen completely offline, making it immune to active attackers. There are many ways to keep the CA's private key secure, such as using a scheme like Shamir's Secret Sharing and requiring a group of people to meet in person in order to accomplish any certificate signing; alternatively, the private key can be entrusted to a single individual. Regardless, I would recommend keeping it offline lest it be compromised.
Mistakenly issued certificates are revoked: Let's say someone tricked our volunteers by submitting a project which looks legit, then immediately turned around and released malicious code. In this case, the certificate should be added to the CA's revocation list. This revocation list should be checked each time users install or update gems, so malicious certificates become useless once revoked. The revocation list should also be signed with the root certificate, so that an attacker cannot DoS the system by maliciously altering the revocation list to include legitimate certificates. (A sketch of this certificate lifecycle follows below.)
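To make the lifecycle concrete, here is a minimal sketch of steps 1, 3, and 4 using Ruby's standard OpenSSL library: the developer generates a key and CSR, the offline CA verifies and signs it, and a revocation is later recorded in a root-signed CRL. File names, subject fields, serial numbers, and validity periods are all placeholder assumptions, not a prescription.

    require "openssl"

    ### Step 1: the developer generates a key and a certificate signing request
    dev_key = OpenSSL::PKey::RSA.new(2048)
    csr = OpenSSL::X509::Request.new
    csr.version = 0
    csr.subject = OpenSSL::X509::Name.parse("/CN=somegem/O=Some Gem Project")
    csr.public_key = dev_key.public_key
    csr.sign(dev_key, OpenSSL::Digest::SHA256.new)
    # ...the CSR (plus project and identity details) is what gets submitted for review

    ### Step 3: the offline CA issues a certificate once reviewers sign off
    root_key  = OpenSSL::PKey::RSA.new(File.read("root_key.pem"))        # kept offline
    root_cert = OpenSSL::X509::Certificate.new(File.read("root_cert.pem"))

    raise "CSR signature invalid" unless csr.verify(csr.public_key)

    cert = OpenSSL::X509::Certificate.new
    cert.version     = 2                       # X.509 v3
    cert.serial      = 1234                    # in practice, a counter the CA tracks
    cert.subject     = csr.subject
    cert.issuer      = root_cert.subject
    cert.public_key  = csr.public_key
    cert.not_before  = Time.now
    cert.not_after   = Time.now + 2 * 365 * 24 * 60 * 60   # ~2 years
    cert.sign(root_key, OpenSSL::Digest::SHA256.new)

    ### Step 4: a mistakenly issued certificate is revoked via a root-signed CRL
    revoked = OpenSSL::X509::Revoked.new
    revoked.serial = cert.serial
    revoked.time   = Time.now

    crl = OpenSSL::X509::CRL.new
    crl.version     = 1                        # CRL v2
    crl.issuer      = root_cert.subject
    crl.last_update = Time.now
    crl.next_update = Time.now + 24 * 60 * 60  # clients should refetch daily
    crl.add_revoked(revoked)
    crl.sign(root_key, OpenSSL::Digest::SHA256.new)

    File.write("rubygems-ca.crl", crl.to_pem)  # published for clients to fetch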
Attack surface
Root key compromise: if the private key of the root signing certificate is compromised, all is lost. The entire system must be re-keyed from scratch. Extreme care must be taken with the root key. As mentioned earlier, it should probably be kept offline, somewhere safe (e.g. literally in a safe), and possibly shared among multiple parties using a secret sharing scheme.
Failure to detect malicious parties: a malicious person may still make it through the certificate signing process, via social engineering or other human failures of the process itself. In this case, a malicious person is issued a certificate which they then use to sign malicious gems. The countermeasures this system provides for such an occurrence come in the form of the revocation model. That said, the main threat is as follows...
Legitimate certificate holders can still publish malicious gems: this system does not provide any review process for the gems being published, only for the people publishing them. Once someone has obtained a certificate from the CA, they can use it to publish a malicious gem. The only defense the system has is to revoke their certificate once it's been determined that a particular certificate holder has used it maliciously. In the meantime, the system is still vulnerable.
Compromised revocation list: someone who can DoS the revocation list prevents system users from learning about malicious certificates. The best course of action here is debatable: you can refuse to install gems until the list is available, effectively DoSing RubyGems, or you can allow installation knowing the revocation list couldn't be checked, in which case users may be installing gems signed with revoked certificates. At the very least, it should be possible to prevent malicious manipulation of the revocation list (short of a root key compromise) by signing the CRL with the root certificate; a sketch of that check follows below.
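One way to make that policy concrete, sketched here with Ruby's OpenSSL and Net::HTTP: fetch the CRL, refuse to believe it unless its signature verifies against the root certificate, and treat a stale or unfetchable CRL as a loud warning (or a hard failure, depending on the chosen policy). The URL and file names are hypothetical.

    require "openssl"
    require "net/http"
    require "uri"

    # Hypothetical CRL location and root certificate path. The point is only that
    # the CRL is never believed until its signature checks out against the root,
    # and that a missing or stale CRL is surfaced rather than silently ignored.
    CRL_URL   = URI("https://rubygems-ca.example.org/rubygems-ca.crl")
    root_cert = OpenSSL::X509::Certificate.new(File.read("rubygems-ca-root.pem"))

    begin
      crl = OpenSSL::X509::CRL.new(Net::HTTP.get(CRL_URL))

      unless crl.verify(root_cert.public_key)
        abort "CRL is not signed by the root CA; refusing to trust it"
      end

      if crl.next_update && crl.next_update < Time.now
        abort "CRL is stale; it may be being withheld by an attacker"
      end
    rescue SocketError, Timeout::Error, OpenSSL::X509::CRLError => e
      # Policy decision: fail closed (abort) or warn loudly and continue.
      warn "Could not fetch a valid CRL (#{e.message}); revocations cannot be checked"
    end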
Practical feasibility
This is a system that I don't think a single person could operate. It would require many people, each dedicating a nontrivial amount of time, to review certificate requests, perform due diligence on each one, and either grant or deny a certificate.
I personally have doubts that there are enough people with long-term interest in RubyGems security who would be interested in reviewing certificate requests for the foreseeable future.
If the network of volunteers were to collapse, the result would be a large backlog of projects to review and no one to review them. Unless there are enough people to continue to review these requests, the system will break down.
There are approximately 50,000 gems to date. That's a lot of certificate requests, and the system wouldn't really be useful until every gem has been signed.
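To put that in perspective with a back-of-the-envelope estimate (the per-request figure is an assumption, purely for illustration): if each request takes 15 minutes to review and is double-checked by a second volunteer, 50,000 gems works out to 50,000 x 2 x 0.25 = 25,000 person-hours, or roughly 12 person-years of review effort, before counting gems published in the meantime.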
Is this system practical? It would be if there were a large number of volunteers, or if funding could be secured to pay people to work on this full time. Short of that, I personally have many doubts that such a system is sustainable.