dtauerbach opened this issue 11 years ago
@Hainish yes; additionally, the current security design of SecureDrop does not provide enough security when there are many receivers/journos, so it's important to change the way PGP encryption is done. Think, for example, of a GlobaLeaks-based deployment like PubLeaks (http://www.publeaks.nl) with 28 media partners and 50+ journos. You cannot share a single private PGP key with each of them, because the security risk of sharing one PGP private key among that many people is very high.
Additionally, by moving to a fully JavaScript application it's possible to leverage CORS support: the application is distributed from trusted-place-A while it interacts with leak-site-B. That way the code, including the journalists' encryption keys, can be delivered by a different third party, making the leak site zero-knowledge even in the case of active backdooring. That's the path IMHO we should move forward on.
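As a rough illustration only (the hostnames, endpoint path, and payload shape below are invented, not part of either project), the cross-origin pattern described above boils down to code served by the trusted host posting submissions to the leak site:

```js
// Client app served from trusted-place-A, submitting to leak-site-B over CORS.
// The endpoint and field names here are hypothetical.
async function submitEncrypted(ciphertext) {
  const resp = await fetch('https://leak-site-b.example/api/submissions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // leak-site-B must answer with Access-Control-Allow-Origin: https://trusted-place-a.example
    body: JSON.stringify({ ciphertext: ciphertext })
  });
  return resp.ok;
}
```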
Instead of developing a JavaScript client, IMHO we should merge with GlobaLeaks by leveraging the existing GLClient (http://github.com/globaleaks/GLCLient) and implementing its REST APIs (http://docs.globaleaks.apiary.io) in SecureDrop. That way the open-source whistleblowing community effort will be valued rather than reinventing the wheel :)
Numerous possible implementations were suggested in the course of the above discussion. I would categorize them as follows:
The primary concerns we are trying to balance here are security, usability, and plausible deniability.
Developing a native client to do encryption is great for security, but bad for usability (another program to download and configure) and plausible deniability (only a source would likely have the "Securedrop source client" on their machine).
A host-based Javascript solution is great for usability and plausible deniability, but the security is controversial. Without the Web Crypto API, there are many serious unresolved questions - such as XSS stealing your private keys, side channels, auditability of host-served JS, the inability to protect or securely erase memory... Even if the Web Crypto API were implemented tomorrow, the auditability question would remain.
A browser addon is potentially great for usability (it can do encryption and key management transparently) and security (can use native crypto, is auditable, and is well-separated from content). If it were included in the browser by default, then the plausible deniability problem would be reduced to a user having Tor/the Tor Browser Bundle installed, which we require anyway.
I discussed this with Mike Perry, the lead developer of the Tor Browser Bundle, and am happy to say that he was supportive of the idea of including a Securedrop client encryption addon in the TBB by default. He had several concerns which would need to be addressed, but it is possible for us to develop an addon that could do encryption of submissions, and possibly decryption of replies, transparently on the client. I think that this is the best of our available options, and we should start actively developing this, targeted for a major milestone (maybe 1.0).
Mike's concerns are as follows:
There's a lot of great prior work in this vein (Enigmail, Cryptocat, and others) that we can learn from in order to get started. We will also be able to start developing targeted at the new TBB scheduled to be released in early December, which is based on Firefox 24 and has window.crypto.getRandomValues, the Jetpack API for addons, and lots of other useful improvements.
Thoughts?
@garrettr At GlobaLeaks we evaluated and discussed the topic of "browser addons" in depth, but we found no specific advantage in making our GLClient (a self-contained JS application interacting with GLBackend) into a plugin, for several reasons:
For the reasons previously described, I think that:
So I think the approach should be to improve the Mailvelope software: agree with Thomas on extending it to provide a specific API for the PGP encryption implemented in the plugin, callable from the JavaScript interface provided by the backend platform. Then integrate Mailvelope into the TBB, extending the TBB's security features with client-side PGP encryption.
As Mailvelope was mentioned I want to provide some comments on this.
@garrettr
A browser addon is potentially great for usability (it can do encryption and key management transparently) and security (can use native crypto, is auditable, and is well-separated from content). If it were included in the browser by default, then the plausible deniability problem would be reduced to a user having Tor/the Tor Browser Bundle installed, which we require anyway.
Native crypto sounds like you still have a dependency on an external program like GPG. This was the FireGPG approach. The Mailvelope add-on is based on OpenPGP.js, which means the crypto is also done in JS, but you don't have the vulnerabilities of a hosted solution.
- The addon should not be visible by default, and should not affect Tor Browser Bundle users who are not interested in using Securedrop. We could make it so the addon is only enabled when a user navigates to a Securedrop installation's hidden service.
Mailvelope could be configured like that. For a URL match pattern like [*.onion] a JS object would be injected into the site that provides an API for encryption / decryption. But I think the question is if there is an interest in integrating general PGP services to TBB. If so, then Mailvelope is an option. For the Securedrop use case a small wrapper add-on around OpenPGP.js could be sufficient.
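As a sketch of what page code could do with such an injected object (the object name, method, and key handling below are hypothetical, not Mailvelope's actual interface):

```js
// Hypothetical API object injected by the add-on on pages matching [*.onion].
// Names and signatures are illustrative only.
async function encryptSubmission(plaintext) {
  if (!window.secureDropCrypto) {
    throw new Error('client-side encryption add-on not available');
  }
  // keyId refers to a recipient key the user has already verified and
  // pinned inside the add-on, so the page never handles key material.
  return window.secureDropCrypto.encrypt({
    keyId: 'JOURNALIST_KEY_ID',
    data: plaintext
  });
}
```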
- The addon should not significantly bloat the TBB
For Mailvelope: ~ 1MB
- Of course, the addon will have to be carefully written and audited for security vulnerabilities.
A first audit of Mailvelope is available at http://cure53.de/pentest-report_mailvelope.pdf. A second one is just starting now, and a third is planned for the coming months.
We will also be able to start developing targeted at the new TBB scheduled to be released in early December, which is based on Firefox 24 and has window.crypto.getRandomValues, the Jetpack API for addons
There are currently performance problems when running OpenPGP.js as a Jetpack module: https://bugzilla.mozilla.org/show_bug.cgi?id=916464 According to Mozilla, this is (hopefully) fixed in Firefox 27, which is the planned release target for the official Mailvelope add-on for Firefox. But for a simple use case you can work around this issue by running OpenPGP.js in a page-worker.
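For context, the page-worker workaround looks roughly like this with the Add-on SDK; the file names and message formats are placeholders and this is a sketch, not Mailvelope's actual code:

```js
// Add-on SDK (Jetpack) main module: run OpenPGP.js inside a page-worker
// rather than in the add-on's main compartment, to sidestep the
// performance issue referenced above.
var self = require("sdk/self");
var pageWorker = require("sdk/page-worker").Page({
  contentURL: self.data.url("crypto.html"),            // blank page that loads openpgp.js
  contentScriptFile: self.data.url("crypto-bridge.js") // relays messages over the port
});

function encryptInPageWorker(armoredKey, plaintext, callback) {
  // Simplified: a real implementation would correlate requests and responses.
  pageWorker.port.on("encrypted", callback);
  pageWorker.port.emit("encrypt", { recipientKey: armoredKey, plaintext: plaintext });
}
```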
@fpietrosanti
The backend can always break the encryption by providing keys decided by an attacker. The "effective" security of key management in the encryption process of a browser add-on vs. host-based JavaScript seems to be equal, and thus so does the security level of the encryption process itself.
I'm not sure if the SecureDrop scenario allows keys to be verified. But if this is possible, you can verify the keys once and then store them in the add-on; in the next session the backend provides only the key IDs for encryption.
So i think that the approach should be to improve Mailvelope software, agree with Thomas on extending it to provide some specific API to use PGP encryption implemented into the plugin, to be called from Javascript interface provided by the backend platform
I think that's a good idea. There is already the watchlist configuration of Mailvelope: an option could be added to define match patterns for URLs where Mailvelope provides this API to the website.
@fpietrosanti
The root of trust for key management always sits in the backend
The "effective" security of the key management in the encryption process of browser-addon vs. host-based javascript sounds like to be equal
Nope. The browser addon is auditable, while host-based Javascript is (effectively) not. This enables a number of design choices that can improve security. For example:
@toberndo
Native crypto sounds like you still have a dependency on an external program like GPG.
That is one approach. OpenPGP.js is another. Emscripten-compiled GPG (or libgcrypt, with a thin wrapper for the very limited subset of functionality we require) is yet another. It sounds like you have a lot of expertise in this area and we could learn a lot from you!
But I think the question is if there is an interest in integrating general PGP services to TBB. If so, then Mailvelope is an option. For the Securedrop use case a small wrapper add-on around OpenPGP.js could be sufficient.
While @fpietrosanti makes a good point above about leveraging existing work, I am somewhat reluctant to extend a larger, complex piece of software like Mailvelope to do something outside of its intended use case - especially when, as you say, a small wrapper add-on is probably sufficient. I am not opposed, however, and that is something we could consider further.
Again, our desire to have an addon included by default is to improve plausible deniability. All TBB users > All TBB users with Mailvelope installed (unless it is included by default).
Can Mailvelope encrypt attachments?
@garrettr Let me comment on the two points:
I don't think we can avoid the SecureDrop backend being the root of trust:
2.1 A TOFU mechanism exposes the whistleblower, given the need to write some local data to the browser's LocalStorage, effectively introducing additional risks such as leaving a disk trace that the whistleblower made a submission.
2.2 I don't see MITM risks given the Tor transport. However, I do see the risk of the keys being actively tweaked by the administrator of the SecureDrop backend, which cannot be effectively mitigated other than by relying on a trusted third party. If the SecureDrop backend tweaks the keys, it will acquire the data.
So IMHO the "practical and effective root of trust" always relies on the backend.
@garrettr
While @fpietrosanti makes a good point above about leveraging existing work, I am somewhat reluctant to extend a larger, complex piece of software like Mailvelope to do something outside of its intended use case
Probably it depends on how much key management functionality you will need on the UI side. If you speak of TOFU then keys need to be verified by the user. The pinning of trusted keys and maybe grouping of keys are features that I also would like to see in Mailvelope. So there could be some synergy.
Can Mailvelope encrypt attachments?
This is planned for version 0.9 in Q1/2014. It requires asynchronous encryption with web workers as otherwise the browser stalls for large attachments. Also the current mainline OpenPGP.js does not support this feature, although there are attempts in this area.
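The asynchronous web-worker pattern described above, moving the heavy crypto off the main thread so the browser doesn't stall, looks roughly like this (the worker file name and message shapes are placeholders, and the worker script is assumed to load the PGP library and do the actual encryption):

```js
// Main thread: hand the attachment bytes to a Web Worker so encrypting a
// large file does not block the UI. "pgp-worker.js" is a placeholder.
var worker = new Worker('pgp-worker.js');

worker.onmessage = function (e) {
  var encryptedBytes = e.data;   // ciphertext returned by the worker
  // attach encryptedBytes to the submission form here
};

document.querySelector('input[type=file]').addEventListener('change', function (e) {
  var reader = new FileReader();
  reader.onload = function () {
    // transfer the ArrayBuffer to the worker instead of copying it
    worker.postMessage(reader.result, [reader.result]);
  };
  reader.readAsArrayBuffer(e.target.files[0]);
});
```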
I'm reviving this thread because, based on in-person talks with @garrettr and @dtauerbach, it sounds like we're converging on eventually encrypting data client-side in the browser with the journalist's and/or organization server's public key.
Note that this doesn't require the source to have a private key at all, which simplifies the problem significantly. As long as we're not decrypting replies from the journalist to the source on the client-side, we don't need to worry about storing private keys on the client.
My first-pass implementation at this, until we get a signed extension into the TBB, would be:
// On the SVS (air-gapped), generate an SJCL ElGamal (ECC) key pair; keypairCurve
// names the curve (e.g. 256 for NIST P-256) and 0 is the paranoia level.
var keypair = sjcl.ecc.elGamal.generateKeys(keypairCurve, 0);
var pubKey = JSON.stringify(keypair.pub.serialize());
Then we'd export pubKey on a USB stick to the app server, insert it into config.py, and deliver it in the client encryption javascript file (let's call it encrypt.js for now).
On the client:
// In encrypt.js on the client: encrypt the submission to the SVS public key
// (pubKey as delivered above) before it is sent to the source server.
var ciphertext = sjcl.encrypt(pubKey, plaintext, options);
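Fleshing that out slightly, encrypt.js might look something like the following; the form and field IDs are placeholders, and exactly how the serialized key is turned back into an SJCL key object depends on the SJCL version, so treat this as a sketch:

```js
// encrypt.js (sketch). pubKey is the JSON-serialized SVS public key shipped
// with the page via config.py, as described above.
var svsKey = sjcl.ecc.deserialize(JSON.parse(pubKey)); // assumes a deserialize helper is available

// Replace the plaintext with SJCL ciphertext before the form leaves the browser.
// Element IDs are placeholders.
document.getElementById('submission-form').addEventListener('submit', function () {
  var field = document.getElementById('message');
  field.value = sjcl.encrypt(svsKey, field.value);
});
```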
Assuming others agree that we can allow javascript to run on that particular page, I think we should go ahead and implement this, given that the TBB browser extension might not happen for a while.
As @garrettr and @Hainish have pointed out, the approach above doesn't solve the problem that host-served javascript can be compromised more easily than a TBB-included browser extension. However, it's still a significant improvement over the current design because client-side js can be audited by anyone, not just people who have access to the app server.
Here are some ways I can think of to make host-served javascript more easily auditable:
Perhaps related to this discussion, and speaking to the certificate authority issue(s): any thoughts on the use of TACK?
@ABISprotocol TACK is cool, but unrelated to this discussion.
@diracdeltas Lots of good thoughts there. A couple of issues, though:
One idea might be to do host-served crypto, but make the first draft of the TBB add-on do the verification. That is quite straightforward and should be easy to write and get approved for inclusion. There are a lot of design possibilities here and this needs to be discussed in depth.
@garrettr
1. Right, but my plan was to do SJCL decryption in the browser on the SVS so we can make the UI in HTML + CSS. I'm also assuming the SJCL-generated key isn't GPG-compatible anyway, so we'd have to have the server admin make two keys. Maybe in the future we'll decide to drop GPG, because it's a somewhat overly complex piece of software for our needs.
Regarding a TBB extension for verification, I had the same idea. It's nice because that extension would be useful for any website that delivers files that shouldn't change very often.
I'd assumed that the other useful part of having an extension is the extension's access to local storage for the user's private keys.
What client-side JS crypto library should we use?
Here's a post in favor of SJCL: https://github.com/SpiderOak/crypton/issues/26
@diracdeltas SJCL could be a good choice. Again, for performance it might be good to use asm.js (Emscripten-compiled native libraries). This blog post is another good take on "what library should we use to do crypto in the browser?"
Generally, I think it's most important that we first define a generic API that can be utilized by a variety of clients (browser plugins, native desktop or mobile apps, etc.) A design document for such an API, and the accompanying protocol, is in progress.
Yesterday at FOSDEM I had a chat with @tasn, who has written a nice browser extension that verifies the PGP signature of web pages. This is something that is worth experimenting with in the context of SecureDrop. In SecureDrop releases, we'd ship JavaScript that encrypts submissions client-side to the instance's public key, and this js code would be signed with the SecureDrop release key. The browser extension would verify the sig and only execute the JavaScript if the signature verifies (we'd need the release key baked into the extension). We'd fall back to server-side crypto for sources that have JavaScript turned off entirely. We'd also (eventually) need to get this browser extension bundled into Tor Browser.
This doesn't address the problem that a malicious server can replace the submission key on the server with an attacker controlled one, though we can detect this using OSSEC, and alert on the replacement of the key such that admins can respond. This would be a significant improvement over the current situation, where a very careful attacker (i.e. careful not to trigger any OSSEC alerts) that is able to compromise the application server can read submissions from memory without being detected.
@redshiftzero covered almost everything, I just have a few comments.
The extension verifies user-controlled websites. This means that users can add website + pubkey combinations as they please. I plan on adding a preloaded list of trusted services and their corresponding keys, and would love to add securedrop once you are ready.
You probably know better if it makes sense, but in my mind I see two alternative ways of using this extension with securedrop. The first is you signing your HTML (for the extension), and instances, e.g. NYT, just upload it as is. This is the easiest solution, and will let users verify the code is really from securedrop. An alternative solution would be to have your instances (again, e.g. NYT) sign the HTML themselves (for the extension) with their pubkey already embedded in the HTML (or an external JS file), which would solve the malicious-server key-verification issue you just raised. Yet another solution, which is not currently implemented, would be to verify requests other than just the main HTML (everything else is verified by the browser using SRI). For example, the extension could try to verify XHRs too (to paths that match), and thus be able to verify a configuration JSON, for example.
I understand you'd like to use this extension in order to support client-side encryption, which is great, and what the extension was made for (I created it for EteSync's web client), but I think you could already benefit from it, given the sensitive nature of the project. For example, attackers with the ability to modify files on the server (but not sniff transport) could at the moment change the form's target to a server controlled by them and steal data this way. This extension will prevent that.
If there's anything I can do to help with integrating the extension, or if you have any suggestions or queries regarding the extensions, please let me know.
Added "prototyping" to title to clarify that's what we're committing to for now.
Looking towards the future, ECDH key pairs could be generated in the trusted crypto VM on the Qubes Reading Room Workstation (RR). A large number of ECDH public keys, all signed by the long-term ECDSA identity key of the RR, are sent to the server through a networked VM (a series of them, really).
Clients get served unique* ECDH public keys, verify the signature over them, derive a shared symmetric key for AEAD, and use that to encrypt a document/message. The client then uploads a tuple (iv, ciphertext, tag, pub_client, pub_server, sig), where:
- (iv, ciphertext, tag) are as in standard AEAD schemes.
- pub_client is the public ECDH key the client generates when deriving the shared symmetric key.
- pub_server is the public ECDH key the server served the client, which was generated by the RR (for easy lookup of the corresponding private key on the RR).
- sig is a signature by the ECDSA key derived client-side from the source codename over all the other values. This helps defend against certain attacks, like some types of replay attacks, and allows the RR client to definitively link submissions from the same source, while preventing the server from linking submissions sent over different Tor circuits (the long-term source ECDSA public key need only be sent once, encrypted, to the RR).

So this straightforward hybrid-encryption scheme provides forward secrecy and a measure of sender unlinkability, and the crypto is pretty straightforward to implement. Honestly, the harder part of the implementation will be due to the complicated security architecture of SD, where instead of client-server we're dealing with CryptoVM-NetworkedVM-server-client.
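To make the client half of this concrete, here is a minimal sketch using the Web Crypto API, with P-256 and AES-256-GCM chosen arbitrarily and the codename-derived ECDSA signature over the tuple omitted:

```js
// Sketch of the client side of the hybrid scheme above (Web Crypto API).
// Curve/cipher choices are illustrative; sig over the tuple is omitted.
async function encryptForRR(serverPubRaw, documentBytes) {
  // serverPubRaw: raw ECDH public key served by the server; its signature by
  // the RR's long-term ECDSA identity key is assumed to be verified already.
  const serverPub = await crypto.subtle.importKey(
    'raw', serverPubRaw, { name: 'ECDH', namedCurve: 'P-256' }, false, []);

  // Ephemeral client ECDH key pair (pub_client in the tuple).
  const client = await crypto.subtle.generateKey(
    { name: 'ECDH', namedCurve: 'P-256' }, false, ['deriveKey']);

  // Shared symmetric key for AEAD.
  const aeadKey = await crypto.subtle.deriveKey(
    { name: 'ECDH', public: serverPub }, client.privateKey,
    { name: 'AES-GCM', length: 256 }, false, ['encrypt']);

  // AES-GCM: Web Crypto appends the tag to the ciphertext.
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertextAndTag = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv: iv }, aeadKey, documentBytes);

  const pubClient = await crypto.subtle.exportKey('raw', client.publicKey);
  // Upload tuple: (iv, ciphertext+tag, pub_client, pub_server[, sig]).
  return { iv: iv, ciphertextAndTag: ciphertextAndTag, pubClient: pubClient, pubServer: serverPubRaw };
}
```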
I glossed over some finer details here, including how to achieve forward secrecy for replies and how it might be possible to likewise add a measure of receiver unlinkability (although that seems harder), but would be happy to flesh this out more and even write a formal spec that I could have smarter cryptographers than I help with/verify, if the SD team is ever serious about implementing this. The above builds on some of the ideas in #3281.
Cross-referencing: "An End-to-End Encryption Scheme for SecureDrop" (May 2018 student course paper). Unfortunately I wasn't able to find a repo for the example extension code referenced in the paper.
Hey, I've been thinking about this ticket, and am definitely part of the "javascript is risky to enable" camp. I can definitely see some promise with using a plugin that can validate that the javascript code is signed, but would still prefer having a self contained browser plugin. tasn/redshiftzero/others brought up a few ways the js signing could work, sorry for rehashing statements.
1) Each deployment will have its own key for clients to encrypt with and this needs to be baked into a request somewhere so the client knows what that key is. Users will copy this into the plugin before running the encrypted upload functionality. This is the basis of the paper that eloquence listed, and probably the ideal way to go if the PGP js verification is how it ends up working.
2) The plugin "pins" keys and acts as a trusted authority. This requires Freedom of Press/someone to load public keys for each new SD deployment into the plugin. I can't think of a way to do that without being a huge hassle, and would require trust of the third party who owns the plugin.
3) One method that could make 2) successful is if there is some additional infrastructure where a SD deployment could have their code+key signed by a Freedom of Press master key before launching. Thus only one key is needed to be baked into the plugin to validate the js crypto and it supports signing the encrypted files with different keys. Is that something that is reasonable for Freedom of the Press to provide? It does add some overhead on infrastructure to support this, so I could understand not being able to support it. (Whether for technical or legal reasonings)
As stated earlier, a generic browser plugin that does the encryption for you might be the best option. The user doesn't need to enable any javascript, and the largest risk I can see here is a MITM replacing the public key sent to the user with one the attacker controls (via a server compromise). This does not place them in any worse scenario than they are in with the current setup, and requires an active attacker. Redshiftzero mentioned earlier that this could potentially be monitored with OSSEC controls.
Is there any reason against just having a standalone "PGP encrypt file" browser plugin that I missed? It should be generic enough that it wouldn't be SD-specific, and it provides all the functionality without having to figure out how to manage code signing across deployments.
What do you mean by a standalone "PGP encrypt file"? If you mean just a generic browser extension that validates generic pages using normal PGP signatures with normal PGP keys, that's what Signed Pages is (the extension mentioned previously in this thread).
@tasn in this case PGP would be used to encrypt files before uploading them.
@zenmonkeykstop, oops, thanks for the clarification. I can see the confusion now upon re-reading the thread. I thought he was talking about the signature verification, but instead what he was talking about is having a plugin that encrypts the files being uploaded before they even hit the page. Sorry for the noise.
As for the comment: it looks like this solves the uploading of the files problem quite well, but I think there's still value in verifying the integrity of the page to prevent the running of unapproved javascript that could be used for e.g. fingerprinting.
One other point that just occurred to me about client-side encryption of submissions, is that server-side submissions are gzipped before gpg encryption - I'd imagine to ease the pain of large file transfers over Tor. As HTTP compression isn't going to help with gpg-encrypted files, a client-side solution is either going to have to do something similar or deal with said pain.
(This is a minor detail compared to stuff above, obvs.)
@lev-csouffrant Any update on your prototyping work? I see the public repo at https://github.com/lev-csouffrant/Uplocker , should we consider that the final state of your prototyping effort, or are you still planning to do further work on it? Thanks :)
Hey eloquence, yeah that prototype is in a final-ish state, and we will see how much free time I can put into updating the last few important pieces (i.e. testing and packaging). Otherwise, it works as a proof of concept for now as a browser plugin that encrypts files via a PGP key (passed to the plugin via an HTML meta tag). I also handled compressing the files before encrypting them, as @zenmonkeykstop suggested. Compressing encrypted files is not going to help much, so if there is going to be any compression it should probably be done before the encryption phase occurs.
One thing I am worried about after writing this is the memory usage for files. You need one copy to be stored in memory for the encryption to run on (there's a streaming-file capability, but it didn't look like it was supported in the version of Firefox the Tor Browser uses). The encrypted file will also need a copy in memory. If compression is going to be supported, that is a third copy that will be stored. Additionally, you need to transfer it from a content script to the background script due to limitations on what each portion can run. I used a transferable object to pass between them, which should mean that it is not putting a fourth copy into memory...
With 500MB max per file, that means at minimum it will probably need 1-2GB of memory just for the file itself because of all of this. Maybe someone smarter at js stuff can chip in on whether there's a better way to handle this?
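For illustration, the flow described above (public key from an HTML meta tag, compress, then encrypt) could look roughly like the sketch below. The library choices, the meta tag name, and the v5-style OpenPGP.js API are assumptions rather than what the Uplocker prototype necessarily uses, and this naive version keeps every copy in memory, which is exactly the concern raised above:

```js
// Sketch only: read the instance key from a meta tag, gzip the file bytes,
// then PGP-encrypt them. Tag name, pako, and the OpenPGP.js v5-style calls
// are assumptions, not necessarily what the prototype does.
async function compressAndEncrypt(fileBytes) {
  const armoredKey = document
    .querySelector('meta[name="securedrop-public-key"]')  // hypothetical tag name
    .getAttribute('content');
  const publicKey = await openpgp.readKey({ armoredKey: armoredKey });

  const compressed = pako.gzip(new Uint8Array(fileBytes)); // compress before encrypting
  const message = await openpgp.createMessage({ binary: compressed });
  return openpgp.encrypt({
    message: message,
    encryptionKeys: publicKey,
    format: 'binary'                                       // skip ASCII armor to save memory
  });
}
```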
Right now, as I understand it, the source uploads a sensitive document, that document is sent over Tor to the hidden service running on the source server, the source server encrypts the document, and it is only decrypted on the SVS. This means that if the source server is somehow compromised, an attacker could recover the plaintext of the document before it is encrypted.
Channeling some of the feedback from Patrick Ball at the techno-activism event tonight, it might make sense to instead encrypt on the client with the public key of the system. That way, if the source server is compromised, the data will still be protected so long as the SVS is secure, and the SVS has a higher security model than the source server.
The way that was suggested to accomplish this is via a browser extension, or baking keys into the browser. In addition to being a lot of work, this brings up the whole can of worms that comes with key distribution (e.g. does the browser extension/patch serve as a CA?).
In the shorter term, one could just provide the public key with Javascript, and encrypt the document using it before sending it to the source server. There are two issues I see with this: first, adding Javascript may open up an attack vector if no Javascript is being used right now. Second, the attacker we've presumed to have control of the source server could modify the Javascript to include a different public key. The second problem I think is solvable with a super basic browser add-on or something that detects when a client sees unexpected Javascript. Not all clients have to run this. Given the attacker does not know who has submitted documents, she must attack everyone to attack her target. That means even if a small percentage of people run the testing add-on, it will still make an effective attack (against everyone) detectable.
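A minimal version of that "testing add-on" idea might just hash the JavaScript the server actually delivered and compare it against a hash published out of band; the URL and expected hash below are placeholders:

```js
// Sketch: detect unexpected Javascript by hashing the served encrypt.js and
// comparing it to a hash published out of band. Values are placeholders.
const EXPECTED_SHA256 = 'publish-this-hash-out-of-band';

async function checkServedScript(scriptUrl) {
  const body = await (await fetch(scriptUrl)).arrayBuffer();
  const digest = await crypto.subtle.digest('SHA-256', body);
  const hex = Array.from(new Uint8Array(digest))
    .map(function (b) { return b.toString(16).padStart(2, '0'); })
    .join('');
  if (hex !== EXPECTED_SHA256) {
    // the host served modified JavaScript: alert the user / report it
    console.warn('Unexpected encrypt.js served:', hex);
  }
}
```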
[There should be a separate bug for if and how to move the conversation with the journalist to use a somewhat similar client-side approach.]