freedomofpress / securedrop

GitHub repository for the SecureDrop whistleblower platform. Do not submit tips here!
https://securedrop.org/

Prototype encrypting data client-side with the system's public key #92

Open dtauerbach opened 11 years ago

dtauerbach commented 11 years ago

Right now, as I understand it, the source uploads a sensitive document, which is sent over Tor to the hidden service running on the source server; that server encrypts the document, and it is only decrypted on the SVS. This means that if the source server is somehow compromised, an attacker could recover the plaintext of the document before it is encrypted.

Channeling some of the feedback from Patrick Ball at the techno-activism event tonight, it might make sense to instead encrypt on the client with the public key of the system. That way, if the source server is compromised, the data will still be protected so long as the SVS is secure, and the SVS has a stronger security model than the source server.

The way that was suggested to accomplish this is via a browser extension, or baking keys into the browser. In addition to being a lot of work, this brings up the whole can of worms that comes with key distribution (e.g. does the browser extension/patch serve as a CA?)

In the shorter term, one could just provide the public key with Javascript, and encrypt the document using it before sending it to the source server. There are two issues I see with this: first, adding Javascript may open up an attack vector if no Javascript is being used right now. Second, the attacker we've presumed to have control of the source server could modify the Javascript to include a different public key. The second problem I think is solvable with a super basic browser add-on or something that detects when a client sees unexpected Javascript. Not all clients have to run this. Given the attacker does not know who has submitted documents, she must attack everyone to attack her target. That means even if a small percentage of people run the testing add-on, it will still make an effective attack (against everyone) detectable.
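For concreteness, a minimal sketch of the check such a testing add-on could perform, assuming the expected script's SHA-256 hash has been pinned ahead of time (the URL and hash below are placeholders, and the WebCrypto API is a modern stand-in for whatever the add-on would actually use):

```js
// Sketch: verify that the script served by the source server matches a
// pinned hash before trusting it. EXPECTED_HASH and SCRIPT_URL are
// hypothetical placeholder values.
const EXPECTED_HASH = "9f86d081884c7d65..."; // pinned SHA-256, hex
const SCRIPT_URL = "http://example.onion/static/submit.js";

async function auditServedScript() {
  const resp = await fetch(SCRIPT_URL);
  const body = await resp.arrayBuffer();
  const digest = await crypto.subtle.digest("SHA-256", body);
  const hex = Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
  if (hex !== EXPECTED_HASH) {
    // Unexpected Javascript: raise a red flag rather than running it.
    console.warn("Served script does not match pinned hash:", hex);
    return false;
  }
  return true;
}
```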

[There should be a separate bug for if and how to move the conversation with the journalist to use a somewhat similar client-side approach.]

fpietrosanti commented 11 years ago

I feel that the security model of encryption with the node keys is not the best one, and SD should switch to the same approach used by GlobaLeaks: encrypting with the recipients' keys. Anyhow, leveraging OpenPGP.js is a good strategy; I'm following the project, and in the past year it has improved a lot! Adding client-side crypto with server-provided keys will add a bit of perfect forward secrecy to the communication exchange, but it does need Javascript on the submission interface. In GlobaLeaks the submission client is fully JS, but I don't know if it is acceptable in the SD threat model to use Javascript on the submission interface.

micahflee commented 11 years ago

Oops, didn't mean to close this!

klpwired commented 11 years ago

Just say no to Javascript crypto.

If the server is compromised to capture plaintext documents, it could just as easily be compromised to corrupt the javascript crypto code served to the source. So the gains are illusory. In the meantime, you'd be forcing (or at least encouraging) sources to turn off NoScript, making them vastly more vulnerable to Freedom Hosting-style malware.

fpietrosanti commented 11 years ago

@klpwired The Javascript crypto gives you certain value related to PFS with respect to the real risk context.

So, Javascript crypto is valuable provided that you properly assess the kind of protection it will provide.

Anyhow, you must consider that in SD the default browser to be used is the Tor Browser Bundle. The Tor Browser Bundle has Javascript enabled by default, and I expect that no whistleblower would ever change the default configuration.

If you like to keep the philosophical choice of "keeping Javascript off", I just agree to disagree :-)

diracdeltas commented 11 years ago

@klpwired @dtauerbach @fpietrosanti We're already using Javascript on the source interface (jQuery).

@klpwired Re: "If the server is compromised to capture plaintext documents, it could just as easily be compromised to corrupt the javascript crypto code served to the source," I don't agree because presumably we would perform the client-side javascript crypto in a browser extension, which the client would have to download [1] before using the source website. This actually provides extra security against server corruption, because the client would have the ability to check the source code of the extension and make sure their documents are actually being encrypted. You can essentially think of the browser extension as an OS-independent, user-friendly front-end to OpenPGP. It could be as safe as using GPG offline if the user doesn't have malware in their browser.

(thanks to @Hainish for having this conversation with me last night and bringing up some of these points.)

[1] Either we could bundle the extension with the Tor browser bundle, or the client could download it over Tor separately as a signed, deterministically-built package. We need to be careful that simply having the extension in your browser doesn't single you out as a securedrop source!
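As an illustration of how little such an extension would need to do, here is a minimal sketch of in-browser encryption to the system's public key using the modern OpenPGP.js API (the function and key names here are hypothetical, and the 2013-era API differed):

```js
// Sketch using the modern OpenPGP.js API (v5+); names are hypothetical.
import * as openpgp from "openpgp";

async function encryptForSubmission(file, armoredPublicKey) {
  const publicKey = await openpgp.readKey({ armoredKey: armoredPublicKey });
  const data = new Uint8Array(await file.arrayBuffer());
  // Encrypt the document to the system's public key before upload;
  // only the SVS, which holds the private key, can decrypt it.
  const encrypted = await openpgp.encrypt({
    message: await openpgp.createMessage({ binary: data }),
    encryptionKeys: publicKey,
    format: "binary",
  });
  return new Blob([encrypted]);
}
```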

dtauerbach commented 11 years ago

I agree that there are dangers to turning on host-served Javascript and using Javascript crypto libraries. But I think the analysis deserves a more nuanced treatment. In particular, host-served Javascript can be compromised, but is also auditable. Suppose an attacker has compromised the source server, and can send malicious Javascript. If the client gets anything except the expected Javascript, it has the opportunity to raise a red flag and fail, or, perhaps more importantly, detect that the server has been compromised. It is much more difficult for the attacker to target particular individuals given that TBB is being used, so even if a handful of clients are doing this auditing due diligence, this raises the cost of serving malicious Javascript quite a lot. On the other hand, if the encryption happens server-side, then the attacker who has compromised the source server (but not the SVS) will simply have plaintext access to the documents and not have to raise extra audit flags by serving malicious Javascript.

There are serious downsides of course:

  1. We would be encouraging sources to use Javascript on this page, but should be discouraging the use of Javascript as much as possible.
  2. We would be relying on the security of a Javascript crypto library which may have serious vulnerabilities and operates in a totally insecure runtime environment. But, taking a closer look, the only function the library is serving is to encrypt to a public key. Let's put a pin in the issue of whether or not this encryption happens correctly, as that is discussed below. The form submission already contains data that we should assume is malicious. Moreover, any outside Javascript that could affect this page (for example, through a malicious add-on that the TBB users install) will most likely be able to affect the form submission too, and could, say, swap out the real file the source wants to submit with a malicious one. Still, there may be a narrow class of browser exploits that give access to another page's Javascript runtime, but NOT to the DOM of the other page. Moving to Javascript introduces an attack vector here.
  3. There is more of a chance for things to go wrong. Security issues aside, if the encryption happens incorrectly -- say, due to an add-on a source has that interferes with OpenPGP.js -- then the source will think she has submitted a document, and only when it gets to the SVS will the journalist realize that it cannot be read.
  4. Encrypting large documents may take significant time, which is another barrier that raises the cost to the source of submitting a document and makes a submission less likely.
  5. This adds complexity to the client, which we want to be as simple as possible.

I'd suggest it's worth thinking through carefully. Empirical data could be gathered about downsides 1-4 in order to weigh them against the upside. Client-side encryption provides a major benefit, and makes the increased security of the air-gapped SVS much more significant. And a longer term solution to consider would be to create a browser add-on that ships with TBB. That way the Javascript isn't host-served, but there's still a question of how the public keys of the SecureDrop sites get into the add-on: the host could send the public key, but there would have to be some way to establish trust in that key.

klpwired commented 11 years ago

Well, not to strip the nuance away, but unauthorized plaintext access to documents being leaked to a journalist for publication is not the primary threat. De-anonymization of the source is. Making the system Javascript-dependent increases the risk to the source's anonymity in order to provide (again, illusory) gains in document confidentiality, a distant second in importance.

Taipo commented 11 years ago

Tor Browser is not the only way people will be using SD either: Tor2web users could well be using Chrome or, even worse, Internet Explorer to access an SD. You will also have sources using throwaway internet-ready cell phones, which can access through either Tor2web or Orbot.

While I am not averse to the use of Javascript for some front-end functions, there are social issues with encrypting client-side with Javascript, even more so when an extension has to be added on to do so.

Consider that the scope of potential sources ranges from technophobes to Snowdens. Then ask these two rhetorical questions:

Could an Edward Snowden type whistleblower (or in fact anyone who has read the leaks concerning EGOTISTICALGIRAFFE) be put off using a dead drop system that employed client-side Javascript to encrypt files, knowing that the vast majority of NSA efforts are focused on browser hijacking of the Firefox shipped with Tor? (In fact, anyone in contact with him could very well ask the real Snowden his thoughts on this issue.)

Could a technophobe be put off by the extra step of having to manually install the extension, or at least the public key, rather than being presented with the common select-file field all computer users have become accustomed to?

Source de-anonymisation is the number one threat if it comes down to a weighing exercise.

fpietrosanti commented 11 years ago

@Taipo In the SD threat model Tor2web is not contemplated; it is in the GlobaLeaks one. We need to see what the decision will be regarding #43, but I expect that, following SD philosophy, there will be no compromise. Please consider that most whistleblowers are technologically unskilled and a little bit dumb, so the main effort is to try to protect them from their own mistakes, not from the NSA.

@klpwired If de-anonymization of the source is the main risk, then you need to have a very usable user interface, with super-strong-and-useful awareness information. To do so, you will need a fancy UI with some major JS framework and a proper usability study made by UX design experts based on emotional-design concepts. Social risks are much more relevant than technological risks, IMHO.

Taipo commented 11 years ago

@fpietrosanti My point about Tor2web is that it allows a user to access an SD using a wider variety of web browsers than Firefox, so any GPG encryption extension would need to be available across a much wider range of browsers, or else browser brand restrictions would be needed. I agree with you about technologically unskilled whistleblowers. That is basically what a 'technophobe' is; it's a slang word for the same thing. My apologies for the language-barrier issues (perhaps).

Hainish commented 11 years ago

I've been having this conversation on the securedrop-dev mailing list; I've copied my conversation with Patrick Ball:

Date: Mon, 21 Oct 2013 19:28:51 -0700
From: Patrick Ball <pball@hrdag.org>
To: bill@eff.org, Seth David Schoen <schoen@eff.org>, Micah Lee <micahflee@riseup.net>
Subject: SecureDrop
X-Mailer: MailMate (1.7r3790)

hi Seth, Bill, and Micah,

My concern is essentially the same as the audit's final bullet in 3.4. In short, this doesn't look to me much safer than HushMail or any other host-based approach. If you can compromise the DD Source Server, the content of the message (but not the source's communications metadata, thanks to Tor) would be exposed to the attacker.

The solution that seems to me safest to the host-based-attack I proposed in the discussion tonight is to move the source's encryption into the human source's browser. The guy at the discussion tonight who has hacked on OpenPGP.js said that you have to secure the whole javascript stack, and that's true, but:

If the server has to inject evil javascript in order to compromise encryption done in OpenPGP.js implementation in the Tor browser bundle, then the evil javascript gets exposed to every visitor. That makes the evil javascript at least potentially detectable. I think it's a big win to force the attack to be visible (even if heavily obfuscated) to the user -- as opposed to being completely invisible by evil code running deep in the server.

I think that encrypted public and private keys could be stored on the DD Source Server if all the encryption and decryption -- of keys and of content -- happened on the Human Source's computer. This way the source can deny that she is interacting with SecureDrop.

Danny's point that Tor doesn't want to implement anything special for you or for anyone is a good and important point. However, I would think you can finesse this by asking the Tor browser people to include in the browser basically generic crypto tools that could be used for any host-based crypto system. That would include a fairly obvious API, including the encryption/decryption parts, including potentially some way to audit for at least some kinds of evil code. We can then play whack-a-mole with evil code.

I had a long conversation with Ben Adida about this a couple of years ago, and he concluded then that it's impossible to completely secure. This said, I still think that it might be possible to move the attack into a visible place.

hope this helps -- PB.


Patrick Ball Executive Director, Human Rights Data Analysis Group https://hrdag.org

Hainish commented 11 years ago

Date: Mon, 21 Oct 2013 21:32:48 -0700
From: William Budington <bill@eff.org>
To: Patrick Ball <pball@hrdag.org>
Cc: Seth David Schoen <schoen@eff.org>, Micah Lee <micahflee@riseup.net>
Subject: Re: SecureDrop
User-Agent: Mutt/1.5.21 (2010-09-15)

Hey Patrick,

I definitely like the idea of the encryption being done on the client side.
The problem with Hushmail wasn't that it was doing encryption on the server side. Hushmail was actually doing encryption on the client side, but with a Java application rather than javascript. The problem was that in the delivery of this application, it was including modified code to certain target IPs. Since we would be delivering the application to anonymized clients via the Tor Browser Bundle, an insertion of malicious code could not be targeted at certain IPs, and would be forced to be a blanket delivery, thus risking exposure. Of course, a carefully timed attack could still be performed, but this would require knowing when the source was going to log on and performing the attack in a very narrow timeframe to reduce the risk of exposure.

But just because an attack is exposable doesn't mean that it will be exposed. It is unlikely that for all delivered instances of the code, someone will do anything approaching a security audit. The only way I can see to prevent this is actually having a browser extension that is versioned and signed by a trusted source. Of course Tor has apprehensions about accepting browser plugins liberally, which is understandable, but they may be inclined to make an exception in the case of SecureDrop. But regardless, I feel that the entire application would have to live within the extension, not just certain cryptographic primitives that are exposed through the browser. This has the same problem as HushMail.

The current problem is actually that the version of Firefox that the Tor Browser Bundle uses cannot be used for cryptographic purposes, since it (Firefox ESR 17.0.9 at the time of writing) does not provide access to the newer API for random values in the browser, window.crypto.getRandomValues.
As you said, this shouldn't be a showstopper and we should start development on a browser application anyway, in preparation for the day Tor does start working with a newer version of Firefox or Chrome, and I agree with that.

Bill
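The API gap Bill describes is easy to feature-detect, so a client-side implementation could fall back gracefully on older ESR builds; a minimal sketch of that guard:

```js
// Sketch: refuse to do client-side crypto without a secure RNG.
// Firefox ESR 17 (used by TBB at the time) lacked
// window.crypto.getRandomValues.
function hasSecureRandom() {
  return (
    typeof window !== "undefined" &&
    !!window.crypto &&
    typeof window.crypto.getRandomValues === "function"
  );
}

if (!hasSecureRandom()) {
  // Fall back to the plain non-JS submission path rather than
  // substituting a weak PRNG such as Math.random().
  console.warn("No secure RNG available; client-side crypto disabled");
}
```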

Hainish commented 11 years ago

Date: Tue, 22 Oct 2013 09:49:55 -0700
From: Patrick Ball <pball@hrdag.org>
To: William Budington <bill@eff.org>
Cc: Seth David Schoen <schoen@eff.org>, Micah Lee <micahflee@riseup.net>
Subject: Re: SecureDrop
X-Mailer: MailMate (1.7r3790)

hi Bill,

first off, yes, certainly you may use anything from this thread in any way that might benefit SecureDrop. More inline:

On 21 Oct 2013, at 21:32, William Budington wrote:

Hey Patrick,

I definitely like the idea of the encryption being done on the client side. The problem with Hushmail wasn't that it was doing encryption on the server side. Hushmail was actually doing encryption on the client side, but with a Java application rather than javascript. The problem was that in the delivery of this application, it was including modified code to certain target IPs. Since we would be delivering the application to anonymized clients via the Tor Browser Bundle, an insertion of malicious code could not be targeted at certain IPs, and would be forced to be a blanket delivery, thus risking exposure. Of course, a carefully timed attack could still be performed, but this would require knowing when the source was going to log on and performing the attack in a very narrow timeframe to reduce the risk of exposure.

I know that Hushmail was encrypting client side, but by "host-based approach," I specifically mean any attack that a compromised server can direct at an identifiable user. I whined about this a lot in an article in Wired last summer.

But just because an attack is exposable doesn't mean that it will be exposed.

Of course not. But a non-exposable attack has zero chance of being detected.

It is unlikely that for all delivered instances of the code, someone will do anything approaching a security audit.

True, but there might be a way to detect a necessarily incomplete but possibly growable set of attacks in an automated way.

It's not clear to me that a perfect system can be built, but an imperfect system that improves on the current approach while creating an evolving problem for attackers seems to me like it would be a win.

The only way I can see to prevent this is actually having a browser extension that is versioned and signed by a trusted source. Of course Tor has apprehensions about accepting browser plugins liberally, which is understandable, but they may be inclined to make an exception in the case of SecureDrop.

I think they'd be way more open to it if the add-on were somehow generalizable to any host-based crypto system.

I am convinced by Danny's point that having a SecureDrop-specific extension on one's machine is too incriminating for the user; and distributing such an add-on ties Tor too closely to leaking. Either problem is a deal-breaker, I think, and together, well, I doubt it's possible to sell.

But regardless, I feel that the entire application would have to live within the extension, not just certain cryptographic primitives that are exposed through the browser. This has the same problem as HushMail.

I don't think the primitives approach has the same attack surface as HushMail (or SilentCircle). The attack on built-in primitives has to be in the server's javascript that somehow misuses the primitives, which I think is much more detectable than breaking the primitives (which could be crazily subtle).

The current problem is actually that the version of Firefox that the Tor Browser Bundle uses cannot be used for cryptographic purposes, since it (Firefox ESR 17.0.9 at the time of writing) does not provide access to the newer API for random values in the browser, window.crypto.getRandomValues.
As you said, this shouldn't be a showstopper and we should start development on a browser application anyway, in preparation for the day Tor does start working with a newer version of Firefox or Chrome, and I agree with that.

Good luck! and I look forward to following the developments -- PB.

fpietrosanti commented 11 years ago

Regarding the specific OpenPGP.js threat model/uses, please join http://list.openpgpjs.org/ where those kinds of discussions come up every month! Also look at the recently Open Technology Fund-funded Mailvelope project http://mailvelope.com, which could be a nice target for improvements and uses, being set up for broad usage (plausible deniability) and being well funded in its R&D plan.

dtauerbach commented 11 years ago

Thanks Bill. One option that I just discussed with @micahflee would be to encrypt client-side if and only if the user has Javascript running; if it is not running, display an alert of some sort encouraging the user to encrypt the documents herself before submitting.
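A sketch of that fallback behavior, reusing the hypothetical encryptForSubmission() helper sketched earlier (element IDs, SERVER_PUBLIC_KEY, and the endpoint are also placeholders); if JS is disabled the handler never runs, the plain form submits as before, and a noscript notice can carry the suggested alert:

```js
// Sketch: progressive enhancement. With JS off, this never runs and the
// plain form (plus a <noscript> notice urging self-encryption) is used.
const form = document.getElementById("submission-form");
form.addEventListener("submit", async (event) => {
  event.preventDefault();
  const file = document.getElementById("document").files[0];
  // Encrypt in the browser so the server only ever sees ciphertext.
  const encrypted = await encryptForSubmission(file, SERVER_PUBLIC_KEY);
  const body = new FormData();
  body.append("document", encrypted, "document.gpg");
  await fetch("/submit", { method: "POST", body });
});
```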

In terms of threats, I don't think there is a big delta between "attacker having plaintext access to documents" and "attacker being able to identify the source" -- I think the documents will often be the most identifying piece of information about the source, perhaps more identifying than having root on the computer used to leak. I also don't necessarily agree that Snowden would be turned off by the idea of client-side Javascript-based cryptography but NOT by the idea that the submission platform has you send the documents in plaintext to a host, instead of encrypting them in a way that they can only be decrypted via an SVS.

I don't know what the right answer is, but I think this issue deserves careful consideration.

garrettr commented 11 years ago

Unauthorized plaintext access to documents being leaked to a journalist for publication is not the primary threat. De-anonymization of the source is.

@klpwired The concern here is that plaintext access to documents may lead to de-anonymization of the source due to identifying metadata in the documents.

We're already using Javascript on the source interface (jQuery).

@diracdeltas As I expressed on the mailing list, I do not believe that change (to allow sources to customize the number of words in their codename) has a good usability/security tradeoff. Given what we know about how NSA tries to de-anonymize Tor users, I think we should be encouraging users to disable JS. The only reason I accepted that change is because the codename chooser gracefully degrades and is still functional with JS disabled.

I do not think we should add any functionality that requires JS, and the current existence of JS in the tree should not normalize its further use (without careful consideration).

In particular, host-served Javascript can be compromised, but is also auditable.

@dtauerbach As long as it is being served in a signed browser extension, I agree - but this has serious usability problems (although bundling it in TBB would help a lot).

In the end, I agree with @klpwired above. If an adversary could compromise our server to the degree that they could access the plaintext of documents being uploaded, then they could also serve a JS-based exploit. This would be much more likely to succeed because while uploaded documents might have identifying metadata, a successful exploit on the client's machine would certainly lead to de-anonymization. Therefore I think we should focus on securing our server and encouraging users to minimize their attack surface by disabling Javascript.

Hainish commented 11 years ago

I don't think it likely that the TBB will include a browser plugin for SecureDrop, for a number of reasons. Firstly, every additional plugin is an additional vector for attack on all TBB users, not just the ones that want to leak documents. I don't think they would want to expose their users to that risk. Secondly, it would imply that the TBB is a tool for leaking documents, which is not what they're going for. I think it may be unreasonable to ask the TBB to include such a plugin. That being said, because the leaker is anonymized by the TBB, for malicious code injection to be successful it would have to be applied in a blanketed fashion. As I stated above, a timed attack could be performed with some effort, if you know exactly when a leaker is going to leak documents, but to decrease the risk of exposure it would have to be in a narrow time frame.

As an alternative to a TBB plugin, I think we can develop an additional piece of infrastructure; let's call it a "SecureDrop Directory Server" (SDDS). This server could periodically check the running SecureDrop instances for their HTML and Javascript. Since it is a request over the Tor network, the SecureDrop server could not differentiate between an SDDS and a real leaking client, thus avoiding the HushMail problem of providing a malicious application to specified IPs. The SDDS then verifies whether the set of HTML and JS returned is a verified instance of SecureDrop. This would streamline detection of malicious SecureDrop instances, and we could create a directory page that de-lists instances that are not verified (or even instances that are too old and for which security vulns have been found). Provided that the SDDS requests can't be fingerprinted (we'd have to send the same headers as the TBB in our requests), this would eliminate the timed attack vector.

In addition, the SDDS could be provided a list of public keys for running instances of SD servers, so the attack above that Dan mentioned (the JS providing a MITMed public key) could also be eliminated by having these SDDS servers.
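A minimal sketch of the verification pass such a directory server could run; fetchOverTor() and fingerprintOf() are hypothetical helpers (the former routing requests through a Tor SOCKS proxy with TBB-identical headers), and the pinned values are placeholder data:

```js
// Sketch of one SDDS check (Node.js). Because the request goes over Tor
// with TBB-identical headers, the server cannot tell auditor from source.
const { createHash } = require("crypto");

// Pinned known-good values per SecureDrop instance (hypothetical data).
const instances = [
  { onion: "example.onion", pageHash: "abc123...", keyFingerprint: "0123..." },
];

async function verifyInstance(inst, fetchOverTor, fingerprintOf) {
  const { html, armoredKey } = await fetchOverTor(inst.onion);
  const pageHash = createHash("sha256").update(html).digest("hex");
  const pageOk = pageHash === inst.pageHash;
  // Also pin the journalist public key, to catch a MITMed key.
  const keyOk = fingerprintOf(armoredKey) === inst.keyFingerprint;
  // De-list and alert if either check fails: the instance may be serving
  // malicious JS or a substituted public key.
  return { onion: inst.onion, verified: pageOk && keyOk };
}
```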

One criticism I've heard of this model is that it is basically centralized. But it doesn't have to be; anyone can run an SDDS, including Freedom of the Press Foundation and any other organization that wishes to be a guardian of the sanctity of SecureDrop servers.

As a side-note, above I mentioned that the TBB currently does not support window.crypto.getRandomValues. I talked to Mike Perry and he mentioned that before December 2nd, they will be upgrading to FF 24, which does indeed provide the secure RNG API. This means that we can conceivably in the near future provide a client-side application for encrypting documents to the journalist.

garrettr commented 11 years ago

by asking the Tor browser people to include in the browser basically generic crypto tools that could be used for any host-based crypto system. That would include a fairly obvious API, including the encryption/decryption parts, including potentially some way to audit for at least some kinds of evil code.

In-browser "generic crypto tools" is the goal of the W3C Web Crypto Working Group. This is still in development and it is unclear when it will be ready to be implemented. "Ways to audit evil code" is specifically mentioned as a use case here. The TBB developers have in the past entertained this idea, although it would be nontrivial and who knows what they would say now.

Ultimately the problem is one of establishing a trust anchor if you want this to be automated. If you don't want to involve the user, you would have to either TOFU or do something similar to pinning. Otherwise you can get the user involved, which offloads the burden onto them (with concomitant risks).

Since it (Firefox ESR 17.0.9 at the time of writing) does not provide access to the newer API for random values in the browser, window.crypto.getRandomValues.

We just released the new ESR, which is based on Firefox 24 and has window.crypto.getRandomValues. The Tor devs are working on updating their patches to release a new TBB based on 24 (timeline unknown). We are also working on a broader initiative to integrate as many of the TBB patches into Firefox as possible, so future TBB's can be based on the stable release and we can be equally agile in responding to exploits.

garrettr commented 11 years ago

I talked to Mike Perry and he mentioned that before December 2nd, they will be upgrading to FF 24, which does indeed provide the secure RNG API. This means that we can conceivably in the near future provide a client-side application for encrypting documents to the journalist.

Nice one, @Hainish !

Hainish commented 11 years ago

Correction, TBB based on FF 24 by Dec 10th:

11:47:20 mikeperry by dec 2nd, all TBBs should be based on FF24
11:48:22 intrigeri mikeperry: 2nd, really? In my understanding, the closest FF release is Dec. 10.
12:15:22 mikeperry intrigeri: https://wiki.mozilla.org/RapidRelease/Calendar seems to indicate you're right

dtauerbach commented 11 years ago

OK, a quick recap:

Patrick, @Hainish, @fpietrosanti, and I seem to favor exploring a host-delivered Javascript approach, trying to maximize the auditability/security of the untrusted code, and noting this will only be possible after Dec 10, when TBB migrates to the new Firefox ESR.

@klpwired, @Taipo, @garrettr warn against requiring a user to use Javascript (I agree). Would the three of you, or anyone, like to weigh in on whether you would consider non-required host-delivered Javascript? If a user is not running it, we could have a message suggesting that additional encryption may be helpful. There are other concerns with this approach too; I tried to enumerate them above.

In addition to the host-based Javascript question, there has been discussion by @diracdeltas and others about shipping an extension with TBB, or otherwise requiring a signed extension, and having that extension responsible for the Javascript (so that it is not delivered by the host). This is more work, and poses several additional problems: deniability if the source's computer is compromised, key management, etc. But it has the big advantage of not relying on Javascript delivered by the host.

Have I missed anything important?

klpwired commented 11 years ago

Great recap, @dtauerbach.

I'd still consider non-required host-delivered Javascript harmful. It trains users in the wrong direction. Users should be blocking Javascript (and Flash, ActiveX, Java, Silverlight, whatever) from SecureDrop sites, so that if the host is compromised, the risk of the host successfully delivering malware to the user is minimal. IMO, the best use of Javascript would be: window.alert("You should turn off Javascript");

garrettr commented 11 years ago

@dtauerbach +1 on the recap.

In an ideal world, I agree that all encryption would be end-to-end from sources to journalists. Currently, there are too many open questions around Javascript cryptography for us to implement it. It is fine for projects like Cryptocat, which advertise their experimental nature and state up front "You should never trust any piece of software with your life, and Cryptocat is no exception". We are asking sources to take enormous risks to share information using our platform, and I think we can best serve them by being as cautious and conservative as possible in our design choices.

@klpwired I completely agree with your last comment, and have opened #100 and #101 to address it.

This is not to say that I think Securedrop could never encrypt data client-side using Javascript (using a browser extension, until someone solves the problem of securely delivering Javascript in an auditable manner). I would love to see experimental work in this direction. Perhaps it could be part of a 1.0 release sometime in the future!

dtauerbach commented 11 years ago

@garrettr @klpwired That seems like a reasonable decision, and I definitely agree that users are generally safer not browsing with Javascript (or Flash, or Java, etc). Still think it's worth being specific about the concerns. In this case, the main concern seems to be that we don't want to encourage users to turn on Javascript, to the point where we want to actively discourage them. That seems like a good idea to me. I listed other concerns above as well that folks haven't discussed. Are there others we've missed?

The reason specificity is important is twofold. First, for the project itself, I agree that being conservative makes sense, but one should be conservative relative to one's design goals, not just generally afraid of doing any crypto via Javascript or in browsers. For example, suspend your disbelief and suppose the Tor Project made the TBB come with Javascript always-on, with no option to turn it off. Then I think that might change the decision above, despite the fact that the Javascript libraries used are still experimental and the security guarantees of host-based systems are almost non-existent. The decision we've gone with for now for SecureDrop would be analogous to Cryptocat not performing any sort of end-to-end crypto at all (just an irc/jabber server). It's hard to argue that Cryptocat as a service is less secure than if Nadim just ran an equivalent jabber server, and this has been empirically borne out as best I can tell with a cursory look at the bugs in the service that have been identified (e.g. http://tobtu.com/decryptocat.php; yes, they are bad; no, they aren't worse than no encryption at all). So in this case, I think the real concern we've keyed in on is that users are less safe running Javascript and we want to actively discourage them, not that the Javascript crypto is too experimental to deploy from a security perspective, given that the alternative is no e2e crypto at all.

Second, there is a lot of FUD about Javascript crypto. With the meteoric shift of software to the web, it's inevitable that most cryptography will take place in Javascript in browsers sooner than we'd like, if we'd like more than a tiny population to use crypto at all. Specificity allows us to productively move forward and identify showstoppers, to feed back into standards development.

fpietrosanti commented 11 years ago

@dtauerbach I totally agree that there is an excessive amount of FUD about Javascript and Javascript crypto, compared to the value it adds and the actual context of use in anonymous whistleblowing technologies.

It's likely that 99.99% of the use of a Tor Hidden Service website is done with the default TBB configuration, which has Javascript turned on; if this assumption is true, all the JS/non-JS discussion would be useless.

That's the reason GlobaLeaks started as a pure-Javascript application framework, and the upcoming chat and messaging features are going to be fully JS-crypto based (with Cryptocat, OpenPGP.js and Mailvelope): https://docs.google.com/document/d/1L8yVgarISeIxIvsFgoT3cF1MYzhEa6YyZzOAsAvR-yY/edit?usp=sharing

However, in order to satisfy the JS-related sensibilities, we are going to implement a simplified GLClient that exposes a submission interface with only HTML and interacts with the GLBackend over its submission API http://docs.globaleaks.apiary.io/ . That set of security improvements is the focus of this project proposal: https://docs.google.com/document/d/15tyTSRKETzcamfgvZ4TOh9mzLV2STnQduZRKnG8fEZQ/edit?usp=sharing

fpietrosanti commented 11 years ago

I just opened a ticket, "Log statistics about javascript support of whistleblowers submitting information" (#109), to collect objective data about the actual use of NoScript on the submission interface of live infrastructures.

nadimkobeissi commented 11 years ago

The amount of uneducated FUD regarding JS crypto, in this thread, is terrifying, especially considering the otherwise solid reputation of the people involved.

Guys, the concerns @klpwired has about JS crypto are solvable using a signed browser extension to deliver the code. Also, regarding your other concerns on the matter, please do read my blog post on JS crypto, which I hope will dispel a lot of the FUD in this thread.

fpietrosanti commented 11 years ago

@kaepora I understood, though I may be wrong, that the concern @klpwired has is with the use of JS itself as a possible attack vector.

nadimkobeissi commented 11 years ago

@fpietrosanti Yes, @klpwired's concerns of JS offering an attack vector similar to EgotisticalGiraffe are true and reasonable, but how many essential functional frameworks do you need to prune out to eliminate all attack vectors? Answer: all of them. My recommendation would be to, instead, actually understand the frameworks better, so you can gain their benefits while limiting the increase in attack surface that they offer.

fpietrosanti commented 11 years ago

@kaepora I think it's worth having some "numbers" about the usefulness of this pure-HTML, JS-free submission interface approach. Otherwise this kind of discussion ends up, as usually happens, as a philosophical one, with the final result that different people agree to disagree (#gunner rules) :-)

nadimkobeissi commented 11 years ago

@dtauerbach Side-note: keep in mind that Decryptocat was an implementation bug, not a bug that is inherent to the JavaScript language or environment. A similar bug could have happened in any other language just as easily (except possibly languages with strong typing); JS didn't contribute to its occurrence.

dtauerbach commented 11 years ago

@kaepora I agree completely about Decryptocat and didn't mean to imply otherwise. I am with you that we shouldn't view JS crypto as somehow doomed, and have read your post and largely agree with the sentiment. But let's keep this thread focused on SecureDrop, and even more specifically on the host-based (non-browser-extension) system, and the predominant concern right now about users running Javascript. I agree with what I presume is your view that cutting out Javascript as an attack vector is unreasonable in general for contemporary software. However, for what we hope to be a secure submission platform, and given the hard EGOTISTICALGIRAFFE evidence of Javascript being used as an attack vector, I think it makes sense to maximally discourage users from browsing with Javascript in the TBB software that they will presumably be using for submissions.

@fpietrosanti numbers are always good, and if they show that we can't effectively discourage anyone, then it might make sense to revisit. In this case we should try to measure not only the prevalence of Javascript, but the effect of warning users "Stop using Javascript!"

fpietrosanti commented 11 years ago

@dtauerbach We have heard so many "horror stories" about whistleblowers' actual operational (in)security, and the average level of knowledge they have, from working with media and activist organizations using GlobaLeaks, that I really cannot believe that in the real world a "Stop using Javascript" warning would have any effect. I really feel that to improve actual whistleblower security you need awareness and usability, not security extremism. But still, with numbers it would be worth analyzing this kind of never-ending discussion :-)

Taipo commented 11 years ago

Words like EGOTISTICALGIRAFFE and FOXACID refer to real teams of highly resourced exploiters dedicated to leveling exploits against the Firefox browser used in the TBB (the Achilles heel of Tor) with an unknown arsenal of exploits, only one or two of which we know about, with the express purpose of unmasking a Tor user.

These, in my opinion, are real fears, not disingenuous FUD, and it is for these quite valid reasons that developers are rather wary and cautious about the use of user-required Javascript when there are viable alternatives to be tested first.

Having said that, Javascript is currently enabled by default in the TBB, so it's not a complete break in trust to require Javascript to upload files if other alternatives turn out to be less than optimal.

Asking a user to install an extra add-on in order to use a file upload field might raise their chances of triggering a FOXACID selector at a later date, in the likely event that FOXACID honeytraps are tailored to detect the fingerprints of Tor browsers with that specific extension installed.

garrettr commented 11 years ago

Oops!

garrettr commented 11 years ago

I am not categorically opposed to doing Javascript crypto. In an ideal world, we would do end-to-end encryption from source to journalist. This would also be beneficial because it would simplify the server design. I think JS crypto is the future, and I think @fpietrosanti, @kaepora and others are blazing a trail (thanks guys!)

But - if somebody posted a patch that did encrypted Securedrop submissions in the browser, using OpenPGP.js or similar, I am not sure I would advocate for merging it (even once TBB updates to have cryptographically secure random). There are two issues here.

One is increased attack surface. This is not FUD if your adversary is the NSA or the FBI. We know this from Snowden, and from the Tor Freedom Hosting malware. The TFH malware used a JS exploit in Firefox specifically to de-anonymize large groups of Tor users - one of the worst-case scenarios for the Securedrop threat model. If users had Javascript disabled, it did not work. Javascript is not just like any other source of exploits. It is a complex, highly optimized, dynamic programming language runtime. It is especially ripe with possibilities for code injection.

The other is the safety of using crypto in the browser. Great progress has been made here - again, in large part thanks to people like @kaepora who have written the code and are iterating on the issues. There are still open questions, especially side channel attacks. I have talked to several JS engine developers about this, and nobody thought this was a non-issue. It's just an open question, and one that I'd like to see some research on before I ask sources to trust us and use it.

dtauerbach commented 11 years ago

I agree with issue #1. Of course I completely agree there should be a non-JS option. Whether there should be a JS option that does client-side encryption depends on whether we can meaningfully affect whether or not users choose to run JS. I am optimistic we can have an effect here and get at least some users to turn it off, and given this it seems reasonable to me to conclude that it is better to have no JS option for now.

For issue #2, I'd agree if the alternative were some sort of software running on the user's machine that did encryption more safely. But the alternative is nothing. Do you really think nothing is better than something experimental? I agree that the Javascript runtime seems like a huge attack vector. But is there a particular side channel attack you have in mind that will make the user less safe than uploading the file in plain text? Can we spell out a specific scenario in which this will be less safe for a user than not doing any client-side encryption whatsoever, for users who are running JS?

Taipo commented 11 years ago

Can someone explain to me how uploading a file across a TOR connection can be considered uploading a file in plain text? Unless at some point there is a plan to allow SecureDrop to be hosted on a non-Hidden Service hosted webserver? What have I missed here?

garrettr commented 11 years ago

@Taipo The point is that the server, a 3rd party between the source and the journalist, receives the file as plaintext.

Taipo commented 11 years ago

@garrettr Yes, I got that bit, thanks. Issues #14 and #99, I believe, are covering that. I guess the issue here then is not uploading the file in plain text, but using client-side encryption as a means of preventing the file from being received in plain text, which would otherwise expose the server to potential plaintext file recovery attacks should a SecureDrop location be breached by an adversary.

The advantage of not requiring Javascript is that a third-party person or application, or even a cautious source, can use the warnings of the TBB's inbuilt NoScript extension to be notified of the presence of Javascript on a SecureDrop (where it would not be expected to be present). This is similar to what @Hainish brought up earlier.

We saw something like this yesterday with the Forbes SD (#108), where their intro page was sending third-party cookies and was picked up on by an observer. This NoScript alert method would be a rather simple way for anyone concerned to verify, to a certain degree, that a SecureDrop had not been compromised or run in a compromising manner, as we saw with Freedom Hosting and probably lots of other non-Tor websites.

Comparing this issue to #14, it may come down to a weighing exercise between #14's issues and what is lost by requiring Javascript, and what security risks are added by making it mandatory for a source to have certain extensions installed in order to send encrypted files (see my comments above about the example of FOXACID).

ABISprotocol commented 11 years ago

lurking

fpietrosanti commented 11 years ago

@Taipo Using Javascript encryption will provide additional protection against #14 and #99, practically defeating memory sniffing, by providing an added layer of unauthenticated encryption.

Tor does provide anonymity, but for confidentiality it provides unauthenticated link-layer encryption (the end user has no way to know that he is actually connecting to that specific leak site).

Introducing another layer of unauthenticated encryption, at the application level, does help.

Layered encryption with different technologies (but authenticated) is also a solution approved by the NSA for classified communications (SIPTLS/SRTP + IPSec at http://www.nsa.gov/ia/programs/mobility_program/).

So, due to the unauthenticated nature of the link-layer security offered by Tor, it could be argued that uploading a file over Tor is effectively equivalent to uploading it in plain text.

Obviously that's a purely theoretical argument; the point is that we should not rely on Tor for confidentiality, because it is unauthenticated encryption with no user-verifiable method of knowing whom one is connecting to.

Taipo commented 11 years ago

@fpietrosanti

Tor does provide anonymity, but for confidentiality it provides unauthenticated link-layer encryption (the end user has no way to know that he is actually connecting to that specific leak site).

I agree that encrypting a file before it touches a source server hard drive is better security.

My question is, using the GPG extension method, can a source still visit a SecureDrop and submit text only information ( where no files are submitted ) without having to have javascript enabled ( without the server requiring javascript in order to at least complete that function )?

Layered encryption with different technologies (but authenticated) is also a solution approved by the NSA for classified communications (SIPTLS/SRTP + IPSec....

Yes, any form of encrypted communication that employs the MS-CHAPv2 handshake will receive the approval of the NSA, for reasons more nefarious than honourable.

fpietrosanti commented 11 years ago

@Taipo The advantage of JS encryption is not only that it prevents cleartext files and text from touching the server hard drive, but also that it prevents them from entering server RAM, thus defeating most RAM-related attacks.

Regarding NSA-certified crypto for governmental use, I think it's very high quality, and a Common Criteria EAL4-certified secure communication device is more secure than any kind of open-source, community-driven one. I've worked on making COMSEC crypto, and the kind of quality/security process that you have to follow will never exist in an open-source environment where everything is driven by enthusiasm and not by "militarized" procedures.

garrettr commented 11 years ago

@Taipo @fpietrosanti Last few comments are starting to get tangential (MS-CHAPv2?), let's keep this discussion focused please :smile:

@fpietrosanti Excellent points about client-side crypto solving #14 and #99 . For me this is the most compelling reason to consider it further.

However, your points about Tor and authentication are incorrect. First, Tor Hidden Services do provide authentication - the whole point of .onion addresses is that they are self-authenticating, because they are the hash of the hidden service's public key.
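To make the self-authentication concrete: a (v2-era) .onion address is derived directly from the hidden service's RSA public key, so knowing the address pins the key without any CA. A sketch of the derivation, offered as an illustration rather than a reference implementation:

```js
// Sketch: v2 .onion address derivation (Node.js).
// address = base32(first 10 bytes of SHA-1(DER-encoded RSA public key))
const { createHash } = require("crypto");

const B32 = "abcdefghijklmnopqrstuvwxyz234567";

function base32(buf) {
  let bits = 0, value = 0, out = "";
  for (const byte of buf) {
    value = (value << 8) | byte;
    bits += 8;
    while (bits >= 5) {
      out += B32[(value >>> (bits - 5)) & 31];
      bits -= 5;
    }
  }
  return out;
}

function onionAddress(publicKeyDer) {
  const digest = createHash("sha1").update(publicKeyDer).digest();
  return base32(digest.subarray(0, 10)) + ".onion"; // 16 characters
}
// A client that knows the address can thus verify the key the service
// presents; a MITM would need a key that hashes to the same address.
```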

In real life this ends up being part of a trust chain, e.g. you got the .onion URL from the New Yorker website, which uses HTTPS and is authenticated by a CA. There are some neat alternatives that we can take advantage of (especially if you don't have much faith in the CA-based PKI), like printing the .onion address in physical copies of the New Yorker so they can be verified by a cautious source. The New Yorker has expressed interest in doing this.

Even if your premise were correct, client-side encryption would not solve the authentication problem. The site would still serve a public key for the client's browser to encrypt to. How do they establish trust in that key?

garrettr commented 11 years ago

A Critique of Lavabit raises some excellent points about the flaws inherent in our current architecture, and underscores the need for a solution to this issue.

Our situation is not as bad as Lavabit's, for a number of reasons:

  1. Our sources are required to connect to us over Tor and we do not collect any information from them, so it is nearly impossible (not totally impossible due to potential identifying metadata in submissions) for the source server to identify an individual whose communications a subpoena might be served for.
  2. Since individual sources cannot be identified/subpoenaed, the only choice remaining is to subpoena a journalist or journalist organization's entire set of communications with all of their Securedrop sources. This is legally nontrivial.
  3. Tor provides perfect forward secrecy.
  4. We do not use CA's, but self-authenticate based on the .onion address of the HS. Therefore MITM is only possible via some yet-to-be-discovered vulnerability in Tor or RSA.

Nonetheless, this approach is fundamentally flawed (as others have stated in this thread) because the source cannot confirm/audit the server's promise of encryption.

nadimkobeissi commented 11 years ago

+1 to @garrettr's post above.

Taipo commented 11 years ago

@garrettr Part of the reason this is an issue with SecureDrop, versus any other hidden-service-hosted webserver, is that there is a higher chance that a media agency is going to host these servers at the offices where its journalists work, rather than in an undisclosed location, making finding the location of the SecureDrop server a rather trivial affair. Although hiding the location of a webserver should not be the primary method of protecting sources from being caught in a MITM attack, it is nonetheless part of the protection that using Tor's HSDirs offers against state-level adversaries.

Secondly, the legal argument is only really applicable in a few countries (probably still relevant in the US). In many other countries there are no such legal protections or any forewarning of impending attacks, except for the level of forewarning one has when one's offices are raided. In those instances, physically taking over a source server and using one of the Javascript exploits available on the exploit market would be the more efficient method of capturing the source's location, capturing the source being the primary focus of any state adversary (I am not suggesting that this cannot be achieved through captured content submitted by a source).

Thirdly, carrying on from the point @fpietrosanti made about 'horror stories' concerning source 'operational (in)security': how do you construct a method of giving a source the ability to determine the authenticity of the encryption, when the great majority of sources would not even be savvy enough to use PGP or check signed content, and see warning messages and pop-up boxes as annoying things to click past?

trevortimm commented 11 years ago

All the points, positive and negative, made by @garrettr are good, but I want to point out one more important difference we have from Lavabit.

Lavabit is a third-party email provider. So it's one person communicating with a second person through Lavabit. In our situation, there is no third party: the journalist is essentially both the provider and the communicator. This is important to keep in mind for two reasons: 1) the journalist, in addition to having 4th Amendment protections and enhanced news organization protections under the Privacy Protection Act, also may have 5th Amendment protections that a third party would not have; 2) the journalist will obviously always have access to the messages sent their way, regardless of giving up the source's private encryption keys, since they are the recipient on the other end.

Now, it's possible the government could compel the IT department instead of the journalist to get around 5th Amendment protections. But again, in the US this would be unprecedented and would turn into a giant legal fight that the government will likely try to avoid at all costs.

I agree with @Taipo that these legal protections are much weaker or non-existent in other countries. And as part of our best practices guide, we're going to make this very clear. It's also why we've been concentrating on US organizations until this problem is solvable, and recommend having a mitigation strategy before implementing SecureDrop. In fact, we talked about this problem in our first post on the security features and potential vulnerabilities: https://pressfreedomfoundation.org/blog/2013/10/how-we-plan-keeping-securedrop-secure-possible

One solution we've considered for the long term is hosting SecureDrop servers in the US for people in at-risk countries who have little or no chance of being subpoenaed by the US. But there are other risks there that we have to work through, and we would need staff solely devoted to this end. (Perhaps we will partner with another organization on this.)

Hainish commented 11 years ago

@garrettr nice summary. I would add that we have never claimed that someone with access to the server does not have the ability to decrypt messages sent to the source.

Additionally, separating the public and private keys of the journalist with an airgap does ensure that previous messages sent to the journalist cannot be decrypted by a third party, provided that the journalist was running an actual instance of SecureDrop. But if the source server were modified, it could intercept messages and documents sent to the journalist in the future.

By moving the SecureDrop application to the client and verifying the client code being delivered (HTML and Javascript) in a non-fingerprintable manner (e.g. as described here https://github.com/freedomofpress/securedrop/issues/92#issuecomment-26857005), we can come close to making a major assurance about the security of the submission process. The missing piece is that the journalist key still has to be delivered by the server, and it is different for each instance of SecureDrop. So as well as verifying the code, verifying the public key is important. Having some kind of service do this in an automated fashion at unpredictable intervals would go a long way toward ensuring security at the point of submission.

OpenPGP.js is getting better, as @fpietrosanti has pointed out, and with TBB moving to FF 24 ESR, with an OS-level PRNG API, we'll have the chance to implement this in a workable way. I think we should move in this direction (even if we don't wind up adopting it immediately, and I understand the concerns with doing so). I advocate starting to develop this in a major feature branch.