dolanjs opened 10 years ago
This is a problem if an attacker steals the session cookie, but that seems unlikely since they're session-based and HTTPOnly.
In all of the cases I can think of where an attacker has access to the cookie, they also have access to the page's HTML, which contains the codename in plaintext anyway. Could be wrong though.
I agree on the codename in HTML, yet think that cookies should never expose more than is actually needed. The HTML is only shown in the browser, unlikely to be cached anywhere. The cookies reside on the hard-disk and will be transferred via HTTP headers - this significantly extends the window of exposure.
Additionally, I don't see any reason, for example, to transport the CSRF token in the HTTP header.
@x00mario thanks for the clarification.
In that case, the problem isn't that the cookies leak server-side session values, but rather that we use the persistent codename as a session authorization token instead of an actual session value that expires at the end of the session.
> In that case, the problem isn't that the cookies leak server-side session values, but rather that we use the persistent codename as a session authorization token instead of an actual session value that expires at the end of the session.
@diracdeltas That's exactly right.
Given the current design for encrypting replies, I don't see how we can avoid storing the codename in the session cookie (otherwise, the user will need to re-enter it whenever they get a new reply, which would be confusing). We could potentially avoid this issue by making the cookies expire after a short interval (like banks do). Or we can detect if a user is logged in and has received a new reply, and ask them to log in again. I don't think this significantly improves security, and significantly decreases usability.
> Additionally, I don't see any reason, for example, to transport the CSRF token in the HTTP header.
@x00mario The goal of CSRF is for an attacker to leverage an existing user's authenticated status to perform unintended actions on their behalf. The point of the token is that an attacker cannot guess it. It is stored in the cookie because otherwise we would have to store state on the server to keep track of sessions and their current CSRF tokens, which is unnecessarily complex.
Note that storing CSRF tokens in (signed) cookies this way is quite common, and is safe if the connection is end-to-end encrypted (it is, because the source site is required to be a Tor Hidden Service). An XSS attack could also be leveraged to steal the token, but an attacker with XSS could read the codename straight out of the page anyway.
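To make the stateless approach concrete, here is a minimal sketch of a signed CSRF token using only the standard library (function names and the HMAC construction are illustrative assumptions, not SecureDrop's actual implementation, which relies on Flask's signing machinery):

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side signing key; in Flask this role is played by SECRET_KEY.
SECRET_KEY = secrets.token_bytes(32)

def issue_csrf_token() -> str:
    """Generate a random token and append an HMAC, so the server can verify
    it later without keeping any per-session state."""
    token = secrets.token_hex(16)
    sig = hmac.new(SECRET_KEY, token.encode(), hashlib.sha256).hexdigest()
    return f"{token}.{sig}"

def verify_csrf_token(value: str) -> bool:
    """An attacker who cannot read the cookie cannot forge a value that
    passes this check."""
    try:
        token, sig = value.split(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, token.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

The server only ever needs the signing key, not a table of outstanding tokens, which is the "no server-side state" property described above.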
> The cookies reside on the hard-disk
@x00mario Session cookies are not saved to disk by definition, and that is one of the reasons we use them. See Firefox's implementation.
Of course this is not a guarantee. It is implementation-dependent, and the cookie could be saved to disk by other mechanisms such as OS swap (although the codename could be swapped in other circumstances too, so I'm not sure if we guarantee to avoid that). Given this, however, as well as the difficulty described above in decrypting replies given the current design (which requires the codename to decrypt the private key), I think it's worth it not to change this for now.
It could be worth encrypting cookies in the future to reduce potential leakage of the codename on the source's computer, but to do so would not be meaningful without also removing the plaintext codename from the reminder message in the DOM, and I'm afraid that could disproportionately impact usability. Since the security risk here is very low (for the reasons discussed above), I'm bumping this to 0.3.
Another option is to use randomly-generated session tokens instead of the codename for authentication in the cookie. We would have to have some server state that maps session tokens to source ID.
IMO this is better than encrypting the codename and putting it in cookies because session tokens are ephemeral.
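The session-token idea amounts to a very small amount of server state. A minimal sketch (class and method names are made up for illustration; an in-memory dict stands in for whatever store would actually be used):

```python
import secrets

class ServerSideSessions:
    """Map random, ephemeral session tokens to source IDs so the cookie
    never contains the codename itself."""

    def __init__(self):
        self._sessions = {}  # token -> source_id

    def create(self, source_id: str) -> str:
        # The token is unguessable and carries no meaning on its own.
        token = secrets.token_urlsafe(32)
        self._sessions[token] = source_id
        return token

    def lookup(self, token: str):
        # Returns None for unknown, expired, or destroyed tokens.
        return self._sessions.get(token)

    def destroy(self, token: str) -> None:
        # On logout the token becomes worthless, unlike a leaked codename.
        self._sessions.pop(token, None)
```

A stolen token can be invalidated server-side at any time, which is exactly the ephemerality advantage over an encrypted codename.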
Also, there could be some security benefits to having the codename displayed on the page. Ex: an active MITM overwrites the source's cookies [1] with his/her own cookies, so the source is suddenly logged into an account that they don't control. That would be extremely dangerous.
[1] Not sure if the Tor Hidden Services implementation prevents cookie-overwriting attacks.
> Another option is to use randomly-generated session tokens instead of the codename for authentication in the cookie. We would have to have some server state that maps session tokens to source ID.
In terms of avoiding leakage that is the best solution, but we still have to contend with the fact that we need to store the user's "reply key" passphrase somewhere. Currently the plaintext codename is stored in the cookie, and the passphrase is re-derived from it as needed. If we want to stop storing the codename in the cookie, we have two options:
Option 2 is IMO marginally better than what we do now, but I also think there are more important vectors to consider.
> Also, there could be some security benefits to having the codename displayed on the page
I'm not sure if I understand your example. An active MITM attacker could change cookies but maintain the displayed codename, so showing the codename would not help a user detect such an attacker.
Also, such an active MITM attack would only be possible through a catastrophic vulnerability in Tor (at which point, there are probably easier avenues of attack).
> An active MITM attacker could change cookies but maintain the displayed codename, so showing the codename would not help a user detect such an attacker.
That's not necessarily true; an active MITM needs both read and write access to the connection in order to modify the DOM, whereas they only need write access to overwrite cookies (without seeing them) and perform a session fixation attack [1]. :(
But I agree that this would require an exploit in TBB so it's not worth worrying about for us.
[1] Attack scenario would go something like this:
Moving this to 0.4. While I still think the exploitability of this is very low, it would be worthwhile to either encrypt the cookie or use ephemeral session ids as a defense-in-depth countermeasure.
I'm working on an EtM scheme for client-side sessions based on AES-CTR-128 and HMAC-SHA-224, since IMO that's the best option provided by our current crypto toolkit, PyCrypto. I'd love to switch our crypto over to libsodium at some point, but that's another story.
Each boot (currently every 24 hours), a new encryption and authentication key for both the journalist and source app is generated and stored in memory. This provides a mechanism to invalidate client-side sessions without adding additional attack surface to the application server via an in-memory key/value store (we already have a Redis server installed because we use Redis workers, but we plan to deprecate our Redis dependency). Granted, a client-side session that somehow persists after clicking the logout button or quitting TB is not really invalidated until the server reboots, but this is still better than an infinite lifetime.
This also has the effect that if a cookie is obtained from a source's machine (e.g., it somehow persisted in memory, or was swapped out to disk by the OS) after the next server reboot, not only is it invalid for login, but it is also impossible to tie the cookie to any encrypted submissions if the app server is later compromised (or to unencrypted submissions if the SVS key is somehow also compromised). It is plausibly deniable that the source even visited SD.
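The encrypt-then-MAC structure described above can be sketched with the standard library alone. To keep the sketch dependency-free, an HMAC-based counter-mode keystream stands in for AES-CTR (a deliberate substitution — the real branch uses PyCrypto's AES); the point here is the EtM shape: encrypt first, authenticate nonce + ciphertext with HMAC-SHA-224, and verify the tag before decrypting. All names are hypothetical:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Counter-mode keystream. Stand-in for AES-CTR so this stays
    stdlib-only: block i is HMAC(key, nonce || i)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    """Encrypt-then-MAC: XOR with the keystream, then tag nonce+ciphertext
    with HMAC-SHA-224 (per the scheme discussed above)."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha224).digest()
    return nonce + ct + tag

def unseal(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    """Verify the MAC before touching the ciphertext; reject on mismatch."""
    nonce, ct, tag = blob[:16], blob[16:-28], blob[-28:]  # SHA-224 tag is 28 bytes
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha224).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("invalid session cookie")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

Rotating `enc_key`/`mac_key` on each boot invalidates every outstanding cookie at once, with no server-side session table.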
Before getting too excited about any of that though, we should recognize two things:
“Dammit, they used a deniable key exchange protocol” said no Federal prosecutor ever. --- https://blog.cryptographyengineering.com/2014/07/26/noodling-about-im-protocols/
The implementation will work the standard way: override `flask.app.session_interface` with a custom subclass of `flask.sessions.SessionInterface`.
@justintroutman asked: why SHA-224? The answer is that (i) SHA-224 is considered totally safe, and (ii) it gives us more wiggle room than SHA-256 for a potential non-deterministic website fingerprinting defense that might pad the client session. 4 bytes of room might not turn out to be significant in terms of expanding the number of sites SecureDrop can be morphed* into (and this is especially true when you consider that Tor cells are padded to 512B); however, we have not yet shown this, and my intuition is to go this way because it costs us nothing in terms of crypto security. This is also why CTR mode was chosen over CBC, which requires padding.
* Read the ALPaCA paper to understand the term morphing here--essentially it's a method of disguising a web page as another web page or, more accurately, as a site that appears probabilistically very average from the view of a passive observer. Read this circuit fingerprinting paper to see why we're so concerned about the potential impact of website fingerprinting attacks on Hidden Services. And check out the work we're doing to help evaluate and tackle this problem.
So it's come to our attention that we need to replace pycrypto, development of which is pretty much stalled (e.g., this year-old vulnerability has still not been fixed in a release, even though it's patched in master--note this one does not affect SD). I wrote most of the code that would resolve this issue as I explained in my last comments (you can check it out at https://github.com/freedomofpress/securedrop/blob/safer-sessions/securedrop/session.py); the only major thing I hadn't figured out yet was how best to share counter objects between Flask threads.
Anyway, since this issue is not high-priority, I figure I might postpone finishing up that branch until after we've replaced our crypto library. Ideally, we'd use https://github.com/pyca/pynacl, which bundles libsodium, as a replacement; that would allow me to avoid resolving the synchronization problem, as I would definitely use ChaCha20/Poly1305 instead of AES-CTR.
So I'm moving this one to "blocked" in our recently made Kanban-style project board for the 0.4 milestone. Check it out if you haven't: https://github.com/freedomofpress/securedrop/projects/1.
Noting that this is still an issue:

```python
import base64
import zlib

zlib.decompress(base64.urlsafe_b64decode(cookie_str + "=" * (-len(cookie_str) % 4)))
```

-> result contains codename. Likely low priority given that cookies are cleared out after a SecureDrop session (we even strongly recommend "New Identity" after the session), but still worth addressing.
Closed in error by #6403 ref.
The configured secret key, which is needed to use the flask session feature, is not used to encrypt the session values in the cookie, but only to sign them. This means that the cookie can be decoded to show the values in plain text.
PoC:
There are two ways to mitigate this leak: One is to implement our own session interface that uses encryption for the (un)serialization and pass it to the app via the app.session_interface config. The second method is to only store a session id in the cookie and create a server-side session store with files or a database.
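The signing-only behavior is easy to demonstrate with itsdangerous, the library Flask uses to sign session cookies. Anyone holding the cookie can base64-decode the payload without knowing the secret key (the key and codename below are made up):

```python
import base64
import json
from itsdangerous import URLSafeTimedSerializer

# The server signs the session values with its secret key...
s = URLSafeTimedSerializer("server-secret-key")
cookie = s.dumps({"codename": "alarming giraffe"})

# ...but the payload before the first "." is just base64-encoded JSON,
# readable by anyone who obtains the cookie.
payload = cookie.split(".")[0]
decoded = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
# decoded now contains the plaintext session values, codename included
```

The signature only prevents tampering; confidentiality requires one of the two mitigations above.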
Reported by crew53: SD-01-012 Flask cookies leak (server-side) session values