irungentoo / toxcore

The future of online communications.
https://tox.chat/
GNU General Public License v3.0

Add authentication to prevent MiTM attacks #1180

Closed Jookia closed 9 years ago

Jookia commented 9 years ago

Currently Tox only protects against MitM attacks between users if it can be guaranteed that each of them has added the right Tox ID. The Tox ID has to be exchanged over a probably insecure channel, where an attacker could easily swap the IDs around. This was talked about in IRC, and while there's no real-world case of this against Tox, here's an example attack:

I tell someone my ID over Twitter. They add it. We're MitMed because their ISP was compromised. I have no way of knowing this on my end. This is analogous to trusting certificate authorities to give you valid site certificates.

Next time we tell each other our IDs over IRC so we can double-check. Unfortunately their insecure IRC connection is attacked by their compromised ISP, which swaps the ID they receive and the ID I receive. We're MitMed. I haven't seen an attack like this in the wild, but I can easily imagine it happening over email, especially as more and more people move to PGP.

We tell each other our Tox IDs over the phone, which we assume can't be MitMed because we recognize each other's voices. No MitM can happen.

If Tox is meant to be presented as private messaging, there needs to be a better, user-friendly way to ensure it actually is private by authenticating people.

I suggest friend requests be turned into a two-step procedure by adding a shared secret and a required message from both ends.

Alice would send a friend request, asking Bob when they first met. Bob sends his answer back, along with a question of his own, asking her which animal he fears most. Alice verifies the answer and either sends back a rejection, or an approval and an answer to Bob's question. If Alice gets Bob's question right, they add each other, as they now know they have the correct IDs.
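A rough sketch of that two-step exchange (the message shapes and helper names below are purely illustrative, not toxcore API):

```python
# Hypothetical sketch of the proposed two-step friend request.
# Message types and fields are made up for illustration only.

def alice_request():
    # Step 1: Alice's friend request carries a question only Bob should know.
    return {"type": "friend_request", "question": "When did we first meet?"}

def bob_reply(request):
    # Step 2: Bob answers Alice's question and poses one of his own.
    return {"type": "friend_reply",
            "answer": "New Year's Eve 2012",
            "question": "Which animal do I fear most?"}

def alice_confirm(reply, expected_answer):
    # Step 3: Alice checks Bob's answer; a wrong answer suggests a MitM.
    # Only on success does she answer Bob's question and add him.
    if reply["answer"] != expected_answer:
        return {"type": "friend_reject"}
    return {"type": "friend_accept", "answer": "Spiders"}
```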

nachfuellbar commented 9 years ago

I don't think this is user-friendly. You could ask someone exactly these questions after adding the contact (which would be as secure as your suggestion).

Jookia commented 9 years ago

Asking after adding the contact is pointless if you're MitMed. You want to make sure you've added the right person before adding them or telling them secrets.

nachfuellbar commented 9 years ago

You are right, but if somebody mounted a man-in-the-middle attack, they could also fake your requests and forward your answers and questions. Also, you can easily delete any of your contacts.

Jookia commented 9 years ago

MiTM attackers can't forward answers (which is the key here), as they don't have your ID, only theirs. I wrote this in IRC:

(04:15:23 AM) Jookia: SylvieLorxu: This isn't about the connection, it's about public keys
(04:15:27 AM) Jookia: Look at it like this
(04:15:37 AM) Jookia: You have A(pubkey) -> B(pubkey) -> C(pubkey)
(04:15:48 AM) Jookia: You want to know that you've added C's pubkey, not B's, and C wants to know that in reverse
(04:16:02 AM) Jookia: B can't encrypt the secret using its pubkey as it doesn't know it, only C does

In the end, the secret will only be associated with my ID, not the MiTM.

Edit: Thinking about this, you could add this kind of authentication as an optional feature, done after adding. But would users use it?

Jookia commented 9 years ago

After talks in IRC, I'm convinced this wouldn't add any additional security given that key exchanging is a science in itself. With OTR the problem isn't MitM, but verifying keys to get around the fact you don't know the keys.

SafwatHalaby commented 9 years ago

@nachfuellbar

You could ask someone exact these questions after adding this contact (which would be as secure as your suggestion)

Wrong. The attacker could be performing a passive MitM: listening to the conversation but allowing it to pass through. That way you'd be able to authenticate your buddy, but the conversation would still be compromised.

nachfuellbar commented 9 years ago

@wiseoldman95 I never said it would be really secure; just as secure as his suggestion

iShift commented 9 years ago

A solution is to get the Tox ID from a blockchain-based network, for example Twister.

SafwatHalaby commented 9 years ago

@wiseoldman95 I never said it would be really secure; just as secure as his suggestion

Oh, I see, thanks for clarifying. But that's not entirely true either. If naively implemented, you are indeed right: a "secret question" is not a good authentication mechanism. For instance, if we create a normal conversation via a normal handshake and then send the "secret question" normally, just like we would send a chat message, then it is prone to MitM.

However, when the "secret question" is bundled cleverly with the handshake of the protocol, it can become truly secure. XMPP's OTR is a good example of this. The protocol has three methods of authentication: key checking, secret question, or a shared phrase.

Here's an example of true authentication and encryption achieved with only a shared English phrase (warning: do not deploy without testing, this is straight out of my head and might have vulnerabilities):

Edit: I can already identify a problem, but I don't have time to rephrase my post; this example is NOT secure. I wonder how OTR did this. If anyone knows, please enlighten me. I will keep the example for anyone interested. (Either OTR's shared-phrase auth is not as secure as I thought, or they employ a clever method which I am not able to find myself.)

The example:

Bob and Alice meet in real life and choose a shared secret phrase for authentication, say... "banana".

Bob and Alice decide to communicate with Tox. Bob creates a private key and a public key, and so does Alice. Bob and Alice begin an encrypted but not yet authenticated conversation.

Now comes the magic: Bob HASHES the word "banana", then encrypts the hash with Alice's public key and sends the message to her. Alice HASHES the word "banana", decrypts Bob's message, and compares her hash with the hash Bob just sent. If they match, Alice is now 100% certain this is Bob (or an enemy who guessed the word "banana", but that's very, very unlikely, especially for a less generic phrase such as "Deus Ex Rocks"). Alice has authenticated Bob.

Now Bob authenticates Alice in a similar fashion.
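For the curious, a minimal sketch of the scheme as described, using PyNaCl and SHA-256 (illustrative only; as noted in the edit above, this naive version is not secure, since a MitM who holds the keys each side actually added can simply relay or re-encrypt the hash):

```python
# Minimal sketch of the "hashed shared phrase" idea (NOT secure, see above).
import hashlib
from nacl.public import PrivateKey, SealedBox

alice_key = PrivateKey.generate()  # Alice's keypair

# Bob's side: hash the shared phrase and encrypt it to Alice's public key.
bob_hash = hashlib.sha256(b"banana").digest()
ciphertext = SealedBox(alice_key.public_key).encrypt(bob_hash)

# Alice's side: decrypt and compare with her own hash of the phrase.
received = SealedBox(alice_key).decrypt(ciphertext)
print(received == hashlib.sha256(b"banana").digest())  # True
```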

Jookia commented 9 years ago

If they've already met in real life they can just exchange keys.

ghost commented 9 years ago

The difference is that one can easily remember "banana" but not the key. I think a shared secret is a viable approach, at least in part. People who have communicated in the past would have an easy way to verify each other. For people who haven't, I'm not sure it matters, because how can you verify the authenticity of a stranger? Still, OTR provides fingerprint verification, so should I meet this new, unknown person somewhere else, either in the real world or through other communication means, we could check each other's fingerprints just to make sure no one is in the middle. Tapping one channel could prove difficult; tapping multiple communication channels is orders of magnitude harder. All in all, while it won't make things bulletproof, it would still add inches to the armor and make it more secure. Every bit counts, right?

aaannndddyyy commented 9 years ago

@wiseoldman95: This does NOT solve the MITM problem. With this kind of authentication, all you know is that Bob sent a message to you; whether that message got intercepted or not, you cannot tell.

Alice wants to add Bob and gets a tid from bob@eve.mitm, thinking that this is Bob's tid. Alice adds that tid and sends a FR with "Hi Bob, this is Alice".

In fact, the tid was Eve's tid. Eve knows Bob's tid. Eve receives Alice's FR, decrypts it, gets the message, and sends a FR with that message to Bob. Bob receives it and adds Eve, thinking she's Alice.

Alice, aware of that possibility, wants to authenticate Bob with your shared-secret method and asks him to send her the shared secret or, as you say, the hash of it (we consider only one-way authentication for simplicity's sake). It doesn't really matter if it is the secret itself or the hash of it; it is the same situation and you don't gain anything by hashing it. Eve forwards that request to Bob, who, upon receiving it, sends the password or the hash of it, encrypted with what he thinks is Alice's pubkey. Eve receives Bob's answer, successfully decrypts it, and is now in possession of the secret thingy. She now encrypts it with Alice's pubkey and signs it with her own private key, which Alice assumes to be Bob's. Alice receives the message from Eve, checks the signature, decrypts the message, obtains the plain-text answer, checks it and sees the answer is correct. Alice assumes she has authenticated Bob, and that the connection is secure. In reality, though, they are MITM'd.

So this does not provide security.

Is there a solution??

yes, there is.

And it involves hashing, but in contrast to the above scenario it involves useful hashing; above, the hashing didn't do anything. If we analyze the situation, we see that just passing on a secret, encrypted to some pubkey and signed with some privkey, does not accomplish authentication.

But what can be done is: tie the secret answer to the sender's pubkey. What Alice receives will NOT be ONLY the secret answer (which Eve could have passed along), but a combination of the secret answer AND the sender's pubkey, tightly tied together. So if Bob is the one who knows the secret answer, and he ties the answer to his key, then Alice will not only check whether the secret answer was correct, but also whether the key she added is the one Bob tied to his answer, or Eve's.

To accomplish that, Alice, who wants to authenticate Bob, sends Bob a long random nonce in an encrypted and signed message. Bob then receives that nonce and uses it together with the secret to compute a code:

authCode = hash( [secret XOR Bob's pubkey XOR randomNonce] + [Alice's tid without nospam] ), where + means concatenation

The authCode is then encrypted and signed, and sent to Alice, who checks it.

That's it.
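A rough sketch of that computation (not toxcore code; SHA-256, the 32-byte sizes, and pre-hashing the secret so it matches the key length are assumptions made for illustration):

```python
# Sketch of the proposed authCode (illustrative; sizes and hash are assumptions).
import hashlib, os

def auth_code(secret: bytes, sender_pubkey: bytes, nonce: bytes,
              receiver_tid_no_nospam: bytes) -> bytes:
    # Hash the secret first so it matches the 32-byte pubkey/nonce length
    # for the XOR (a detail the proposal above leaves open).
    s = hashlib.sha256(secret).digest()
    xored = bytes(a ^ b ^ c for a, b, c in zip(s, sender_pubkey, nonce))
    return hashlib.sha256(xored + receiver_tid_no_nospam).digest()

# Toy values standing in for real keys and IDs.
bob_pubkey, eve_pubkey = os.urandom(32), os.urandom(32)
alice_tid = os.urandom(32)
nonce = os.urandom(32)  # Alice sends this to Bob, encrypted and signed

bob_code = auth_code(b"banana", bob_pubkey, nonce, alice_tid)
# No MitM: Alice recomputes with the key she added (Bob's) -> match.
print(auth_code(b"banana", bob_pubkey, nonce, alice_tid) == bob_code)  # True
# MitM: Alice actually added Eve's key -> mismatch, MitM detected.
print(auth_code(b"banana", eve_pubkey, nonce, alice_tid) == bob_code)  # False
```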

Let's take a look at the different scenarios:

case 1, no MITM: Alice receives the encrypted answer, decrypts it and obtains the authCode. She also knows the secret, so she performs the same operation as Bob did. She then compares her result to the received authCode. They match, so Bob is now authenticated. Same procedure now with a different nonce and swapped roles, and Bob authenticates Alice.

case 2, MITM: Eve receives the encrypted answer from Bob, decrypts it and obtains the authCode. Since she cannot reverse the hash, she cannot obtain the secret and hence cannot bind it to her keys. She encrypts and signs the authCode and sends it to Alice. (If she hasn't already given up, that is.) Alice decrypts and checks signature, which is, of course, valid. Alice now computes the hash as outlined above, with what she thinks is Bob's id, but in fact is Eve's. Since Eve's id is not the same as Bob's, the hash does not match the received authCode. Alice knows she is MITM'd.

case 3, Bob == impostor: Bob does not know the secret, Alice notices.

case 4, Alice is in fact Anne who only wants to harvest Bob's and Alice's secret: Anne sees authCode received from Bob, but since she cannot see the secret, she cannot calculate a valid authCode to send to Bob, or to Alice in order to authenticate herself.

This method provides an EASY way (compared to comparing fingerprints) to authenticate one another. It relies on the strength of the hash algorithm and the quality of the shared secret (passphrase), and assumes that both parties have previously exchanged a secret in a secure way (they met each other, like in your banana example). It thus closes the security hole introduced with toxme.se without doing away with the convenience it offers.

Limitations: This method does not provide a way to authenticate strangers. If you never met RMS and you find a tid which is supposedly his, you cannot use this method to authenticate him. But that is fine; in that case you'd have to rely on PGP or other methods, which is acceptable given that his mails and commits are signed that way too. And this does not require any changes to toxcore; it can be done manually.

Addition:

This scheme works equally well in question-answer mode. Alice asks Bob a question in the authentication dialog, and Bob answers. Alice decrypts the received message in order to obtain the hash and compares it to the hash she calculates just like above, but instead of using a shared secret, she uses the expected answer. Note, however, that a shared secret is easier, as you know beforehand about capital letters and such. But for friends you have not seen for a long time, Q&A is useful. To make this work in practice, I suggest including something like this in the Q&A auth dialog: "Please note that capitalization and punctuation matter. You can already include in your question the instruction to use only lower-case letters, write out numbers, or not use punctuation." Or you simply filter out spaces and punctuation and convert everything to lowercase, which would be easiest for the end user.
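Such a normalization could be as simple as this (an illustrative sketch; the exact rules would be a client decision):

```python
# Possible Q&A answer normalization: drop spaces/punctuation and lowercase,
# so that "The Park!" and "the park" produce the same authCode input.
import string

def normalize_answer(answer: str) -> str:
    return "".join(c.lower() for c in answer
                   if not c.isspace() and c not in string.punctuation)

print(normalize_answer("The Park!"))  # "thepark"
```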

Does that add additional complexity? Yes.

Is it much? No.

More than an average non-tech user can handle? Definitely not.

Tedious? Not really, as it is only one click and one passphrase, and you only have to enter it once per contact.

A hash? What about a length-extension attack? This attack is not an issue here. (First of all, we have fixed-length keys, which makes the attack impossible. Secondly, even a successful attack does not allow changing the existing message, only appending to it.)

There are other methods of securely authenticating each other in a similar way. Yes, but this is the simplest one implementation-wise, and yet secure.

This could be done in toxcore so that there are no incompatibilities between clients. The actual amount of additional code is minimal.

There could be an authentication request stating which method is used (shared secret or Q&A); in the Q&A case the question is included, and in the shared-secret case the authCode is included. The authentication reply contains the authCode, calculated either from the shared secret or from the answer. Both packets will, of course, be encrypted and signed. Full mutual authentication requires one message from each party in the shared-secret case, and two in Q&A mode.
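One possible shape for those two packets (field names are purely illustrative, not an actual toxcore packet format):

```python
# Hypothetical layout of the authentication packets described above.
auth_request = {
    "method": "qa",                              # or "shared_secret"
    "question": "Which animal do I fear most?",  # present in Q&A mode only
    "auth_code": None,                           # present in shared-secret mode
    "nonce": "<random nonce>",                   # used in the authCode computation
}
auth_reply = {
    "auth_code": "<hash computed from the shared secret or the answer>",
}
```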

ArchangeGabriel commented 9 years ago

@aaannndddyyy +1000. I think this is how it should be done. Maybe this issue should be re-opened?

Jookia commented 9 years ago

The only place you can exchange a pre-shared secret is through a secure channel. So why not just exchange IDs there?

aaannndddyyy commented 9 years ago

Because IDs are long and hard to remember or write down. If possible, use a QR code. You often see your friends, so it's easy to tell them your toxme.se address in person, easier than telling them a long Tox ID.

ghost commented 9 years ago

A pre-shared secret is also good for people who have known each other for some time. There is usually an inside joke or something similar known only to those people and not to outsiders. Provided the system works much like OTR, where you can type in a question and the other party can see it, you don't even need to agree on a pre-shared secret beforehand, because a question like "Where did you leave your pants?" is immediately obvious to the other person, who had this past experience and left his pants in the park for some reason.

This of course won't work with people you have no prior knowledge of, but there is zero reason to tell your secrets to a random stranger in the first place.

aaannndddyyy commented 9 years ago

If users are interested in this feature, maybe the issue should be reopened, or I could make a new issue with this proposal. If there is no need for such an authentication method and users do not wish to have it, then there is no point in reopening it or filing a new issue.

ArchangeGabriel commented 9 years ago

I prefer the socialist millionaire protocol in its Q&A implementation over ZRTP; both require you to know the people you're adding as friends, but the first one works without a call. And I think this feature is necessary, and everyone using Tox should understand why it's there and how to use it correctly.

tttom commented 9 years ago

Socialist millionaire and ZRTP are not mutually exclusive, and I don't think it would harm to have both verification mechanisms. No matter which implementation is chosen, it will make Tox more secure.

Aside from the boon to security, I think this may prove helpful in solving several of the other issues people have raised before: discovery of users (#1222) and multiple devices per user (https://wiki.tox.im/Multiple_Devices, #843, #1100).

srkunze commented 9 years ago

@aaannndddyyy I think a new issue with a succinct description of what you recommend would help a lot, so developers have the most important information up front.

Linking this old thread is also possible.

aaannndddyyy commented 9 years ago

@srkunze: I wasn't sure whether it was a desired feature. As you expressed interest in it, I will soon open a new issue with this very proposal.

srkunze commented 9 years ago

@aaannndddyyy From what I gather, this would complement #1222 perfectly and provide an even better sense of security.

The only fear I have is that people might use secrets that are too easy and can be brute-forced.

aaannndddyyy commented 9 years ago

Brute force is not a big issue, as this is not automated. Every single time a user fails to authenticate, you get notified. That means even if you pick a word like "butter", the chance that it is one of the first three words the attacker tries is infinitesimally small. It's not like an automated key exchange where the attacker's client can present guesses to your client as often as it wants, thousands per minute, until it finds the matching key.

srkunze commented 9 years ago

Every single time a user fails to authenticate, you get notified.

Why is that?

aaannndddyyy commented 9 years ago

It's a dialogue. It's not a nasty thing you have to do every time you want to chat with your buddy, but rather only the first time, after you got the ID from some lookup service, so that you can be sure it is your friend's ID. You can then, at your own discretion, re-authenticate him whenever you want, e.g. if you think someone else gained access to his device. The other user should get a notification that an authentication is requested and by which method. If it's Q&A, the question will be shown, and he answers. The computed reply will then be sent to you, and your Tox client validates it, notifying you if it was a wrong answer.

srkunze commented 9 years ago

Both sides are required to be online.

srkunze commented 9 years ago

Answers should be treated case-insensitive. :D

aaannndddyyy commented 9 years ago

Yes, both sides need to be online. It's like when you start chatting: both need to be there then, too. So to speak, these are the first messages that are exchanged, and they serve for authentication. Regarding case sensitivity: I don't feel strongly either way. Normally it's dumb to restrict a passphrase in such a way, and a shared secret is like a passphrase. But then again, there is usability, and users answering a question sometimes write names or the start of a sentence with a capital letter, while others use all lower case. As brute force is not an issue, it's not a very big loss if you treat answers case-insensitively. But that's just my opinion. Maybe someone else would insist on having upper and lower case as well as punctuation and special characters.

srkunze commented 9 years ago

Does it need to be mutual? It sounds to me like people would assume this to be a symmetrical process (at least this is how I see it from an Average Joe position). Thus, if I confirmed you, I am confirmed for you as well, because we share a common real-world secret such as the color of my hair.

aaannndddyyy commented 9 years ago

That depends. In the shared-secret case, yes: the secret is the same for both of you and you need to enter it anyway. The exchanged hashes will differ, of course, so the hash you send to your friend is different from the one he sends you.

In the Q&A case I'd vote against it, and for a personal question for each of them. The user does not need to know a priori whether this is required or not, because he should then automatically be asked: "You should ask your friend to authenticate, too. Do you want to do that now? (Y/n)"

srkunze commented 9 years ago

I foresee that people might have difficulty thinking of really secure questions. :D At least, I have difficulties, especially when I need a personal one for each of my contacts THAT they have to MATCH exactly.

Sitting on the same couch and comparing the last four digits of your ToxIDs is easier and faster. ;)

Just saying, but maybe there are some use-cases people can cover with that.

srkunze commented 9 years ago

Maybe, based on this technology, clients can provide a "Generate secret" Q&A where people just call each other via phone or so and exchange the 6 random digits.

Forget about it. Exchanging the last digits of the Tox ID or using a QR code is even easier.

aaannndddyyy commented 9 years ago

Comparing the last 4 characters is not secure. Guess why Tox IDs are not only 4 characters long. On the phone you can simply compare the entire ID and you are done, too; then there's no need to run the authentication dialogue.

srkunze commented 9 years ago

@aaannndddyyy If not 4, then 6. That is more convenient than the complete ID (how long is that?).

What you propose sounds like Bitcoin mining (in Bitcoin you need part of the hash to be less than a constant). Here, you require it to be equal.

If you want to make it "mining"-secure, do not let people compare the last 6 digits of the Tox ID, but 6 digits of 3000 rounds of SHA-256 of the Tox ID.
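Roughly what that comparison might look like (a sketch; the 3000 rounds and 6-character tail are just the numbers suggested above, not a standard):

```python
# Iterated-hash check word for a Tox ID (illustrative only).
import hashlib

def toxid_checkword(tox_id_hex: str, rounds: int = 3000) -> str:
    digest = bytes.fromhex(tox_id_hex)
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()[-6:]

# Both users run this locally and read the 6 characters to each other;
# an attacker now pays 3000 hash calls per candidate ID when grinding.
print(toxid_checkword("00" * 38))  # check word for a dummy 76-hex-char ID
```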

tttom commented 9 years ago

Comparing just a few digits is never gonna be secure, and you don't need to, you can:

The few remaining users that don't fall in the above categories could still:

I am not a dev, but I don't see any disadvantages here. Happy to hear any counter arguments though.

aaannndddyyy commented 9 years ago

No counter-arguments here, except that OTR is another protocol which is not (yet) available on Tox. And it should not be a client-side plugin, as it must work with all Tox users no matter which client they use; otherwise you lose flexibility. This is why I advocate either implementing OTR's authentication system (socialist millionaires) or the above-outlined HMAC approach.

aaannndddyyy commented 9 years ago

ZRTP is nice, but it relies on you knowing the voice of your friend. What about voice imitators? I'm not opposed to ZRTP, but rather see it as one option, with the above or OTR as the other option. Paranoid users could even run both methods. And as none of them is mandatory, a user who doesn't care about them, or who added contacts via QR code, does not need to bother running them. Re brute force: "A 16-bit SAS, for example, provides the attacker only one chance out of 65536 of not being detected." (from https://en.wikipedia.org/wiki/ZRTP) And there is https://github.com/traviscross/libzrtp

srkunze commented 9 years ago

@tttom There is one thing that we need to remember: nobody, and I really mean NOBODY, will EVER check the entire key manually. Maybe I am exaggerating, but you get my point. That simply is not an option, at least not for 99.99% of human beings on this planet.

Could somebody explain to me why comparing some digits is a bad idea? I find it simple and easy to do.

I would like to see how often such collisions might occur. Especially, how many attempts it takes to generate a Tox ID that exactly matches the last n human-readable digits. And even more, how long it takes when we do not compare the plain Tox ID but 3000 rounds of SHA-256 hashes. I would say that increases the time to find a collision significantly.
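A back-of-the-envelope estimate of that work factor (illustrative numbers only):

```python
# Expected work to grind an ID whose check word matches n hex characters.
def expected_work(n_hex_chars: int, rounds_per_candidate: int = 1) -> int:
    return 16 ** n_hex_chars * rounds_per_candidate

print(expected_work(6))        # ~16.8 million candidate IDs on average
print(expected_work(6, 3000))  # ~50 billion hash calls with 3000 rounds
```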

If we even find a way to change the Tox ID (moving to another Tox ID for whatever reason), that really makes this type of attack useless (at least IMO).

ArchangeGabriel commented 9 years ago

Look at PGP. People used to compare only the last 8 digits, but that's not enough; it's very easy today to generate the wanted digits in a very small amount of time. Just look at Sam Hocevar's two latest keys, for instance: http://pgp.mit.edu/pks/lookup?search=sam+hocevar&op=index

And in fact, I even remember having found an article on generating a very similar full GPG ID (that's 40 hex digits) to fool people who just look at the resemblance of the full ID.

So, if you want to be careful, you need to check the whole ID digit by digit, and that’s what we do for PGP. So I agree that you’re exaggerating, but also agree that most end-users won’t check that.

Sure, comparing 8 digits after 3000 rounds of SHA-2 would be better, but only for some time, and only if you also check the 8 digits before hashing; otherwise it's just 3000 times harder to fool you, which is not enough for me. At some point in the future it might not be enough anyway, and calculating 3000 rounds of SHA-2 each time you want to check seems like too much to me.

I think the best approach is to implement all of the available methods and explain to users which one is best given how they know the other person. If you know their voice, go for ZRTP. If you know them well enough to have a specific question, go for that.

If you are able to meet IRL and want to exchange something simple, give a QR code of your ID, or share a secret at that time and use socialist millionaire.

However, I must disagree on checking digits (even all of them) by phone or any other insecure channel: you might already be MITM'd there, and then you're fooled.

srkunze commented 9 years ago

Hmm. You may be right. But 3000 is not fixed; it is up to the clients to calculate the hashes, so new clients can simply use a higher number of rounds.

And, it only needs to be done once as your client can save your hashed ID.

Well, anyway, I agree on implementing several methods of authentication.