mlswg / mls-protocol

MLS protocol
https://messaginglayersecurity.rocks

Trivial DoS by a malicious client #21

katrielalex closed this issue 5 years ago

katrielalex commented 6 years ago

At the end of the London meetup we discussed a potential problem: malicious clients can send a bad copath (e.g. just random group elements), which will totally screw up the group state of anyone who processes the update. This was not treated as a valid attack in ART, whose threat model does not include malicious clients or DoS, but it is more of a problem here. In particular, one can imagine a group with the convention "invite anybody, delete people who misbehave" (e.g. like many IRC channels).

There are a few ways to handle this:

cascremers commented 6 years ago

I think, as Richard suggested in the London meeting, there may be a relatively simple way to leverage log N zero-knowledge proofs here. Off the top of my head, at the lowest level, the thing to prove is that a key on the copath is in fact g^(a b') for a secret new b' and known g^b' and g^a, without revealing b'. If we can make this work, it probably can be made to work for the whole copath.
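
One way to make that lowest-level statement precise (my phrasing, not from the meeting): the updater publishes Y = g^b' alongside the claimed copath key K and proves in zero knowledge that

```latex
\mathrm{PoK}\bigl\{\, b' \;:\; Y = g^{b'} \ \wedge\ K = (g^{a})^{b'} \,\bigr\},
\qquad \text{equivalently} \qquad \log_{g} Y = \log_{g^{a}} K .
```

This is the discrete-log-equality (Chaum-Pedersen) relation, which is also what the DLEQ-based proposal further down this thread relies on.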

Bren2010 commented 6 years ago

Since the model for TreeKEM is "replace the public key for node X by encrypting the new private key to one of the child public keys", you could prevent malicious key updates with publicly verifiable key escrow. The Delivery Service would verify the escrow and could then discard the proof -- users don't need to check it.

There's a construction for non-ECC Diffie-Hellman here. I was also looking at NTRUEncrypt and you could probably do key escrow like this:

  1. Encrypt the new private key (f, g) to public key h in two separate ciphertexts, e_1 = r_1*h + f and e_2 = r_2*h + g, and publish the new public key h' = p * f_q * g (where f_q is the inverse of f mod q).
  2. User provides Pedersen commitments on an Elliptic Curve to the coefficients of r_1 and r_2.
  3. User proves that the commitments are all to small values, -1 / 0 / 1.
  4. DS can use knowledge of e_{1,2} and h to derive commitments to f and g.
  5. User proves that these commitments are all to small values as well.
  6. DS derives a second commitment to g from its commitment to f, using the relation g = f * (1/p) * h'.
  7. User proves that the two commitments to g are equivalent.

Second encryption of the same private key to a different public key:

  1. User provides Pedersen commitments on an Elliptic Curve to the coefficients of r_3 and r_4.
  2. DS uses knowledge of e_{3,4} and h_2 to derive commitments to f and g.
  3. User proves that these commitments are the same as the first commitments to f and g.

I think the proof size is around 200-300 KB. Pedersen commitments are computationally binding and perfectly hiding, so I can't currently forge a proof, and a future quantum computer can't use the commitments to decrypt the private key.
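
For intuition, here is a minimal Python sketch of the Pedersen-commitment homomorphism that steps 4 and 6 rely on: because f = e_1 - r_1*h is an affine function of r_1 with public coefficients, the DS can derive a commitment to f from a commitment to r_1 without learning either value. This uses a toy modular-arithmetic group instead of an elliptic curve and collapses the NTRU convolution to a single scalar coefficient; all names and parameters are my own, for illustration only.

```python
# Toy sketch: Pedersen commitments and the affine homomorphism the DS uses.
import secrets

q = 1019                 # subgroup order (toy value)
p = 2 * q + 1            # safe prime, 2039
g = 4                    # first generator
h2 = 16                  # second generator; in a real system its discrete
                         # log w.r.t. g must be unknown to everyone

def commit(m, blind):
    """Pedersen commitment C = g^m * h2^blind (perfectly hiding,
    computationally binding)."""
    return (pow(g, m % q, p) * pow(h2, blind % q, p)) % p

# User commits to a secret value r1 (one coefficient of r_1, say).
r1, blind = secrets.randbelow(q), secrets.randbelow(q)
C_r1 = commit(r1, blind)

# k plays the role of the (public) NTRU key coefficient, renamed to avoid
# clashing with the Pedersen generator; e1 is the public ciphertext value.
k = 7
e1 = (123 + k * r1) % q

# DS derives a commitment to f = e1 - k*r1 without learning r1:
# Com(f) = g^e1 * C_r1^{-k}.
C_f = (pow(g, e1, p) * pow(C_r1, q - k, p)) % p

# The user, who knows r1 and blind, can open C_f with blinding -k*blind.
f = (e1 - k * r1) % q
assert C_f == commit(f, (-k * blind) % q)
```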

grittygrease commented 5 years ago

Another option that was discussed at the second interim is to use message franking to achieve a weaker property: the ability to retroactively prove to the server and the other group members that a bad update was sent. This doesn't allow the server to block broken updates from being distributed, but it does allow an affected member to report that the group has been split. The downside is that there is no good general recovery strategy other than tearing down the group.

Note: this would only protect against KEMing the wrong secret key to the correct public key (the DoS scenario), not against KEMing to the wrong public key.

bifurcation commented 5 years ago

On further discussion, two options:

In either case, the reporting party reveals the (incorrect) secret that was ECIES encrypted to it.

grittygrease commented 5 years ago

I sat down and whiteboarded this with some colleagues, and I think we came up with something that works. I would love someone to poke a hole in it immediately. I came up with one attack below and a potential mitigation.

Modifications to the protocol:

Say node A with the private key a (and public key aP) was given an incorrect update consisting of an ECIES ciphertext (c, rP), where c is the encryption of the node secret, r is the random value, and P is the public base point of the group. Node A can prove that this value is inconsistent with the other updates in the tree, without revealing its private key, with the following reveal:

arP, DLEQ(arP:rP :: P:aP)

With this information, any validator would compute k = KDF(arP), decrypt c with k, then know the node secret that was encrypted to node A. Since this node secret is bogus, they would hash it upwards with the public salt and compare with the salted hashes provided with the other updates until the discrepancy is found.
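
For concreteness, a minimal sketch of the Chaum-Pedersen DLEQ proof that this reveal relies on, written in Python over a small multiplicative group rather than an elliptic curve; the parameters are toy values and the function names are my own, so treat this as illustration only, not the proposed construction.

```python
# Toy DLEQ (Chaum-Pedersen) proof: node A reveals arP and proves
# log_{rP}(arP) == log_P(aP) without revealing a.
import hashlib
import secrets

q = 1019                 # subgroup order (toy value)
p = 2 * q + 1            # safe prime, 2039
g = 4                    # generator of the order-q subgroup (stands in for P)

def H(*vals):
    data = b"".join(v.to_bytes(8, "big") for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def dleq_prove(x, g1, h1):
    """Prove log_g1(g1^x) == log_h1(h1^x) without revealing x."""
    A, B = pow(g1, x, p), pow(h1, x, p)
    k = secrets.randbelow(q)
    t1, t2 = pow(g1, k, p), pow(h1, k, p)
    c = H(g1, h1, A, B, t1, t2)
    s = (k + c * x) % q
    return A, B, (c, s)

def dleq_verify(g1, h1, A, B, proof):
    c, s = proof
    t1 = (pow(g1, s, p) * pow(A, q - c, p)) % p   # g1^s * A^-c
    t2 = (pow(h1, s, p) * pow(B, q - c, p)) % p   # h1^s * B^-c
    return c == H(g1, h1, A, B, t1, t2)

# Node A's reveal: g plays the role of P, rp plays the role of rP from the
# ciphertext, a is A's private key; the proof shows arP was derived honestly.
a = secrets.randbelow(q)
r = secrets.randbelow(q)
rp = pow(g, r, p)
aP, arP, proof = dleq_prove(a, g, rp)
assert dleq_verify(g, rp, aP, arP, proof)
```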

Why the Schnorr NIZK is necessary: At first glance this seemed to be a valid solution, since the DLEQ proof prevents A from lying about their ECIES key in the computation of k. However, while writing this down, I came up with a potential adversarial construction that makes this reveal more dangerous.

A crafty adversary could choose the random value r to be related to a previously used random value in such a way that revealing arP would compromise a previous group state. For example, say that a previous update to A used the random value r (and therefore rP is public). The attacker B could choose the point included in their ECIES ciphertext to be rP + P = (r+1)P, together with a random ciphertext c. In this case, party A would be forced to reveal a(r+1)P, from which anyone can subtract the known public value aP to get arP, which was used to protect the previous group state. This makes it far less likely that A would report the attack, since doing so would reveal the previous group state and compromise confidentiality.

The Schnorr NIZK proves that an rP chosen by the attacker can't just be a manipulation of previous r values: they really have to know r. The caveat is that the attacker can still choose the same r as in a previous message and break confidentiality, but repeated r values should be strictly disallowed anyway.
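
A matching sketch of the Schnorr NIZK (same toy group and caveats as the DLEQ sketch above): producing the proof requires knowing the discrete log r behind the published rP, which is what rules out shipping a shifted copy of someone else's rP.

```python
# Toy Schnorr proof of knowledge of r for a published rP.
import hashlib
import secrets

q = 1019                 # subgroup order (toy value)
p = 2 * q + 1            # safe prime, 2039
g = 4                    # generator of the order-q subgroup (stands in for P)

def H(*vals):
    data = b"".join(v.to_bytes(8, "big") for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def schnorr_prove(r):
    R = pow(g, r, p)                   # rP, published with the ciphertext
    k = secrets.randbelow(q)
    t = pow(g, k, p)
    c = H(g, R, t)
    s = (k + c * r) % q
    return R, (c, s)

def schnorr_verify(R, proof):
    c, s = proof
    t = (pow(g, s, p) * pow(R, q - c, p)) % p   # g^s * R^-c
    return c == H(g, R, t)

r = secrets.randbelow(q)
R, proof = schnorr_prove(r)
assert schnorr_verify(R, proof)
```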