serai-dex / serai


1-round DKG without re-attempts #576

Open kayabaNerve opened 3 weeks ago

kayabaNerve commented 3 weeks ago

We currently have a 2-round DKG, where we perform re-attempts as necessary.

https://eprint.iacr.org/2024/397 proposes a 1-round DKG. Since we don't vary our threshold upon re-attempt, we can do the 1-round DKG where the first t entities to participate decide the key. Once we have at least t confirmations of a lack of malicious behavior, we have a usable key.

We actually want the full n to confirm the key's validity. Once we have at least t confirmations, we set a timer to expire (n - c)w + w after we received the t-th confirmation (where c is the number of confirmations so far and w is a per-participant window). If we receive a new confirmation, we recreate the timer. Only once we have n confirmations, or the timer expires, would we move forward.
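As a sketch, the timer logic can be written as follows (names are hypothetical, not Serai's actual API; `w` is treated as a fixed per-participant window):

```rust
// Sketch of the confirmation timer: once t confirmations arrive, wait
// (n - c)w + w, recreating the timer whenever a new confirmation comes in.
// Names are hypothetical, not Serai's actual API.
use std::time::Duration;

/// Returns how long to wait given `c` of `n` confirmations, or `None` if all
/// `n` confirmed and we may proceed immediately.
fn confirmation_timeout(n: u32, c: u32, w: Duration) -> Option<Duration> {
    if c >= n {
        return None;
    }
    Some(w * (n - c) + w)
}

fn main() {
    let w = Duration::from_secs(10);
    // 100 validators, 67 confirmed: wait (100 - 67) * 10s + 10s = 340s.
    assert_eq!(confirmation_timeout(100, 67, w), Some(Duration::from_secs(340)));
    // Everyone confirmed: proceed immediately.
    assert_eq!(confirmation_timeout(100, 100, w), None);
}
```

Each additional excluded participant extends the wait by another `w`, which is where the linear delay against DoS attacks comes from.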

This has the benefit that with 100 validators, performing a DoS on 33 of them to attempt to exclude them from the DKG incurs a time delay of 33w, adding resistance against such classes of attack.

Parties that don't participate on-time would be removed on-chain (as currently done).

If a broken key share is identified while the timer runs, we would reselect the participants to be the first t participants, excluding the malicious ones.

kayabaNerve commented 3 weeks ago

For the eVRF implementation, https://github.com/kayabaNerve/fcmp-plus-plus/ has all of the code necessary.

kayabaNerve commented 6 days ago

The issue with the above protocol is that there's a risk of creating a t-of-t multisig, as only t people confirm they received valid shares. The below is a proposal for a new method of transmitting the secret shares such that one can efficiently prove they transmitted correct shares.


To recap, we have an elliptic curve T with scalar field p. We then have elliptic curve E with field p and scalar field l.

The eVRF-premised DKG is initialized with n public keys on the elliptic curve E, one per participant. Each participant performs ECDHs on E between their key and points the group samples via hash-to-curve, producing deterministic, efficiently verifiable secret points C_i. The x coordinates of these points (specifically, the sum of the x coordinates of two points) become the coefficients of the polynomial over the scalar field of T.

The eVRF paper simply says to send the secrets to each party. We assume a verifiable broadcast channel which orders the messages broadcast over it. We broadcast the commitments to the coefficients (with the relevant eVRF proof) and the encrypted secret shares here.

The novel part now introduced is proving the encryptions are correct. We detail the existing scheme for reference. This is the same scheme used for the PedPoP DKG, which was audited by Cypher Stack:

1) When broadcasting the commitments, the receiver publishes an ephemeral public key to that session.
2) When sending secret shares, the sender generates an ephemeral secret key. They perform an ECDH of the ephemeral secret key and the ephemeral public key of the recipient to obtain the encryption key for a stream cipher (used to encrypt the message). They publish the ephemeral public key, a PoK for its DLog, and the encryption.
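As an illustration of the flow in step 2, here's a toy sketch: a multiplicative group mod a small prime stands in for the curve, a trivial XOR keystream stands in for a real stream cipher, and the PoK is omitted. This is not a secure construction, only the shape of the ECDH-to-cipher derivation.

```rust
// Toy sketch of the share-encryption flow: an ECDH-derived shared secret
// drives a stream cipher. Not secure; for illustration only.
const P: u64 = 2_147_483_647; // 2^31 - 1, a prime
const G: u64 = 7;

// Modular exponentiation, standing in for scalar multiplication.
fn pow_mod(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut r = 1;
    b %= m;
    while e > 0 {
        if e & 1 == 1 { r = r * b % m; }
        b = b * b % m;
        e >>= 1;
    }
    r
}

// Derive a toy keystream from the shared secret (a real scheme uses a proper
// KDF and stream cipher, e.g. ChaCha20).
fn keystream(shared: u64, len: usize) -> Vec<u8> {
    (0..len)
        .map(|i| (shared.wrapping_mul(0x9E37_79B9).rotate_left((i % 64) as u32) & 0xFF) as u8)
        .collect()
}

fn xor(msg: &[u8], ks: &[u8]) -> Vec<u8> {
    msg.iter().zip(ks).map(|(m, k)| m ^ k).collect()
}

fn main() {
    // Round 1: the receiver published an ephemeral public key B = G^b.
    let b = 123_456;
    let big_b = pow_mod(G, b, P);
    // The sender samples an ephemeral secret a, publishing A = G^a (with a PoK of a).
    let a = 654_321;
    let big_a = pow_mod(G, a, P);
    // Both sides derive the same shared secret: B^a == A^b.
    let shared = pow_mod(big_b, a, P);
    assert_eq!(shared, pow_mod(big_a, b, P));
    // The sender encrypts the secret share; the receiver decrypts with the
    // identical keystream.
    let share = b"secret share bytes";
    let ct = xor(share, &keystream(shared, share.len()));
    assert_eq!(xor(&ct, &keystream(shared, ct.len())), share.to_vec());
}
```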

This achieves quite strong properties.

It allows proving an encryption was malformed. If a received share fails to decrypt or is invalid, it can be broadcast, potentially with the ECDH and a proof the ECDH is correctly formed. Then:

- If the sender's message format is invalid, or their PoK is, the sender is faulty.
- If the format and PoK are correct, yet the ECDH isn't provided, the accuser is faulty.
- If the ECDH is provided with a faulty proof, the accuser is faulty.
- If the decrypted message is incorrect, the sender is faulty.
- If the decrypted message is correct, the accuser is faulty.
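The adjudication above can be sketched as a pure decision function (the struct fields are hypothetical simplifications of the actual checks):

```rust
// The blame cases as a pure decision function. Fields are hypothetical
// simplifications: format validation, PoK verification, ECDH reveal, ECDH
// proof verification, and validity of the decrypted share.
#[derive(Debug, PartialEq)]
enum Faulty {
    Sender,
    Accuser,
}

struct Blame {
    sender_format_valid: bool,   // was the sender's published message well-formed?
    sender_pok_valid: bool,      // did the sender's PoK of the ephemeral key verify?
    ecdh_provided: bool,         // did the accuser reveal the ECDH?
    ecdh_proof_valid: bool,      // does the accuser's proof of the ECDH verify?
    decrypted_share_valid: bool, // does the decrypted share match the commitments?
}

fn adjudicate(blame: &Blame) -> Faulty {
    // A malformed message or invalid PoK is the sender's fault.
    if !blame.sender_format_valid || !blame.sender_pok_valid {
        return Faulty::Sender;
    }
    // Failing to (validly) reveal the ECDH sinks the accusation.
    if !blame.ecdh_provided || !blame.ecdh_proof_valid {
        return Faulty::Accuser;
    }
    // With a valid ECDH, the decrypted share decides it.
    if !blame.decrypted_share_valid { Faulty::Sender } else { Faulty::Accuser }
}

fn main() {
    assert_eq!(
        adjudicate(&Blame {
            sender_format_valid: false,
            sender_pok_valid: true,
            ecdh_provided: false,
            ecdh_proof_valid: false,
            decrypted_share_valid: false,
        }),
        Faulty::Sender
    );
    assert_eq!(
        adjudicate(&Blame {
            sender_format_valid: true,
            sender_pok_valid: true,
            ecdh_provided: false,
            ecdh_proof_valid: false,
            decrypted_share_valid: false,
        }),
        Faulty::Accuser
    );
}
```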

It also ensures the only messages which decrypt are ones the sender knew the discrete logarithm of the ephemeral key for. If a sender wanted to burn their reputation, yet in doing so learn about another message, they could create a malformed message using someone else's ephemeral message key. Since the PoK is binding to the sender/receiver, and the sender cannot forge a PoK for someone else's key, this will fail and no ECDH will be included in the blame.

It cannot be directly applied to the eVRF model as we cannot announce ephemeral receiving keys in the first round. Instead, we reuse the eVRF keys. Since the eVRF keys are used to derive secrets of the form ECDH(eVRF key, hash to point output), this is safe since one cannot provide a PoK for the output of a hash to point (assuming a secure hash to point). It does somewhat overload the eVRF key, but it's safe in practice.

We propose the following scheme such that the encryption can have a ZK proof it's well-formed, removing the entire blame process:

1) When sending secret shares, the sender generates 2 * t ephemeral secret keys d_i, e_i for the curve E.
2) They perform an eVRF to prove that (d_i * R_i).x + (e_i * R_i).x is the discrete logarithm of a commitment D_i.
3) They define the encryption of the secret share s_i as s_i + (d_i * R_i).x + (e_i * R_i).x.

Anyone can verify the encrypted secret share is correct by checking enc_i * G == V_i + D_i (where enc_i is the encrypted share, G is a generator of the curve T, and V_i is the commitment to the expected secret share, derivable from the public commitments).
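As a toy illustration of this check, written multiplicatively over the integers mod a small prime in place of the curve T (all values hypothetical): the published ciphertext is the share plus the eVRF-proven mask, and the linear check passes exactly when the ciphertext matches the two commitments.

```rust
// Toy instance of the verification equation: the ciphertext enc = s + mask
// (mod the group order) satisfies G^enc == V * D, the multiplicative form of
// enc * G == V_i + D_i. A real implementation works over the curve T.
const P: u64 = 2_147_483_647; // 2^31 - 1, a prime; the group order is P - 1
const G: u64 = 7;

// Modular exponentiation, standing in for scalar multiplication.
fn pow_mod(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut r = 1;
    b %= m;
    while e > 0 {
        if e & 1 == 1 { r = r * b % m; }
        b = b * b % m;
        e >>= 1;
    }
    r
}

fn main() {
    let order = P - 1;
    let s = 31_337 % order;       // the secret share s_i
    let mask = 90_210 % order;    // (d_i * R_i).x + (e_i * R_i).x, proven via the eVRF
    let enc = (s + mask) % order; // the published encrypted share
    let v = pow_mod(G, s, P);     // V_i: commitment to s_i, from the polynomial commitments
    let d = pow_mod(G, mask, P);  // D_i: commitment to the mask
    // The public check: anyone can verify without learning s_i or the mask.
    assert_eq!(pow_mod(G, enc, P), v * d % P);
}
```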

This is notable as it never reveals an ECDH. It solely expects the CDH to be hard over the curve E, the output of the eVRF to be uniform (as proven by the eVRF paper), and the encryption of s_i to be computationally hiding (the commitments to each part mean it cannot be perfectly hiding).

This would ensure the result of the DKG is always a t-of-n so long as t honest parties participate.

kayabaNerve commented 3 days ago

We also could've used a class group for verifiable encryption, yet this solely relies on the (C/D?)DH over the embedded elliptic curve.

kayabaNerve commented 3 days ago

This isn't unbiased. By modifying the protocol to only use a subset of size t, selection of that subset enables bias. You'd need an unbiased, ahead-of-time definition of the subset (yet the proposed use case takes the first t participants). That's fine for our use cases, and it should be possible to make this unbiased with a second round (while maintaining robustness, via doing the first round over an alt generator H then revealing over G), but that's out of scope for now.