siv2r opened this issue 5 months ago
> Option 2: we can check for invalid id values inside NonceAgg or PartialSigAgg
If we plan on doing these checks, we should also consider additional checks like:
$$\text{MIN\_PARTICIPANTS} \le \text{len}(id_{i..u}) \le \text{MAX\_PARTICIPANTS}$$
$$\text{MIN\_PARTICIPANTS} \le \text{len}(\text{pubnonce}_{i..u}) \le \text{MAX\_PARTICIPANTS}$$
which would require the NonceAgg and PartialSigAgg functions to also take MIN_PARTICIPANTS and MAX_PARTICIPANTS as input parameters.
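For illustration, a minimal Python sketch of such a bounds check, assuming a wrapper around an existing aggregation routine (`nonce_agg_checked` and `nonce_agg` are placeholder names, not the BIP's):

```python
def nonce_agg_checked(pubnonces, ids, min_participants, max_participants):
    # Both lists must describe the same set of contributing signers.
    if len(pubnonces) != len(ids):
        raise ValueError("pubnonce and id lists have different lengths")
    # MIN_PARTICIPANTS <= number of contributions <= MAX_PARTICIPANTS
    if not (min_participants <= len(ids) <= max_participants):
        raise ValueError("number of contributions out of range")
    return nonce_agg(pubnonces)  # assumed existing NonceAgg routine
```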
Participant IDs can be random, in which case $1 \le id_{i} \le \text{MAX\_PARTICIPANTS}$ will not be true.
If I'm understanding this correctly, with either option, it's the responsibility of the caller of the API to assemble the correct participant identifier list. We should certainly validate the data to the extent possible, but I think the only things we can check are for invalid secp256k1 scalars and duplicates.
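A sketch of what those two checks could look like, assuming ids arrive as 32-byte big-endian scalar encodings (the encoding and the function name are assumptions):

```python
# Order of the secp256k1 group (public constant).
SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def validate_ids(ids):
    seen = set()
    for id_bytes in ids:
        id_int = int.from_bytes(id_bytes, 'big')
        # Reject 0 and anything >= the group order: not a valid non-zero scalar.
        if not (1 <= id_int < SECP256K1_N):
            raise ValueError("participant id is not a valid non-zero scalar")
        # Reject duplicates: two signers must not share an identifier.
        if id_int in seen:
            raise ValueError("duplicate participant id")
        seen.add(id_int)
```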
> Participant IDs can be random, in which case $1 \le id_{i} \le \text{MAX\_PARTICIPANTS}$ will not be true.
In BIP DKG, participant ids are long-term pubkeys of the participants, but internally (when it comes to Lagrange coefficients), we just use indices 1...n.
I think there's some meta issue. With Jesse working on the implementation, Jonas and me working on BIP DKG, and Sivaram working on the signing BIP, it seems we have diverged on some design decisions, and also some of the terminology. We should probably synchronize, but I'm not entirely sure what's the best process. It may be a good idea to wait for Jonas, who is currently out of office.
> In BIP DKG, participant ids are long-term pubkeys of the participants, but internally (when it comes to Lagrange coefficients), we just use indices 1...n.
Interesting, I'm curious to learn more about how we map from pubkeys to indices. Currently, in the implementation, we pass the pubkey to a hashing function to generate an index hash, and we don't get monotonically increasing integers, but rather randomized hash integers.
> We should probably synchronize, but I'm not entirely sure what's the best process. It may be a good idea to wait for Jonas, who is currently out of office.
Some synchronous time when Jonas is available sounds great.
> Interesting, I'm curious to learn more about how we map from pubkeys to indices.
What we currently do in BIP DKG is simply to expect the caller to provide an (ordered) list of pubkeys, and the position in the list is the index (where the first index is 1 instead of 0). The caller is free to pre-sort the list if they explicitly don't care about ordering. This is similar to key aggregation in the MuSig2 BIP.
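A minimal sketch of that convention, with hypothetical names:

```python
def participant_index(pubkeys, my_pubkey):
    # pubkeys is the caller-provided (ordered) list of participant pubkeys;
    # callers that explicitly don't care about ordering can sort it first.
    # The index is the 1-based position in the list, e.g.
    # participant_index([pkA, pkB, pkC], pkB) == 2.
    return pubkeys.index(my_pubkey) + 1
```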
> Currently, in the implementation, we pass the pubkey to a hashing function to generate an index hash, and we don't get monotonically increasing integers, but rather randomized hash integers.
Yeah, this sounds like a great topic for bike shedding. :) IIRC we considered these (tiny) advantages of indices:
The disadvantage of hashing is that the implementer needs to be careful not to use index 0. But we found this risk to be acceptable because it's on the side of the implementation and not pushed to the user.
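For comparison, a hypothetical sketch of the hash-to-index approach (the real implementation may derive the index differently); the index-0 risk mentioned above shows up as an explicit check:

```python
import hashlib

# Order of the secp256k1 group (public constant).
SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def index_from_pubkey(pubkey_bytes):
    # Hash the pubkey and reduce to a scalar; the result is effectively
    # random rather than a small monotonically increasing integer.
    idx = int.from_bytes(hashlib.sha256(pubkey_bytes).digest(), 'big') % SECP256K1_N
    # The implementation must make sure index 0 is never used.
    if idx == 0:
        raise ValueError("pubkey hashed to index 0")
    return idx
```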
Ah, that makes sense.
In NonceAgg or PartialSigAgg, we can assign blame to a signer in two ways:
Method 1: Index of Invalid Value
Method 2: Participant Identifier
I went with Method 2 because FROST includes the participant identifier parameter, which is absent in MuSig2. However, this approach has the following issue: If the participant identifier list contains invalid ids, we can’t accurately assign blame.
How to fix this?
Option 1: we just assume all the values in the identifier list are valid.
Option 2: we can check for invalid id values inside NonceAgg or PartialSigAgg.
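To make Option 2 together with the Method 2 blame behavior concrete, a hedged sketch follows; `partial_sig_verify`, `partial_sig_agg`, and the error type are placeholders, not the BIP's actual interface:

```python
class InvalidContributionError(Exception):
    def __init__(self, participant_id):
        self.participant_id = participant_id  # identifier of the signer to blame

def partial_sig_agg_with_blame(psigs, ids, session_ctx):
    assert len(psigs) == len(ids)
    validate_ids(ids)  # e.g. the scalar/duplicate checks sketched earlier
    for psig, pid in zip(psigs, ids):
        # Assumed per-signer verification routine (placeholder name).
        if not partial_sig_verify(psig, pid, session_ctx):
            # Blame by participant identifier rather than by list position.
            raise InvalidContributionError(pid)
    return partial_sig_agg(psigs, session_ctx)  # assumed aggregation routine
```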
cc @jonasnick @real-or-random @jesseposner