Currently, all the high-level functions/objects, such as `sign`, `derive_group_pubkey`, `nonce_agg`, and the session context, use 32-byte arrays to represent the participant identifier. These functions internally convert the byte arrays to integers whenever they need to compute a Lagrange coefficient or blame a disruptive signer.
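As a minimal sketch of that conversion in BIP-style Python reference code (the constant `CURVE_ORDER` and the function names here are illustrative assumptions, not taken from the actual codebase):

```python
# Illustrative sketch; names are not from the reference code.
# secp256k1 group order: the bound a valid identifier scalar must stay below.
CURVE_ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def int_from_id(id_bytes: bytes) -> int:
    """Interpret a 32-byte participant identifier as a big-endian integer."""
    assert len(id_bytes) == 32
    value = int.from_bytes(id_bytes, "big")
    assert 0 < value < CURVE_ORDER, "identifier must be a valid nonzero scalar"
    return value

def lagrange_coeff(my_id: bytes, signer_ids: list) -> int:
    """Lagrange coefficient at x = 0 for `my_id` over the signing set:
    lambda_i = prod_{j != i} x_j / (x_j - x_i) mod CURVE_ORDER."""
    x_i = int_from_id(my_id)
    num, den = 1, 1
    for other in signer_ids:
        x_j = int_from_id(other)
        if x_j == x_i:
            continue  # skip our own identifier
        num = (num * x_j) % CURVE_ORDER
        den = (den * (x_j - x_i)) % CURVE_ORDER
    # pow(den, -1, m) computes the modular inverse (Python 3.8+)
    return (num * pow(den, -1, CURVE_ORDER)) % CURVE_ORDER
```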
Theoretically, a 32-byte array is needed for the participant identifier, since it represents a scalar value (i.e., an integer less than the curve order). In practice, however, typical *t*-of-*n* values will not be nearly that large, so the identifier could instead be represented as a 4- or 8-byte integer.
This is only possible when $id \le \text{MAXPARTICIPANTS}$. If we adopt a design where IDs can be random (see [this comment](https://github.com/siv2r/bip-frost-signing/issues/5#issuecomment-2175475776)), we must stick to using a 32-byte array.
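To make the bound concrete, here is a hypothetical sketch of a compact-id round trip; `MAX_PARTICIPANTS` and both helper names are assumptions for illustration:

```python
# Illustrative sketch; names are assumptions, not from the reference code.
MAX_PARTICIPANTS = 2**32 - 1  # largest id that fits in a 4-byte integer

def id_to_bytes32(id_int: int) -> bytes:
    """Expand a compact integer identifier into its 32-byte scalar encoding."""
    assert 0 < id_int <= MAX_PARTICIPANTS, "compact ids require id <= MAX_PARTICIPANTS"
    return id_int.to_bytes(32, "big")

def bytes32_to_id(id_bytes: bytes) -> int:
    """Recover the compact id; fails for random full-range 32-byte scalars."""
    value = int.from_bytes(id_bytes, "big")
    assert 0 < value <= MAX_PARTICIPANTS, "random 32-byte ids cannot be compacted"
    return value
```

The failing assertion in `bytes32_to_id` is exactly the random-id case: a uniformly random scalar almost certainly exceeds any practical `MAX_PARTICIPANTS`, so under that design the full 32-byte array has to be kept.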