The signed message is not hashed
Currently the message is not hashed in round_1.go (this is even explicitly stated in the comments on lines 21-22);
instead it is simply passed as a big.Int, without even checking that it lies in Z_q.
This makes it trivial to forge a message that matches the signature of a previous message,
since collisions are easy to find.
Not hashed: I deliberately leave it to the tss repo because different blockchains use different hash functions, e.g. Keccak-256 for Ethereum and SHA-256 for Bitcoin. The reason is to keep tss-lib as flexible as possible, so that it does not need to cope with different hashing strategies for different blockchains.
Checking in Z_q: I will check that it's in Z_q in the fix.
Find collisions: What do you mean?
Discrepancy in Finalize
In the last part of the signature generation protocol, each party is supposed to check that the final signature it computed
verifies under the group public key.
This check is currently missing; as a result, any malicious party could make the group reconstruct invalid signatures.
Notice that the reference paper also states this check is necessary, as the signing procedure can potentially produce invalid signatures.
Wrong value gets saved in round 2
In the loop at the end of round 2 responsible for sending the messages, it appears that only the latest message, meant for
the last party, is saved in the local temp variable:
// create and send messages
for j, Pj := range round.Parties().IDs() {
	if j == round.PartyID().Index { // <--- notice this is already defined as being i
		continue
	}
	r2msg := NewSignRound2MtAMidMessage(
		Pj, round.PartyID(), round.temp.c1jis[j], round.temp.pi1jis[j], round.temp.c2jis[j], round.temp.pi2jis[j])
	round.temp.signRound2MtAMidMessages[round.PartyID().Index] = &r2msg // <--- This seems wrong.
	round.out <- r2msg
}
Also notice that round.PartyID().Index is actually i.
As a consequence, in the Update() function, the message at index i is the one meant to be sent to the last party.
This seems wrong, as there is no need to store the last sent message at index i of that table.
If the goal is simply to have a valid message at index i so that Update() still verifies, it seems a bit too
much to overwrite it on every iteration.
Thanks, this line
round.temp.signRound2MtAMidMessages[round.PartyID().Index] = &r2msg
is not needed, as the local party only needs to store the messages peers sent to it.
I will delete it.
The PrepareForSigning() function is not checking its input
The input to PrepareForSigning() could be such that pax > len(ks) || pax > len(Xs), which would lead to a panic
in the loops.
// 2-4.
wi = xi
for j := 0; j < pax; j++ {
	if j == i {
		continue
	}
	kj, ki := ks[j], ks[i] // would panic if j >= len(ks)
	...
Recommendation: add sanity checks on the input.
Also, if kj == ki, notice that the function will panic as it tries to compute the modular inverse of (kj - ki) on the next line.
A little further on, at line 36, if ks[c] == ks[j], the ModInverse would panic as well.
Notice that these cases should never happen, but this is an exported function, which means it is best to check them from a defensive-coding point of view.
Thanks. Fixed!
Variable named after package
In Keygen/round_1.go, L88, the variable cmt is named like the package cmt, which hurts readability and maintainability.
cmt := cmt.NewHashCommitment(pGFlat...)
Renamed to commitment
Extra big int allocation done in Proofs
In mta/proofs.go, there are extra allocations of big.Int that could be avoided.
Round 1 CanAccept function only rejects if both messages are wrong
If msg1 is corrupted but msg2 is not, the function will still return true.
func (round round1) CanAccept(msg tss.Message) bool {
	if msg1, ok := msg.(SignRound1MtAInitMessage); !ok || msg1 == nil {
		if msg2, ok := msg.(*SignRound1CommitMessage); !ok || msg2 == nil {
			return false
		}
	}
	return true
}
While this is unlikely to be the case, it is probably something we want to check here.
The check against the message is a type assertion. If the msg is of the second type, then it cannot be of the first type.
So the check is actually saying "is this message neither type1 nor type2?"
Unnecessary computations in round 1
In round 1, in the case where j == i, the mta.AliceInit() function is still called, and the message is even stored in
round.temp.signRound1MtAInitMessages[i], whereas it is always skipped in the next rounds.
This could be skipped completely in round 1 as well, as per the specification.
for j, Pj := range round.Parties().IDs() {
	c, pi, err := mta.AliceInit(round.key.PaillierPks[i], k, round.key.NTildej[j], round.key.H1j[j], round.key.H2j[j])
	if err != nil {
		return round.WrapError(fmt.Errorf("failed to init mta: %v", err))
	}
	r1msg1 := NewSignRound1MtAInitMessage(Pj, round.PartyID(), c, pi)
	if j == i { // this could be at the start of the loop to spare us the AliceInit computations
		round.temp.signRound1MtAInitMessages[j] = &r1msg1 // unnecessary
		continue
	}
	round.temp.signRound1SentMtaInitMessages[j] = &r1msg1
	round.out <- r1msg1
}
GetRandomGeneratorOfTheQuadraticResidue() assumes safe primes
It seems the function GetRandomGeneratorOfTheQuadraticResidue(), used in GenerateNTildei(), works only if its input
is the product of two safe primes, that is, primes of the form p = 2q+1 where q is also prime.
But the primes used in GenerateNTildei() come from its arguments, and in
keygen/round_1.go it appears the primes generated using rsa.GenerateMultiPrimeKey() are not safe primes.
// Return a random generator of RQn with high probability. THIS METHOD
// ONLY WORKS IF N IS THE PRODUCT OF TWO SAFE PRIMES!
// https://github.com/didiercrunch/paillier/blob/d03e8850a8e4c53d04e8016a2ce8762af3278b71/utils.go#L39
func GetRandomGeneratorOfTheQuadraticResidue(n *big.Int) *big.Int {
	r := GetRandomPositiveRelativelyPrimeInt(n)
	return new(big.Int).Mod(new(big.Int).Mul(r, r), n) // YRO: could be simplified to r.Mod(r.Mul(r, r), n)
}
Unnecessary usage of multiple channels in concurrency situation
In the ecdsa signing rounds, there are multiple instances where 2n channels are created for n parties:
// it's concurrency time...
errChs1 := make([]chan tss.Error, len(round.Parties().IDs()))
for j := range errChs1 {
	if j == i {
		errChs1[j] = nil
		continue
	}
	errChs1[j] = make(chan tss.Error)
}
errChs2 := make([]chan tss.Error, len(round.Parties().IDs()))
for j := range errChs2 {
	if j == i {
		errChs2[j] = nil
		continue
	}
	errChs2[j] = make(chan tss.Error)
}
But one channel should be enough to gather all errors from all goroutines.
Valid point! Fixed!
Unnecessary initialization in loop
In signing/prepare.go, L21, the value ki := ks[i] is initialized in every loop iteration, whereas it could be
initialized just once, outside of the loop.
// 2-4.
wi = xi // ki could be initialized here
for j := 0; j < pax; j++ {
	if j == i {
		continue
	}
	kj, ki := ks[j], ks[i] // ki doesn't need to be initialized every time
	...
Furthermore, copying ks[j] into kj is not useful, as it is only used twice on the line just after it.
Thanks fixed.
Implementation does not support more parties than the threshold
It seems the current signing algorithm assumes exactly as many parties as required by the threshold; if there are more parties,
those with indexes larger than Threshold+1 will panic during signing.
See for instance how pax and len(bigWs) are not related in PrepareForSigning() (see lines 28, 29 and 39 in prepare.go),
which means bigWs[pax:] contains null pointers that would be used by the parties with indexes > pax in
round_2's Start().
ks (from which bigWs is computed) is filtered in the tss client; it is a subset of the ks generated during keygen. So it is the client's responsibility to make sure len(ks) == len(bigXs) == pax.
Currently, in the tss client, we support no more signers than t+1 because there is no centralized server to coordinate the signers, so it is an offline agreement for each signing session. (For example, due to network speed, if 3 clients start a signing session at the same time with t = 1, A may regard B as its co-signer while B regards C as its co-signer, and then everything is messed up.)
I added some checks because this function is exported:
if len(ks) != len(bigXs) {
	panic(fmt.Errorf("indices and bigX are not same length"))
}
if len(ks) != pax {
	panic(fmt.Errorf("indices is not in pax size"))
}
Notice that this is consistent with the spec, which says on page 16 "We assume that [the set of players participating] = t+1";
however, this does not seem to be checked within the library. Instead it is assumed, which could lead to panics if someone
were to run the protocol with more players than threshold+1.
Variable could be initialized outside of the loop
In prepare.go, line 36, the value ks[j] is accessed at each iteration of the inner loop; it could be copied to a local variable
in the outer loop:
bigWj := bigXs[j]
for c := 0; c < pax; c++ {
	if j == c {
		continue
	}
	// big.Int Div is calculated as: a/b = a * modInv(b,q)
	iota := modQ.Mul(ks[c], modQ.ModInverse(new(big.Int).Sub(ks[c], ks[j])))
	bigWj = bigWj.ScalarMult(iota)
}
Discrepancy in Finalize
Sorry, I will add it. [UPDATED 2019-10-17] Added: https://github.com/binance-chain/tss-lib/commit/9f398c92def051a66cb23d4f8087cf5d6422f7d4
Extra big int allocation done in Proofs
Thanks. Fixed
Unnecessary computations in round 1
Fixed, thanks!
Variable could be initialized outside of the loop
Thanks, Fixed