Problem: Our current nullifier implementation doesn't constrain the nullifier to a unique value. It allows the prover to choose between the two square roots and thus between two different nullifiers (N and -N). This defeats the purpose of a nullifier, since the prover can double-spend after all.
Solution: Check that the parity of the square root returned by hashToGroup is 0.
Sketch of parity check:
import { Field, Provable, provable } from 'o1js';

function parity(x: Field) {
  // witness the low bit and the remaining high bits of x
  let [isOdd, xDiv2] = Provable.witness(provable([Field, Field]), () => {
    let x0 = x.toBigInt();
    return [Field(x0 & 1n), Field(x0 >> 1n)];
  });

  // range check for 253 bits
  // WARNING: this makes use of a special property of the Pasta curves,
  // namely that a random field element is < 2^254 with overwhelming probability
  // TODO: use 88-bit RCs to make this more efficient
  xDiv2.toBits(253);

  // constrain isOdd to be a bit, so that the decomposition is unique
  isOdd.mul(isOdd.sub(1)).assertEquals(0);

  // check composition: x = 2*xDiv2 + isOdd
  xDiv2.mul(2).add(isOdd).assertEquals(x);
  return isOdd;
}

function assertEven(x: Field) {
  parity(x).assertEquals(0);
}