@daira:
If I'm reading correctly, we're decoding a point representation (outside this class), re-encoding it here, and relying on other code to have enforced a canonical representation. That seems ugly and error-prone.
@str4d
This is an artifact of @ebfull's desire to represent Signature in-memory as the deserialized values. We could alternatively represent it as the serialized bytes, and deserialize it every time we verify the signature. Or we could store both Rbar and R in the signature, which I don't like because it feels like it could get out of sync.
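As a rough sketch of the "store the serialized bytes" alternative (names like `Signature`, `read`, and `write` here are illustrative, not the actual sapling-crypto API): the signature holds the 64-byte encoding exactly as received, split into R̲ and S̲, and point/scalar deserialization would happen only inside verification.

```rust
// Hypothetical sketch: a Signature that keeps the wire bytes as-is.
// Deserialization of rbar into a point (and sbar into a scalar) would
// be deferred to verify-time, so there is only one representation to
// keep canonical.
#[derive(Clone, PartialEq, Debug)]
struct Signature {
    rbar: [u8; 32], // encoding of R, exactly as it appeared on the wire
    sbar: [u8; 32], // encoding of S
}

impl Signature {
    fn read(bytes: &[u8; 64]) -> Signature {
        let mut rbar = [0u8; 32];
        let mut sbar = [0u8; 32];
        rbar.copy_from_slice(&bytes[..32]);
        sbar.copy_from_slice(&bytes[32..]);
        Signature { rbar, sbar }
    }

    fn write(&self) -> [u8; 64] {
        let mut out = [0u8; 64];
        out[..32].copy_from_slice(&self.rbar);
        out[32..].copy_from_slice(&self.sbar);
        out
    }
}
```

With this layout, serializing a signature is a byte copy and can never disagree with what was verified, at the cost of re-deserializing R on every verification.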
@daira:
It was an intentional part of the design (inherited from EdDSA) that you hash the representation of R exactly as given in the signature. This implies that any change to the representation R̲ results in a completely different hash, so it is not possible to malleate the signature by changing R̲. (It might still be malleable by changing S̲, but that is a simpler encoding which can be checked with a plain integer comparison.)
I realize it is equivalent if the representation of R is ensured to be canonical, but then you have two places to look in the code to check that the signature is being verified correctly. As specified, a RedDSA signature is a byte sequence, not a (point, integer) pair, and so I recommend that it be implemented that way for ease of auditing.
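The property being described can be shown with a toy sketch (the hash here is a stand-in `DefaultHasher`, not the actual RedDSA hash-to-scalar, and `challenge` is a hypothetical name): because the challenge input includes R̲ exactly as the bytes appear in the signature, any change to those bytes, even to a supposedly equivalent encoding of the same point, yields a different challenge.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy challenge computation: hashes the *byte representation* R̲ from
// the signature, never a decoded point, so malleating R̲ changes the
// challenge and the signature fails to verify.
fn challenge(rbar: &[u8; 32], vk: &[u8; 32], msg: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    rbar.hash(&mut h); // R̲ hashed as raw bytes, exactly as given
    vk.hash(&mut h);
    msg.hash(&mut h);
    h.finish()
}
```

Flipping a single bit of R̲ produces a different challenge, which is the point of hashing the representation rather than the decoded value: there is nothing left for canonicality checks elsewhere in the code to get wrong.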
Source: https://github.com/zcash-hackworks/sapling-crypto/pull/64#discussion_r183073610