Closed OussamaDanba closed 4 years ago
Thanks, Oussama. I see two options: 1) revert the definition of `NTRU_SAMPLE_IID_BYTES` and `NTRU_SAMPLE_FG_BYTES`. Then, in the avx implementation, change the buffer lengths to `PAD32(NTRU_SAMPLE_IID_BYTES)` and `PAD32(NTRU_SAMPLE_FG_BYTES)` and fill the excess with zeros. The two implementations will make identical use of the output of `randombytes`, which will fix the KAT issue.
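Option 1 could look roughly like this minimal C sketch. The `PAD32` definition and the parameter values are assumptions for illustration, not copied from the actual `params.h`:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Assumption: PAD32 rounds up to a multiple of 32 bytes, as the
 * avx2 code needs for aligned vector loads. */
#define PAD32(X) ((((X) + 31) / 32) * 32)

/* Hypothetical parameter set, e.g. ntruhps2048509 with NTRU_N = 509. */
#define NTRU_N 509
#define NTRU_SAMPLE_IID_BYTES (NTRU_N - 1)

/* Option 1: the vectorized code works on a 32-byte-padded buffer,
 * but only NTRU_SAMPLE_IID_BYTES of it come from randombytes; the
 * excess is zeroed. Both implementations then consume the same
 * number of random bytes, so the KATs agree. */
static void prepare_iid_buffer(uint8_t out[PAD32(NTRU_SAMPLE_IID_BYTES)],
                               const uint8_t rnd[NTRU_SAMPLE_IID_BYTES])
{
    memcpy(out, rnd, NTRU_SAMPLE_IID_BYTES);
    memset(out + NTRU_SAMPLE_IID_BYTES, 0,
           PAD32(NTRU_SAMPLE_IID_BYTES) - NTRU_SAMPLE_IID_BYTES);
}
```

The zeroed tail is what keeps the two implementations in sync: the extra bytes exist only for alignment and never touch the `randombytes` stream.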
2) Expand all of the randomness that we need using SHAKE (or NIST's "seedexpander" function). This will break the old KAT values, but might prevent problems like this in the future.
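Option 2 amounts to calling `randombytes` once for a short seed and deriving everything else from an XOF, so internal buffer-size changes can never shift the random stream. A self-contained C sketch follows; note the stand-in expander is deliberately trivial and insecure, standing in for `shake256` from fips202.h so the sketch compiles on its own, and the byte counts are made-up placeholders:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* NOT a real XOF: a toy stand-in for shake256(out, outlen, seed,
 * seedlen), used here only to keep the sketch self-contained. */
static void toy_xof(uint8_t *out, size_t outlen,
                    const uint8_t *seed, size_t seedlen)
{
    for (size_t i = 0; i < outlen; i++)
        out[i] = seed[i % seedlen] ^ (uint8_t)(i >> 3) ^ (uint8_t)i;
}

/* Hypothetical byte counts; the real values would come from params.h. */
#define FG_BYTES 1016
#define RM_BYTES 1016

/* Option 2: one randombytes call produces a 32-byte seed, and a
 * single XOF call expands it into all sampling randomness. The KATs
 * then depend only on the seed, not on internal buffer lengths. */
static void expand_sampling_randomness(uint8_t fg[FG_BYTES],
                                       uint8_t rm[RM_BYTES],
                                       const uint8_t seed[32])
{
    uint8_t stream[FG_BYTES + RM_BYTES];
    toy_xof(stream, sizeof stream, seed, 32);
    memcpy(fg, stream, FG_BYTES);
    memcpy(rm, stream + FG_BYTES, RM_BYTES);
}
```

Because expansion is deterministic in the seed, padding a buffer for avx2 would no longer change which bytes each sampler sees.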
You've advocated for 2 in another context. I will accept a patch if you want to go this way.
I'd personally go for option one. It avoids changing the KATs and keeps the ref implementation fairly minimal. The second option would be more performant, but given that ntru is already quite fast and wouldn't gain much from it, I'd do that later, or in implementations where it's really needed.
OK, are you making the change? Or should I?
I'd appreciate it if you could make the change.
On Tue, 29 Oct 2019 at 17:46, John Schanck notifications@github.com wrote:
In commit 951a12a `NTRU_SAMPLE_IID_BYTES` changes from `NTRU_N-1` to `PAD32(NTRU_N-1)`. This has the result that `NTRU_SAMPLE_FG_BYTES` and `NTRU_SAMPLE_RM_BYTES` become longer, which in turn leads to `randombytes` retrieving more random bytes. As a result the polynomials are different, which means the generated KATs no longer correspond to the KATs in the NIST submission.

Reverting this change makes it produce correct KATs again, although the vectorized `sample_iid` is likely to be incorrect then (I haven't tested this).