I see some doctests that verify comparisons, and a GitHub Actions workflow that tests compilation for a variety of targets, but nothing that actually verifies the implementation is constant time in a particular build. Given the general difficulty of shielding timing-sensitive algorithms from the optimizer, it would be reassuring to have something like that included.

Benchmarking is hard, of course, especially in CI. Are there approaches that would work for automated verification? Maybe something with the Linux retired-instructions performance event counter, or an instrumented emulator? Or actually running a timing attack to measure the statistical leakage?
You got me thinking, and I decided to use ptrace in single-step mode to count (and track) the executed instructions. It's less work than an instrumented emulator, and more robust than performance counters or timing measurements.
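To make the idea concrete, here is a minimal Linux-only sketch of that kind of harness, assuming the `nix` crate's ptrace bindings; `count_instructions` and `ct_eq` are made-up names for illustration, not the crate's actual code:

```rust
// Linux-only sketch.
// Cargo.toml (assumed): nix = { version = "0.27", features = ["process", "ptrace", "signal"] }
use nix::sys::ptrace;
use nix::sys::signal::{raise, Signal};
use nix::sys::wait::{waitpid, WaitStatus};
use nix::unistd::{fork, ForkResult};

/// Run `f` in a forked child under PTRACE_SINGLESTEP and return how many
/// instruction steps the child took before exiting.
fn count_instructions<F: FnOnce()>(f: F) -> u64 {
    match unsafe { fork() }.expect("fork failed") {
        ForkResult::Child => {
            // Ask to be traced, then stop so the parent takes over from a
            // well-defined point before `f` runs.
            ptrace::traceme().expect("PTRACE_TRACEME failed");
            raise(Signal::SIGSTOP).expect("raise(SIGSTOP) failed");
            f();
            std::process::exit(0);
        }
        ForkResult::Parent { child } => {
            // Wait for the child's initial SIGSTOP.
            waitpid(child, None).expect("waitpid failed");
            let mut steps = 0u64;
            loop {
                // Execute exactly one instruction in the child, then stop again.
                if ptrace::step(child, None).is_err() {
                    break; // child is already gone
                }
                match waitpid(child, None).expect("waitpid failed") {
                    WaitStatus::Exited(..) | WaitStatus::Signaled(..) => break,
                    _ => steps += 1,
                }
            }
            steps
        }
    }
}

// Stand-in for the comparison under test, not the crate's real implementation.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    let mut acc = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        acc |= x ^ y;
    }
    acc == 0 && a.len() == b.len()
}

fn main() {
    // The counts include closure-call and process-exit overhead, so what
    // matters is that they match across inputs, not their absolute value.
    let equal = count_instructions(|| {
        std::hint::black_box(ct_eq(b"abcdef", b"abcdef"));
    });
    let unequal = count_instructions(|| {
        std::hint::black_box(ct_eq(b"abcdef", b"abcdeg"));
    });
    println!("equal: {equal} steps, unequal: {unequal} steps");
}
```

A real test would assert that the counts (and ideally the sequence of instruction pointers, read with PTRACE_GETREGS at each stop) are identical for same-length inputs.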
Of course, that doesn't prove that the optimizer will never do the wrong thing. And since ptrace is very operating-system specific (from a quick look, the ptrace API is slightly different on the BSDs, Windows has its own mechanism, and I don't know about WASI), for now this test is restricted to Linux. But it has made me more confident in this code, and it might open the path to using `core::hint::black_box` instead of inline assembly in the future.
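For that last point, here is roughly what the two kinds of optimizer barrier look like; both functions are illustrative assumptions rather than the crate's actual code, and `black_box` is documented only as a best-effort hint:

```rust
use core::arch::asm;
use core::hint::black_box;

/// Inline-assembly barrier (illustrative): an empty asm block that claims to
/// read and write `value`, so the optimizer must treat it as opaque here.
#[inline]
fn barrier_asm(mut value: u32) -> u32 {
    unsafe {
        asm!("/* {0} */", inout(reg) value, options(nostack, nomem, preserves_flags));
    }
    value
}

/// `black_box` barrier: portable and safe, but only a best-effort hint that
/// the value may be used in arbitrary ways.
#[inline]
fn barrier_black_box(value: u32) -> u32 {
    black_box(value)
}

fn main() {
    // Both should compile down to returning the value unchanged; the asm
    // version is target-specific, while black_box works on every target.
    assert_eq!(barrier_asm(42), 42);
    assert_eq!(barrier_black_box(42), 42);
}
```

An instruction-count test like the one above is one way to gain confidence that swapping the assembly barrier for `black_box` wouldn't reintroduce data-dependent code.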