ronaldtse opened 7 years ago
Having just a timing would already be useful, I think. Nevertheless, we could implement both approaches in a single tool (say, `rnp speed`). RNP could use perf's API, which can count CPU cycles (an example of its usage is here: https://github.com/IAIK/armageddon/blob/master/libflush/libflush/timing.c).
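For reference, a minimal sketch of cycle counting via `perf_event_open` (the same pattern libflush's timing.c uses); the measured call is just a placeholder and error handling is kept minimal:

```cpp
// Count CPU cycles around a region of code using the Linux perf API.
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

// No glibc wrapper exists for this syscall, so provide one.
static long perf_event_open(perf_event_attr *attr, pid_t pid, int cpu,
                            int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main()
{
    perf_event_attr attr;
    std::memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CPU_CYCLES;
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    int fd = perf_event_open(&attr, 0, -1, -1, 0); // measure this process, any CPU
    if (fd == -1) {
        std::perror("perf_event_open");
        return 1;
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* ... code under measurement, e.g. one encrypt/sign call ... */

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    uint64_t cycles = 0;
    read(fd, &cycles, sizeof(cycles));
    std::printf("cpu cycles: %llu\n", (unsigned long long) cycles);
    close(fd);
    return 0;
}
```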
Valgrind also has an interesting API which could be useful for checking heap usage (I'm thinking of the Massif tool: http://valgrind.org/docs/manual/ms-manual.html).
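For the heap side, a typical Massif run would look roughly like this (the rnp arguments here are placeholders, not actual invocations we have settled on):

```sh
# Record a heap profile for one command execution (placeholder arguments).
valgrind --tool=massif ./rnp --encrypt big-file.dat
# Massif writes massif.out.<pid>; ms_print renders the recorded heap profile.
ms_print massif.out.*
```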
There are probably other tools/libraries we could reuse.
Agreed on using the perf API and Massif, but I wonder whether we should apply the monitoring tools through the public API, directly to the internal code, or to both separately?
I think the performance of our public API is what's most interesting for us and for users of the API, in particular the API of `repgp` and/or `rnp(2)`, as that's where the processing happens.
As a first step, I would like to see something like Celero calling our API and performing the benchmarking.
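A rough sketch of what that could look like, assuming a hypothetical `rnp_encrypt_buffer()` wrapper around the public API (not an actual RNP function; the real benchmark would call repgp/rnp entry points):

```cpp
// Minimal Celero benchmark sketch for an "rnp speed"-style comparison.
#include <celero/Celero.h>
#include <cstdint>
#include <vector>

CELERO_MAIN

namespace {
std::vector<uint8_t> input(1 << 20, 0xAB); // 1 MiB test buffer

// Placeholder for a call into the public API (hypothetical).
void rnp_encrypt_buffer(const std::vector<uint8_t> &in)
{
    celero::DoNotOptimizeAway(in.size());
}
} // namespace

// 30 samples of 100 iterations each; the BASELINE is what the other
// benchmarks in the group are compared against in Celero's report.
BASELINE(RnpSpeed, Aes128Cfb, 30, 100)
{
    rnp_encrypt_buffer(input);
}

BENCHMARK(RnpSpeed, Sm4Ctr, 30, 100)
{
    rnp_encrypt_buffer(input);
}
```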
Agreed!
Description
Right now our benchmarks with GnuPG are rather crude.
We should have something like OpenSSL's `openssl speed -evp <algo>-<mode>`, so we can check the speed of the popular combinations (block cipher algorithm/mode, public-key algorithm, hash algorithm), such as `aes-cfb-rsa-sha256` or `sm4-ctr-sm2-sm3`. Later on we can test the AEAD modes (GCM/OCB) too.

Sample output for `openssl speed`:

On the other hand, we should also have benchmarks that trace CPU and memory usage during a run (e.g., output as CSV) so we can easily see the obvious problems.
We could produce a static trace per command execution, something like:
Such as using `pidstat` (on Linux), or something much more serious like http://www.brendangregg.com/perf.html.

Thoughts?
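For example, a minimal `pidstat` invocation that samples CPU and memory once per second (process name, sample count, and output path are placeholders):

```sh
# -u = CPU usage, -r = memory usage, -h = one line per sample;
# sample every second, 60 samples, for a running rnp process.
pidstat -u -r -h -p "$(pidof rnp)" 1 60 > rnp-trace.txt
```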
cc: @ni4 @dewyatt @flowher