dotnet / jitutils


Tool for exploring performance by varying JIT behavior #381

Closed AndyAyersMS closed 11 months ago

AndyAyersMS commented 11 months ago

Initial version of a tool that can run BenchmarkDotNet (BDN) over a set of benchmarks in a feedback loop. The tool can vary JIT behavior, observe the impact of this modification on jitted code or benchmark perf, and then plan and try out further variations in pursuit of some goal (say, higher perf or smaller code).
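A hedged sketch of that feedback loop, in Python for brevity. All names here (`Variation`, `run_benchmark`, `explore`) are illustrative, not the tool's actual API, and the perf model inside `run_benchmark` is faked:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Variation:
    cse_mask: int  # bitmask of CSEs the JIT is allowed to perform

def run_benchmark(name: str, v: Variation) -> float:
    # Stand-in for running BDN under the variation and parsing its ETW.
    # Fake model: each enabled CSE shaves a bit off the perf score
    # (lower is better), just so the loop has something to observe.
    return 100.0 - bin(v.cse_mask).count("1")

def explore(name: str, num_cses: int, budget: int) -> Variation:
    # Feedback loop: start from the JIT's default (all CSEs enabled),
    # try disabling one CSE at a time, keep whatever scores best.
    best = Variation((1 << num_cses) - 1)
    best_score = run_benchmark(name, best)
    for i in range(min(budget, num_cses)):
        trial = Variation(best.cse_mask & ~(1 << i))
        score = run_benchmark(name, trial)
        if score < best_score:
            best, best_score = trial, score
    return best
```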

Requires access to InstructionsRetiredExplorer as a helper tool, for parsing the ETW traces that BDN produces. Also requires a local enlistment of the performance repo. You will need to modify file paths within the source to adapt all this to your local setup. Must be run with admin privileges so that BDN can collect ETW.

The only supported variation right now is modification of which CSEs we allow the JIT to perform for the hottest Tier-1 method in each benchmark. If a benchmark does not have a sufficiently hot Tier-1 method, then it is effectively left out of the experiment.
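To see why exploring all variations is infeasible for CSE-heavy methods: a method with n CSEs has 2^n subsets that could be allowed. A small illustrative enumeration (not the tool's code):

```python
from itertools import combinations

def cse_subsets(num_cses: int):
    # Yield every subset of {0..num_cses-1} as a bitmask: 2**num_cses
    # subsets in total, which already exceeds a million at 20 CSEs.
    for k in range(num_cses + 1):
        for combo in combinations(range(num_cses), k):
            yield sum(1 << i for i in combo)
```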

The experiments on each benchmark are prioritized to explore variations in performance for subsets of the currently performed CSEs. For methods with many CSEs we can realistically afford to explore only a small fraction of all possibilities, so we try to bias the exploration toward CSEs that have higher performance impact.
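One simple way such biasing could work, sketched as an assumption rather than the tool's actual strategy: probe each CSE individually, then rank candidates by the perf swing each one causes so that subset experiments focus on the high-impact CSEs first.

```python
def rank_cses(baseline: float, solo_off_scores: dict) -> list:
    # solo_off_scores[i] = perf score measured with only CSE i disabled.
    # Rank CSEs by the absolute perf swing they cause, biggest first,
    # so later subset experiments concentrate on high-impact CSEs.
    return sorted(solo_off_scores,
                  key=lambda i: abs(solo_off_scores[i] - baseline),
                  reverse=True)
```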

Results are locally cached so that rerunning the tool will not rerun experiments.

Experiments are summarized in a CSV file whose schema lists benchmark name, number of CSEs, code size, perf score, and perf.
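For illustration, emitting that schema with Python's csv module; the exact column names are guessed from the description above:

```python
import csv, io

# Assumed column names matching the described schema.
FIELDS = ["benchmark", "num_cses", "code_size", "perf_score", "perf"]

def summarize(rows) -> str:
    # One CSV row per benchmark experiment summary.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```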

AndyAyersMS commented 11 months ago

cc @dotnet/jit-contrib

I have lots of ideas for building on this, but wanted to get an initial version checked in somewhere.