RKSimon opened this issue 6 years ago (status: Open)
> This is a good idea.
>
> From a design perspective I think it's better to have a script that runs llvm-mca and llvm-exegesis separately so that the two tools don't depend on each other.
I think that makes sense. Internally we're using a script that takes 'real-world' and semi-randomized instruction sequences and compares llvm-mca's estimated IPC against the IPC measured with Linux perf, using a simple asm harness that runs each snippet in a loop. We're getting some nice results from that; we should potentially consider upstreaming it to llvm/utils or similar. Right now it's quite btver2-specific, though.
We could certainly consider expanding that to incorporate llvm-exegesis results.
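Not the internal script mentioned above, but a minimal sketch of the approach for an x86-64 Linux host: wrap the snippet body in a counted loop, get the estimated IPC from llvm-mca's summary output, and measure IPC with `perf stat`. The loop template, iteration count, output paths, and example snippet are all illustrative assumptions.

```python
#!/usr/bin/env python3
"""Illustrative sketch: compare llvm-mca's estimated IPC for an asm snippet
against the IPC measured with Linux perf. Not the script from the thread."""
import re
import subprocess
import tempfile

# Hypothetical harness: run the snippet body in a counted loop. The loop
# bookkeeping (dec/jnz) is also measured, which slightly skews the result;
# a real harness would account for that.
LOOP_TEMPLATE = """\\
    .globl main
main:
    movl $100000000, %ecx
1:
{body}    decl %ecx
    jnz 1b
    xorl %eax, %eax
    ret
"""

def mca_ipc(snippet: str, cpu: str = "btver2") -> float:
    """Feed the snippet to llvm-mca on stdin and parse the 'IPC:' summary line."""
    out = subprocess.run(["llvm-mca", f"-mcpu={cpu}"],
                         input=snippet, capture_output=True,
                         text=True, check=True).stdout
    return float(re.search(r"^IPC:\s+([\d.]+)", out, re.M).group(1))

def measured_ipc(snippet: str) -> float:
    """Assemble the snippet into the loop harness and compute IPC from
    'perf stat -e instructions,cycles' (CSV mode writes to stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".s", delete=False) as f:
        f.write(LOOP_TEMPLATE.format(body=snippet))
        asm_path = f.name
    subprocess.run(["clang", asm_path, "-o", "/tmp/mca_harness"], check=True)
    perf = subprocess.run(
        ["perf", "stat", "-x,", "-e", "instructions,cycles", "/tmp/mca_harness"],
        capture_output=True, text=True, check=True).stderr
    counts = {}
    for line in perf.splitlines():
        fields = line.split(",")
        if len(fields) >= 3 and fields[0].isdigit():
            counts[fields[2].split(":")[0]] = int(fields[0])  # strip ':u' suffix
    return counts["instructions"] / counts["cycles"]

if __name__ == "__main__":
    snippet = "    addl %esi, %edi\n    imull %edx, %edi\n"
    print(f"llvm-mca estimated IPC: {mca_ipc(snippet):.2f}")
    print(f"perf measured IPC:      {measured_ipc(snippet):.2f}")
```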
This is a good idea.
From a design perspective I think it's better to have a script that runs llvm-mca and llvm-exegesis separately so that the two tools don't depend on each other.
For llvm-exegesis the inputs would be:
I'm unsure how precise we can be on the number of cycles/uOps.
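A hedged sketch of what such a standalone driver could look like: llvm-mca and llvm-exegesis run as independent subprocesses over the same snippet file, and their cycle/uOp numbers are printed side by side. The flag spellings (`-mcpu=`, `-mode=inverse_throughput`, `-snippets-file=`) exist in the real tools; the snippet path and the deliberately shallow output parsing are assumptions.

```python
#!/usr/bin/env python3
"""Sketch of a standalone driver: llvm-mca and llvm-exegesis are invoked as
independent processes on the same snippet file; output parsing is kept
shallow to avoid guessing exact report schemas."""
import re
import subprocess
import sys

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True, check=True)

def mca_prediction(path: str, cpu: str) -> dict:
    """Pull the predicted totals out of llvm-mca's summary block."""
    out = run(["llvm-mca", f"-mcpu={cpu}", path]).stdout
    stats = {}
    for key in ("Total Cycles", "Total uOps", "IPC", "uOps Per Cycle"):
        m = re.search(rf"^{key}:\s+([\d.]+)", out, re.M)
        if m:
            stats[key] = float(m.group(1))
    return stats

def exegesis_measurement(path: str) -> str:
    """Measure the snippet on hardware; llvm-exegesis emits a YAML benchmark
    report, from which we keep just the measurement lines."""
    out = run(["llvm-exegesis", "-mode=inverse_throughput",
               f"-snippets-file={path}"]).stdout
    return "\n".join(l for l in out.splitlines()
                     if "measurements" in l or "value" in l)

if __name__ == "__main__":
    path = sys.argv[1]  # asm file with LLVM-EXEGESIS-* annotations
    print("llvm-mca prediction:", mca_prediction(path, cpu="btver2"))
    print("llvm-exegesis measurement:")
    print(exegesis_measurement(path))
```

Because each tool is invoked as a plain subprocess on a shared snippet file (llvm-exegesis's `# LLVM-EXEGESIS-*` annotations are ordinary asm comments, so llvm-mca should skip them), neither tool needs to know about the other, which matches the design constraint above.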
@llvm/issue-subscribers-tools-llvm-exegesis
Author: Simon Pilgrim (RKSimon)
Extended Description
For a given code snippet, it would be very useful for llvm-exegesis to compare the actual performance vs llvm-mca's prediction.
This could be driven by snippets taken from real code, or possibly generated by a fuzzer.
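As a sketch of the fuzzer-driven variant, semi-randomized instruction sequences could be generated and fed into the comparison above. The opcode pool, register set, and sequence length below are arbitrary illustrative choices, not a real ISA model.

```python
"""Sketch: generate semi-randomized x86 instruction sequences as inputs for
the llvm-mca vs. measured-performance comparison. Illustrative only."""
import random

OPCODE_POOL = [
    "addl {src}, {dst}",
    "imull {src}, {dst}",
    "xorl {src}, {dst}",
    "leal ({src},{dst}), {dst}",
]
REGS = ["%eax", "%ebx", "%edx", "%esi", "%edi"]  # %ecx reserved for loop counters

def random_snippet(n_insts: int = 8, seed=None) -> str:
    """Return n_insts random register-to-register instructions."""
    rng = random.Random(seed)
    lines = []
    for _ in range(n_insts):
        op = rng.choice(OPCODE_POOL)
        src, dst = rng.sample(REGS, 2)
        lines.append("    " + op.format(src=src, dst=dst))
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(random_snippet(seed=42))
```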