alliepiper opened this issue 3 years ago
I'd be interested to help out here!
I can help as well if you need extra hands.
Initial work is in NVIDIA/thrust#14.
@allisonvacanti what's next here? NVIDIA/thrust#14 helped close the gap, but I don't recall exactly how far it got us or what we still need to do. RAPIDS is making a push to formalize and analyze our benchmarks more, so migrating fully to nvbench is probably going to become a priority in the near future, and I'm happy to help out in making sure that we have sufficient feature parity with google bench.
CC @shwina in case you want to continue being involved too.
Basic thresholding and comparing multiple files were added in NVIDIA/thrust#48.
There's a lot we could still do, such as filtering the results by benchmark name/index and axis values, but I don't think these are essential right now.
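For illustration, filtering could look something like the sketch below; the `benchmarks`, `name`, and `axis_values` fields are assumptions about the JSON layout rather than the finalized format:

```python
# Hypothetical sketch of result filtering. The "benchmarks", "name", and
# "axis_values" fields are assumed names, not the finalized JSON layout.
def filter_results(results, name=None, axis_values=None):
    """Keep benchmarks matching `name` and all requested axis values."""
    selected = []
    for bench in results["benchmarks"]:
        if name is not None and bench["name"] != name:
            continue
        if axis_values and any(bench["axis_values"].get(key) != value
                               for key, value in axis_values.items()):
            continue
        selected.append(bench)
    return selected
```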
Are there any "must have" features for RAPIDS that we're missing?
BTW, I'm working on a branch that makes some changes to the JSON file layout to make things more consistent. I hope to have that merged by the end of the week, time permitting 🤞
I assume the changes you're referring to are NVIDIA/thrust#70? It looks great! 🎉
@robertmaynard @jrhemstad @harrism any thoughts on what we would need to see in nvbench to make the transition from gbench smooth for RAPIDS?
The gbench compare script had the ability to do a U Test between two samples to determine if there was a statistically significant difference between the populations. Do we have anything like that in nvbench yet?
It's very helpful when looking at small differences in performance to establish if the difference is just "noise" or actually meaningful.
@vyasr Yep! That PR has all of my pending changes to the JSON/Python stuff.
@jrhemstad We don't have anything like that at the moment.
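For reference, gbench's compare script does this with a Mann-Whitney U test. A minimal sketch of such a check, assuming the per-run timings have already been extracted from two result files:

```python
# Minimal sketch of a Mann-Whitney U test on two timing populations,
# mirroring what gbench's compare script does. `baseline_times` and
# `test_times` are assumed to be sequences of per-run timings in seconds.
from scipy.stats import mannwhitneyu

def significantly_different(baseline_times, test_times, alpha=0.05):
    """True if the difference between the populations is unlikely to be noise."""
    _, p_value = mannwhitneyu(baseline_times, test_times, alternative="two-sided")
    return p_value < alpha
```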
One feature that I'd like to see at some point is the ability to compare performance between different benchmarks that use the same axes.
For example, see NVIDIA/cccl#720, which points out that `thrust::all_of` is slower than `thrust::count_if`. It'd be nice to be able to write some automated tests that check the performance of equivalent algorithms and identify these sorts of issues.
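A rough sketch of that kind of check, assuming each benchmark's results have already been reduced to a mapping from axis-value tuples to mean GPU time (the names and the reduction step are hypothetical):

```python
# Hypothetical sketch: flag configurations where the algorithm expected to be
# at least as fast (e.g. thrust::all_of) is slower than its equivalent
# (e.g. thrust::count_if). Both arguments map axis-value tuples to mean GPU
# time in seconds; producing that mapping is assumed to happen elsewhere.
def find_inversions(expected_faster, reference, tolerance=1.05):
    inversions = []
    for config, ref_time in reference.items():
        time = expected_faster.get(config)
        if time is not None and time > ref_time * tolerance:
            inversions.append((config, time / ref_time))
    return inversions
```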
NVBench has a work-in-progress JSON output format and I'm working on a very basic python script to compare two JSON files.
We should grow this functionality into a more complete set of analysis tools. At minimum, it should cover the features provided by Google Benchmark's excellent comparison scripts.
If anyone is interested in writing some python to help with this, let me know. I'll update this issue once I have finalized the JSON output format.
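As a starting point, the comparison could be as simple as the sketch below; the top-level structure and field names (`benchmarks`, `name`, `mean_gpu_time`) are placeholders until the format is finalized:

```python
# Hypothetical sketch of a basic two-file comparison. The JSON structure
# ("benchmarks", "name", "mean_gpu_time") is a placeholder until the
# output format is finalized.
import json

def load_means(path):
    with open(path) as f:
        data = json.load(f)
    return {b["name"]: b["mean_gpu_time"] for b in data["benchmarks"]}

def compare(baseline_path, test_path, threshold_pct=5.0):
    baseline, test = load_means(baseline_path), load_means(test_path)
    for name in sorted(baseline.keys() & test.keys()):
        change = 100.0 * (test[name] - baseline[name]) / baseline[name]
        if abs(change) >= threshold_pct:
            print(f"{name}: {change:+.1f}%")
```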
Basic Regression Testing
```
compare.py baseline.json test.json
compare.py --gpu-threshold 5 baseline.json test.json
compare.py baseline.json --run test.exe -b 3 -a T=[I32,U64] -a Elements[pow2]=30
```

(Threshold options: `gpu-threshold`, `cpu-threshold`, `batch-threshold`.)
These should compare the result sets and report any differences that exceed the configured thresholds.
Analysis modes
Compare benchmarks with different names. Answers questions like: how does `T` vs. `U` perform for a variety of input sizes? These will need some way of specifying the sets of configurations to compare. Google Benchmark has worked out a general syntax for specifying this; we should adapt what they've done to use the NVBench axis syntax.
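Purely as an illustration of the interface (the `--compare` flag is hypothetical, not implemented), this could reuse the axis syntax from the examples above:

```
compare.py results.json --compare thrust::all_of thrust::count_if -a T=[I32,U64] -a Elements[pow2]=30
```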
Output
Ideally markdown-formatted, similar to NVBench's default output.
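Emitting such a table is straightforward; a minimal sketch with an illustrative column set:

```python
# Minimal sketch of markdown table output; the column set is illustrative.
def markdown_table(rows, headers=("Benchmark", "Baseline", "Test", "Change")):
    out = ["| " + " | ".join(headers) + " |",
           "|" + "|".join(" --- " for _ in headers) + "|"]
    out += ["| " + " | ".join(str(cell) for cell in row) + " |" for row in rows]
    return "\n".join(out)
```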