UoB-HPC / BabelStream

STREAM, for lots of devices written in many programming models

Updated hip kernels #139

Closed thomasgibson closed 1 year ago

thomasgibson commented 2 years ago

PR Summary:

Happy to discuss how much of this you want upstream. For reference (before/after) numbers, here are some quick results in double precision:

Reference (baseline - develop): ran on an MI210 with an array size of 2^28 elements (`-s $((2**28))`):

```
Function    MBytes/sec  Min (sec)   Max (sec)   Average (sec)
Copy        1406620.269 0.00305     0.00322     0.00308
Mul         1404059.445 0.00306     0.00316     0.00312
Add         1279936.374 0.00503     0.00522     0.00515
Triad       1272501.462 0.00506     0.00524     0.00516
Dot         745985.464  0.00576     0.00687     0.00626
```

Updated dot kernel:

```
Function    MBytes/sec  Min (sec)   Max (sec)   Average (sec)
Copy        1406789.357 0.00305     0.00310     0.00307
Mul         1407020.248 0.00305     0.00622     0.00315
Add         1279350.509 0.00504     0.00522     0.00515
Triad       1286205.887 0.00501     0.00525     0.00517
Dot         1297750.809 0.00331     0.00344     0.00338
```
thomasgibson commented 2 years ago

Thanks for your comment!

> Thanks for the PR. In general it looks OK, but I would like more details on the benefit for manually strip-mining the loop in the nstream kernel. I don't think we do this for any other model - why do we need to do it for HIP / AMD GPUs?

It's not strictly necessary; this was mostly a result of experimentation to generate vector load instructions from device memory in an attempt to boost the bandwidth performance of the kernel. This gave a consistent, albeit minor, increase in bandwidth performance for n-stream. It was mainly useful during experimentation, but I was hesitant to modify it any further, as I recall from our last conversation that we want to avoid significantly "ninja'd" code in the main repo.

This was more impactful in older versions of ROCm, but since 5.2 we're not seeing a huge benefit from doing this. This is probably for the best, since our compilers are generating better instructions from simpler code. There's still some room for improvement and this kernel in particular is one I'm looking at including in our HBM stressors/benchmarks. I'm happy to leave it mostly the way it is. Will update this PR accordingly.