evelyne-ringoot opened 1 month ago
Thanks for reporting. It would be interesting to profile further using rocprof
and compare the trace with CUDA Nsight to see where the slowdown occurs when copies are involved. It seems the extra copies on AMDGPU somehow keep the device busy, preventing it from running the compute tasks at the expected performance for small arrays.
Update: if, in the same code, `timings[1,i] = mybelapsed(A,B)` is commented out, the second `belapsed` becomes slow too; I am really confused now.
Creating a multitude of small copies for benchmarking slows AMDGPU.jl down a lot, something not observed in CUDA.jl. The solution for this specific code is to avoid allocations altogether, but that is (maybe?) not possible for every kind of code. (I also remember having had some issues with BenchmarkTools, but I cannot manage to reproduce them right now.) Sharing the code here for future reference:
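For future readers, here is a minimal sketch of the pattern being described, not the exact code from the report: the helper names (`mybelapsed`, `mybelapsed_copy`) and sizes are assumptions, and `mybelapsed` is taken to wrap `@belapsed` with device synchronization.

```julia
using AMDGPU, BenchmarkTools

# Assumed helper: time a synchronized GPU matmul without any copies.
mybelapsed(A, B) = @belapsed AMDGPU.@sync $A * $B

# Assumed helper: same timing, but on fresh per-iteration copies --
# the pattern that appears to slow AMDGPU.jl down.
function mybelapsed_copy(A, B)
    Acpy, Bcpy = copy(A), copy(B)
    @belapsed AMDGPU.@sync $Acpy * $Bcpy
end

n, iters = 256, 10
A = AMDGPU.rand(Float32, n, n)
B = AMDGPU.rand(Float32, n, n)
timings = zeros(2, iters)
for i in 1:iters
    timings[1, i] = mybelapsed(A, B)       # fast on its own...
    timings[2, i] = mybelapsed_copy(A, B)  # ...slow once copies are made
end
```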
Adding `AMDGPU.unsafe_free!` in every iteration does not solve this problem either, nor does turning GC off and manually running

```julia
GC.enable(true); AMDGPU.unsafe_free!(Acpy); AMDGPU.unsafe_free!(Bcpy); GC.gc(); sleep(0.001); GC.enable(false)
```
between every iteration. The same code with AMDGPU replaced by CUDA (and `ROCblasgemm` by `Acpy*Bcpy`) shows barely any performance difference between the two variants; if anything, the copy version performs slightly better and more stably.

Versions:
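For reference, the CUDA.jl variant used for the comparison looks roughly like this (a hypothetical sketch mirroring the AMDGPU pattern, not the exact code): only the array type and sync macro change, and the BLAS call is replaced by a plain `Acpy * Bcpy`.

```julia
using CUDA, BenchmarkTools

# Same timing helpers as on the AMDGPU side, assumed names.
mybelapsed(A, B) = @belapsed CUDA.@sync $A * $B
function mybelapsed_copy(A, B)
    Acpy, Bcpy = copy(A), copy(B)
    @belapsed CUDA.@sync $Acpy * $Bcpy
end

n = 256
A = CUDA.rand(Float32, n, n)
B = CUDA.rand(Float32, n, n)
# On CUDA the two variants perform nearly identically.
t_nocopy = mybelapsed(A, B)
t_copy   = mybelapsed_copy(A, B)
```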
@jpsamaroo @vchuravy @pxl-th