maleadt closed this issue 6 months ago.
KernelAbstractions uses Atomix.jl since it is otherwise impossible to use atomic operations across backends.
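For context, here is a minimal sketch of what that cross-backend usage looks like, assuming a recent KernelAbstractions/Atomix API; the kernel and names below are purely illustrative, not code from either package:

```julia
using KernelAbstractions
using Atomix: @atomic

# Histogram-style kernel: Atomix.@atomic provides the atomic increment,
# so the same kernel source works on any KernelAbstractions back-end
# (CPU, CUDABackend, ...).
@kernel function histogram!(counts, @Const(labels))
    i = @index(Global, Linear)
    @atomic counts[labels[i]] += 1
end

# CPU usage; replacing CPU() and the arrays with their CUDA equivalents
# runs the identical kernel on the GPU.
labels = rand(1:4, 1_000)
counts = zeros(Int, 4)
histogram!(CPU(), 64)(counts, labels; ndrange = length(labels))
KernelAbstractions.synchronize(CPU())
```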
I will update UnsafeAtomicsLLVM for LLVM 5.0; #1790 is about reducing that dependency and actually implementing the atomics in CUDA.jl (I think I can do it in two steps).
Regarding the IO output, I don't remember why we didn't capture that. That should just be a quick PR to KA's test suite.
I think these have been fixed.
Now that a KA.jl back-end is part of CUDA.jl and is being tested on CI, I encountered a couple of issues:
I'm not sure why we're already using Atomix.jl here; ref. https://github.com/JuliaGPU/CUDA.jl/pull/1790?
cc. @vchuravy