JuliaGPU / KernelAbstractions.jl

Heterogeneous programming in Julia
MIT License

How do I detect what GPU is installed on a host? #385

Open pitsianis opened 1 year ago

pitsianis commented 1 year ago

I want to build a standalone module that can run on any supported GPU. How do I detect which packages need to be loaded, so that I can use a pattern like

if isCUDAsupported()
  using CUDA
  backend = CUDABackend()
elseif isMetalsupported()
  using Metal
  backend = MetalBackend()
elseif ... # and so on for the other backends
else
  @warn "No supported accelerator detected. GPU kernels will be executed on the CPU!"
  backend = CPU()
end
vchuravy commented 1 year ago

I often say: The choice is up to the user.

Experience has shown that having GPU backends as hard dependencies can cause issues when one backend is quicker to update than another.

  1. Let the user choose, by taking a KA backend as an input argument to your function.
  2. Use Preferences.jl to let the user statically choose which backend to load.
  3. See CUDA.jl for an example of how to use a backend optionally: https://cuda.juliagpu.org/stable/installation/conditional/
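Option 2 might be sketched as follows. This is a hypothetical illustration, not code from KernelAbstractions.jl: the module name MyKernels and the preference key "backend" are invented, and the pattern assumes the package has a UUID in its Project.toml (which Preferences.jl requires).

```julia
# Hedged sketch of option 2: pin the backend at precompile time via Preferences.jl.
# Module name and preference key are hypothetical.
module MyKernels

using Preferences

# Read the preference; defaults to "CPU" when the user has set nothing.
const BACKEND_NAME = @load_preference("backend", "CPU")

# `@static if` resolves at precompile time, so only the chosen
# backend package is ever loaded.
@static if BACKEND_NAME == "CUDA"
    using CUDA
    const backend = CUDABackend()
elseif BACKEND_NAME == "Metal"
    using Metal
    const backend = MetalBackend()
else
    using KernelAbstractions
    const backend = CPU()
end

end # module
```

A user would then opt in from their own environment with something like `using Preferences, MyKernels; set_preferences!(MyKernels, "backend" => "CUDA")` and restart Julia so the module precompiles against the chosen backend.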
pitsianis commented 1 year ago

I was looking at option 3, but I am unsure how to set it up for all of the KA-supported backends.
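For what it's worth, a runtime version of option 3 extended to the four GPU backends might look like the sketch below. It assumes all four packages are direct dependencies, which is exactly what the advice above cautions against for a library, but it shows the `functional()` checks each backend package provides.

```julia
# Hedged sketch: runtime backend detection across the KA-supported GPU
# backends. Assumes CUDA.jl, AMDGPU.jl, oneAPI.jl, and Metal.jl are all
# direct dependencies (not recommended for a library).
using KernelAbstractions
using CUDA, AMDGPU, oneAPI, Metal

function detect_backend()
    # Each `functional()` call reports whether that backend has a usable
    # device and driver on this host.
    CUDA.functional()   && return CUDABackend()
    AMDGPU.functional() && return ROCBackend()
    oneAPI.functional() && return oneAPIBackend()
    Metal.functional()  && return MetalBackend()
    @warn "No supported accelerator detected. GPU kernels will be executed on the CPU!"
    return CPU()
end
```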

Great job, by the way. On the first try, I got a non-trivial bitonic sort to perform much better than ThreadsX.sort! on an M2 and a Tesla P100.

Is the KA implementation expected to be 15-20% slower than the Metal version? Or am I doing something wrong?