alpaka-group / alpaka

Abstraction Library for Parallel Kernel Acceleration :llama:
https://alpaka.readthedocs.io
Mozilla Public License 2.0

`alpaka::getWarpSizes` incurs a noticeable overhead #2192

Open fwyzard opened 10 months ago

fwyzard commented 10 months ago

While porting the CMS pixel reconstruction from native CUDA to Alpaka, it was noticed that the use of the alpaka::getWarpSizes(device) function incurs a noticeable overhead.

See https://github.com/cms-sw/cmssw/pull/43064#issuecomment-1817590926 for the discussion.

A possible workaround is to cache the warp size in our code, instead of querying it for every event.

However, it would seem natural to cache this information within the Alpaka device objects, instead of querying the underlying back-end each time.
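The caching pattern being proposed could look roughly like this. This is a minimal sketch with hypothetical names, not the actual alpaka API: the back-end query (which in reality would go through the CUDA/HIP runtime) is stubbed out, and the device wrapper calls it exactly once at construction.

```cpp
#include <cstddef>

// Hypothetical stand-in for the back-end query (in real code this would
// call into the CUDA or HIP runtime, e.g. via a device-attribute query).
inline std::size_t queryWarpSizeFromBackend() {
    return 32;  // placeholder value for illustration
}

// Sketch of a device wrapper that caches the warp size at construction,
// so later calls never touch the back-end API again.
class Device {
public:
    Device() : m_warpSize{queryWarpSizeFromBackend()} {}

    // Cheap accessor: no API call and no mutex needed, because the value
    // is immutable after construction.
    std::size_t getWarpSize() const { return m_warpSize; }

private:
    std::size_t m_warpSize;
};
```

Because the value is set in the constructor and never mutated afterwards, concurrent reads from multiple threads are safe without any synchronization.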

fwyzard commented 10 months ago

I think that caching the warp sizes inside the device object would require

psychocoderHPC commented 10 months ago

IMO caching makes sense; we should store the value during device creation, then there will be no need for a mutex.

bernhardmgruber commented 10 months ago

Is there a CUDA device with a warpSize other than 32? I am almost in favor of hardcoding it ... Otherwise, we could just collect and cache the entire device properties struct (i.e. cudaDeviceProp), so we can also serve other values faster.
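Caching the whole properties struct would follow the same pattern. In this sketch, a hypothetical `DeviceProps` struct stands in for `cudaDeviceProp`, and the stubbed query stands in for a single `cudaGetDeviceProperties`-style call made at device creation; the field values are placeholders.

```cpp
#include <cstddef>

// Hypothetical stand-in for cudaDeviceProp, reduced to two fields
// for illustration.
struct DeviceProps {
    std::size_t warpSize;
    std::size_t multiProcessorCount;
};

// Hypothetical stand-in for a cudaGetDeviceProperties-style call.
inline DeviceProps queryPropsFromBackend() {
    return {32, 80};  // placeholder values
}

// Device wrapper that fills the whole struct once at construction;
// every property getter then reads from the cached copy, so the
// runtime API is only ever touched once per device.
class Device {
public:
    Device() : m_props{queryPropsFromBackend()} {}

    std::size_t getWarpSize() const { return m_props.warpSize; }
    std::size_t getMultiProcessorCount() const { return m_props.multiProcessorCount; }

private:
    DeviceProps m_props;
};
```

The advantage over caching individual values is that one back-end call serves every later property query, not just the warp size.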

fwyzard commented 10 months ago

Not that I know of.

But HIP devices can have a warp size of 32 or 64, depending on the GPU model and potentially on the environment settings.

psychocoderHPC commented 6 months ago

Partly solved by #2246. Nevertheless, we should cache all runtime-constant device properties within the device, so there is no need to query the API multiple times.