alpaka-group / alpaka

Abstraction Library for Parallel Kernel Acceleration :llama:
https://alpaka.readthedocs.io
Mozilla Public License 2.0

Improve the behaviour of alpaka buffers with respect to asynchronous operations #2417

Open fwyzard opened 2 weeks ago

fwyzard commented 2 weeks ago

While reviewing the use of alpaka buffer in the CMS code, we have seen a recurrent pattern that relies on the (undocumented) behaviour of the underlying back-ends.

Consider this example:

using Host = alpaka::DevCpu;
using HostPlatform = alpaka::PlatformCpu;
auto host = alpaka::getDevByIdx(HostPlatform{}, 0);

using Platform = ...;
using Device = alpaka::Dev<Platform>;
using Queue = alpaka::Queue<Device, alpaka::NonBlocking>;

Platform platform{};
auto device = alpaka::getDevByIdx(platform, 0);
Queue queue(device);
auto dbuf = alpaka::allocBuf<Elem, Idx>(device, extent);
{
#if use_1_a
    // 1.a allocate the host buffer in pageable system memory
    auto hbuf = alpaka::allocBuf<Elem, Idx>(host, extent);
#elif use_1_b
    // 1.b allocate the host buffer in pinned system memory
    auto hbuf = alpaka::allocMappedBuf<Elem, Idx>(host, platform, extent);
#else
    // 1.c allocate the host buffer in async or cached system memory (CMS only)
    auto hbuf = alpaka::allocCachedBuf<Elem, Idx>(host, queue, extent);
#endif
    // 2. fill the host buffer buf
    std::memset(hbuf.data(), ...);
    // 3. copy the content of the host buffer to the device buffer
    alpaka::memcpy(queue, dbuf, hbuf);
    // 4. the host buffer goes out of scope before the asynchronous copy is guaranteed to complete
}

In principle we can observe different behaviours depending on how the buffer was allocated in step 1 and on which device back-end is being used:

Note: allocCachedBuf(host, queue, extent) is a CMS implementation similar to allocAsyncBuf(queue, extent). I'm working to improve its performance and eventually upstream it to alpaka :-)


Given that even such a simple example is error prone, we have been wondering how we could improve the situation.

A couple of ideas:

fwyzard commented 2 weeks ago

@makortel FYI

mehmetyusufoglu commented 2 weeks ago

Q1 - There are functions alpaka::allocAsyncBuf and alpaka::allocAsyncBufIfSupported, which use queues. Are we assuming that the user deliberately selected allocBuf for a reason?

Q2 - Can't we create something like alpaka::allocAsyncMappedBuf, similar to allocAsyncBuf?

fwyzard commented 2 weeks ago

Q1 - There are functions alpaka::allocAsyncBuf and alpaka::allocAsyncBufIfSupported, which use queues. Are we assuming that the user deliberately selected allocBuf for a reason?

Yes.

As a library, alpaka shouldn't dictate what users are supposed to do, though of course it can restrict what is or isn't supported. But it would be nice to be able to catch what is and isn't supported at compile time (better) or at run time.

Q2 - Can't we create something like alpaka::allocAsyncMappedBuf, similar to allocAsyncBuf?

There is no native support for this in CUDA, ROCm, etc. We have implemented it in the CMS code; it is what I am referring to as allocCachedBuf.