NVIDIA / cccl

CUDA Core Compute Libraries
https://nvidia.github.io/cccl/

Provide span abstractions for host and device memory #752

Open · jrhemstad opened this issue 3 years ago

jrhemstad commented 3 years ago

I would like Thrust to provide the equivalent of a std::span, i.e., a non-owning view of a contiguous sequence of objects.

Like host_vector and device_vector, I believe there should be separate host_span and device_span (and universal?) classes to indicate if the data is safe to touch from host or device.

A key requirement is that device_span must be passable by value and usable directly in CUDA kernels, e.g.,

__global__ void kernel(device_span<int> a, device_span<const int> b){
   const unsigned tid = blockIdx.x * blockDim.x + threadIdx.x; // global thread index
   if(tid < b.size()){
      a[tid] = b[tid] * 42;
   }
}
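
For illustration, a minimal sketch of what such a non-owning device_span could look like (hypothetical; this is not an existing Thrust type), just enough to make the kernel above self-contained:

#include <cstddef>

template <typename T>
class device_span {
public:
    // Non-owning view over `size` elements of device memory starting at `data`.
    __host__ __device__ device_span(T* data, std::size_t size) : data_(data), size_(size) {}

    // Element access only makes sense on the device for a device span.
    __device__ T& operator[](std::size_t i) const { return data_[i]; }

    __host__ __device__ std::size_t size() const { return size_; }
    __host__ __device__ T* data() const { return data_; }

private:
    T*          data_;
    std::size_t size_;
};

On the host, one might construct it from a thrust::device_vector<int> vec as device_span<int>(thrust::raw_pointer_cast(vec.data()), vec.size()) and pass it to the kernel by value.
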
hcedwar commented 3 years ago

Another strategy is to follow the cuda::atomic<T,scope> vs. cuda::std::atomic<T> pattern for extension; for example, cuda::span<T,Extent,Property> where Property denotes the memory space.
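
For reference, the existing pattern looks like this in libcu++ today (the cuda::atomic part is real API; the span analog in the trailing comment is purely hypothetical):

#include <cuda/atomic>
#include <cuda/std/atomic>

int main()
{
    // Existing libcu++ pattern: a std-shaped type plus an extended variant with
    // an extra template parameter (the thread scope).
    cuda::std::atomic<int> a{0};                         // scope defaults to system
    cuda::atomic<int, cuda::thread_scope_device> b{0};   // scoped extension

    a.fetch_add(1);
    b.fetch_add(1);

    // Hypothetical analog for spans (no such type exists today):
    //   cuda::span<T, Extent, Property>   // Property would tag the memory space
    // alongside a standard-shaped cuda::std::span<T, Extent>.
    return 0;
}
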

griwes commented 3 years ago

@hcedwar yeah, I think we'll end up doing something like that for libcu++; however, the Thrust types are more involved and should follow the pattern of allowing access to device-only memory from the host by copying it when used with non-managed memory. These would serve a slightly different usage pattern than the libcu++ ones.
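
For context, a minimal example of the existing Thrust behavior being referenced: device_vector elements can be touched from host code, with a copy happening under the hood:

#include <thrust/device_vector.h>
#include <cstdio>

int main()
{
    thrust::device_vector<int> v(4, 7);   // storage lives in device memory

    // Host-side element access still works: operator[] returns a
    // thrust::device_reference proxy, and reading or writing it performs a
    // host<->device copy behind the scenes.
    v[2] = 42;
    int x = v[2];

    std::printf("%d\n", x);   // prints 42
    return 0;
}
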

brycelelbach commented 3 years ago

I think Carter is correct that we will have tagged span types, just as vector_base and pointer are tagged.

alliepiper commented 3 years ago

@hschwane submitted https://github.com/NVIDIA/thrust/pull/1407, which implements a vector_reference that provides a non-owning view, similar to span. We'll wait for the proper span implementation to settle instead of merging a temporary workaround into main, but folks may be interested in that patch in the meantime.

fortminors commented 11 months ago

Hello! Is there any news on implementing this?

jrhemstad commented 11 months ago

> Hello! Is there any news on implementing this?

libcu++ now provides <cuda/std/span>: https://godbolt.org/z/113Mhfv9h

However, there is no device- or host-specific span type yet. That is something we're still discussing, though.
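
For reference, a minimal sketch along the lines of the Godbolt link: cuda::std::span can be passed to a kernel by value, though (as noted above) it carries no information about which memory space the data lives in. The host-side launch in the trailing comment is illustrative only:

#include <cuda/std/span>

__global__ void scale(cuda::std::span<const int> in, cuda::std::span<int> out)
{
    const unsigned tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < in.size()) {
        out[tid] = in[tid] * 42;
    }
}

// Host side (illustrative): wrap raw device pointers d_in/d_out of length n in spans
// and pass them to the kernel by value.
//   scale<<<blocks, threads>>>(cuda::std::span<const int>(d_in, n),
//                              cuda::std::span<int>(d_out, n));
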