RAFT contains fundamental widely-used algorithms and primitives for machine learning and information retrieval. The algorithms are CUDA-accelerated and form building blocks for more easily writing high performance applications.
Is your feature request related to a problem? Please describe.
For IVF-Flat and IVF-PQ index building, large datasets are provided in host memory or as an mmap-ed file. After the cluster centers are trained, both methods stream through the whole dataset twice. Currently there is no overlap between host-to-device copies and additional data processing on the GPU.
Describe the solution you'd like
Use pinned buffers to copy the data to the GPU and overlap it with GPU side computation.
Additional context
Since the dataset can be larger than the physical (host) memory of the system, it is not possible to load the whole dataset into pinned memory.
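The requested overlap could follow the classic double-buffering pattern: two pinned staging buffers and a dedicated copy stream, so the host-side gather and the H2D copy of batch i+1 run while the GPU processes batch i. A minimal sketch using the CUDA runtime API, requiring a GPU to run; `stream_dataset`, `process_batch`, and all buffer names are illustrative, not RAFT APIs:

```cpp
#include <cuda_runtime.h>
#include <algorithm>
#include <cstddef>
#include <cstring>

// Double-buffered streaming sketch (not RAFT's actual implementation): while
// the GPU processes batch i, the CPU gathers batch i+1 into a pinned buffer
// and enqueues its H2D copy on a separate copy stream.
void stream_dataset(const float* src,  // pageable or mmap-ed dataset
                    std::size_t n_rows, std::size_t dim, std::size_t batch_rows,
                    void (*process_batch)(const float*, std::size_t, cudaStream_t))
{
  const std::size_t batch_bytes = batch_rows * dim * sizeof(float);
  float *pinned[2], *dev[2];
  cudaEvent_t copy_done[2], compute_done[2];
  cudaStream_t copy_stream, compute_stream;
  cudaStreamCreate(&copy_stream);
  cudaStreamCreate(&compute_stream);
  for (int i = 0; i < 2; ++i) {
    cudaMallocHost(&pinned[i], batch_bytes);  // pinned staging buffer
    cudaMalloc(&dev[i], batch_bytes);
    cudaEventCreate(&copy_done[i]);
    cudaEventCreate(&compute_done[i]);
  }
  std::size_t b = 0;
  for (std::size_t off = 0; off < n_rows; off += batch_rows, ++b) {
    const int buf = b & 1;
    const std::size_t rows = std::min(batch_rows, n_rows - off);
    // pinned[buf] is reusable once its previous H2D copy has finished.
    cudaEventSynchronize(copy_done[buf]);
    std::memcpy(pinned[buf], src + off * dim, rows * dim * sizeof(float));
    // dev[buf] is reusable once the previous compute on it has finished.
    cudaStreamWaitEvent(copy_stream, compute_done[buf], 0);
    cudaMemcpyAsync(dev[buf], pinned[buf], rows * dim * sizeof(float),
                    cudaMemcpyHostToDevice, copy_stream);
    cudaEventRecord(copy_done[buf], copy_stream);
    // Compute waits only for its own batch's copy, so the next batch's copy
    // (issued on copy_stream) overlaps with this batch's kernels.
    cudaStreamWaitEvent(compute_stream, copy_done[buf], 0);
    process_batch(dev[buf], rows, compute_stream);
    cudaEventRecord(compute_done[buf], compute_stream);
  }
  cudaStreamSynchronize(compute_stream);
  for (int i = 0; i < 2; ++i) {
    cudaEventDestroy(copy_done[i]);
    cudaEventDestroy(compute_done[i]);
    cudaFree(dev[i]);
    cudaFreeHost(pinned[i]);
  }
  cudaStreamDestroy(copy_stream);
  cudaStreamDestroy(compute_stream);
}
```

Because only two batches are staged at a time, pinned memory usage stays at two batch-sized buffers regardless of dataset size, which addresses the constraint that the full dataset cannot be pinned.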
Index subsampling already uses pinned buffers to overlap vector gathering and H2D copies: https://github.com/rapidsai/raft/pull/2077/commits/548555766b9acb485000c24e32816d6d874f58b5
IVF-Flat and IVF-PQ stream through the whole dataset here:
We use batch_load_iterator to copy the data to host. Ideally, we could improve batch_load_iterator to prefetch the data into a pinned buffer.
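One way the prefetch could look on the host side: a double-buffered loop that fills one pinned buffer on a background thread while the caller consumes the other. This is a sketch only; `prefetched_batches`, `consume`, and the buffer handling are hypothetical and not the batch_load_iterator API (in real use, the two buffers would come from cudaMallocHost):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <functional>
#include <future>

// Hypothetical prefetching loop (not RAFT's batch_load_iterator): while the
// caller consumes batch i from one buffer, a background host thread copies
// batch i+1 from the (possibly mmap-ed) source into the other buffer.
// consume(ptr, rows) would do the H2D copy and GPU work for one batch.
void prefetched_batches(const float* src, std::size_t n_rows, std::size_t dim,
                        std::size_t batch_rows, float* pinned[2],
                        const std::function<void(const float*, std::size_t)>& consume)
{
  auto fill = [&](int buf, std::size_t off) -> std::size_t {
    const std::size_t rows = std::min(batch_rows, n_rows - off);
    std::memcpy(pinned[buf], src + off * dim, rows * dim * sizeof(float));
    return rows;
  };
  // Kick off the fill of the first batch.
  std::future<std::size_t> pending =
      std::async(std::launch::async, fill, 0, std::size_t{0});
  for (std::size_t off = 0, b = 0; off < n_rows; off += batch_rows, ++b) {
    const int buf = static_cast<int>(b & 1);
    const std::size_t rows = pending.get();  // wait for this batch's fill
    const std::size_t next_off = off + batch_rows;
    if (next_off < n_rows) {
      // Prefetch the next batch into the other buffer in the background.
      pending = std::async(std::launch::async, fill, buf ^ 1, next_off);
    }
    consume(pinned[buf], rows);  // overlaps with the prefetch above
  }
}
```

The same idea could live inside batch_load_iterator itself: advancing the iterator would hand out the already-filled buffer and trigger the fill of the next one.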