AcademySoftwareFoundation / openvdb

OpenVDB - Sparse volume data structure and tools
http://www.openvdb.org/
Apache License 2.0

[REQUEST] make nanoVDB CUDA async allocation optional so it can be used on vGPU #1798

Open w0utert opened 6 months ago

w0utert commented 6 months ago

Is your feature request related to a problem? Please describe.

The current nanoVDB implementation uses functions such as cudaMallocAsync and cudaMemcpyAsync, for example in CudaDeviceBuffer when allocating or uploading data to the GPU. These functions are not available on a vGPU that does not have unified memory enabled, which is common for GPU-enabled Azure VMs where the GPU is shared/sliced between multiple instances. Running nanoVDB code on such a VM results in CUDA error 801 ('not supported') exceptions.

Describe the solution you'd like

Projects such as PyTorch typically guard the async code paths behind a switch that enables or disables them, with a fallback path that uses the synchronous functions. If nanoVDB had something similar, that would be the ideal solution, aside from any efficiency loss the synchronous fallback paths might incur.

Describe alternatives you've considered

For my situation there is not really an alternative: I am not in a position to change hypervisor settings to enable unified memory support, or to pick a different deployment target for the code I want to use with nanoVDB. The only option would be switching to a VM with a passthrough GPU instead of a vGPU, but again that is not under my control.

w0utert commented 6 months ago

Some more information/corrections:

Based on this, I created PR #1799 that introduces macros CUDA_MALLOC and CUDA_FREE, and a define NANOVDB_USE_SYNC_CUDA_MALLOC that can be set by the host build system to force synchronous CUDA allocations.
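The macros described above could plausibly look like the following. This is a hedged sketch of the approach, not the verbatim contents of PR #1799; the exact signatures and whether a stream argument is threaded through are assumptions.

```cpp
// Sketch only: select synchronous vs. stream-ordered CUDA allocation at
// compile time. NANOVDB_USE_SYNC_CUDA_MALLOC is set by the host build system.
#ifdef NANOVDB_USE_SYNC_CUDA_MALLOC
#  define CUDA_MALLOC(ptr, size, stream) cudaMalloc((ptr), (size))
#  define CUDA_FREE(ptr, stream)         cudaFree((ptr))
#else
#  define CUDA_MALLOC(ptr, size, stream) cudaMallocAsync((ptr), (size), (stream))
#  define CUDA_FREE(ptr, stream)         cudaFreeAsync((ptr), (stream))
#endif
```

With this in place, call sites in CudaDeviceBuffer can use CUDA_MALLOC/CUDA_FREE uniformly, and builds targeting a vGPU simply define NANOVDB_USE_SYNC_CUDA_MALLOC to avoid the unsupported stream-ordered allocator.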

This has been verified to work on the vGPU deployment target I'm using.