Closed · jennydaman closed this issue 1 year ago
Currently, the tinycudann modules are first allocated on the default GPU (cuda:0) and then moved to the desired device (e.g., cuda:1). This won't work if cuda:0 does not have enough free memory, so the default GPU needs to be changed before the modules are created.
Another solution (which is actually the one recommended by PyTorch) is to set the environment variable CUDA_VISIBLE_DEVICES='1' and then run the algorithm with device=0.
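A minimal sketch of that second workaround (an illustration, not code from nesvor; it assumes the variable is set before torch/tiny-cuda-nn are imported, since CUDA_VISIBLE_DEVICES is read when CUDA initializes):

```python
import os

# Assumption: this runs before importing torch/tinycudann. Exposing only
# physical GPU 1 makes it appear to the process as cuda:0, so tinycudann's
# default allocation on "GPU 0" lands on the intended card.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# With the remapping in place, device index 0 refers to physical GPU 1.
device_index = 0
print(f"CUDA_VISIBLE_DEVICES={os.environ['CUDA_VISIBLE_DEVICES']}, "
      f"using cuda:{device_index}")
```

Alternatively, calling torch.cuda.set_device(1) before constructing the modules changes the process-wide default device, which corresponds to the first approach described above.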
nesvor has the option `--device` to specify which GPU nesvor should use. However, the GPU selection is not passed down to the tiny-cuda-nn functions, which always attempt to use GPU#0.

We have a machine with 2 GPUs. GPU#0 is currently in use, so we try to run `nesvor --device 1`. However, the following exception occurs:

I used nvtop to monitor GPU usage during the runtime of NeSVoR.
[nvtop timeline: nesvor started around -110s and crashed around -60s]