giorgionicola opened 3 years ago
The GPU selection happens here: https://github.com/fzi-forschungszentrum-informatik/gpu-voxels/blob/master/packages/gpu_voxels/src/gpu_voxels/helpers/cuda_handling.cu#L102
The default behavior is to select the first device according to NVIDIA's ordering, which usually picks the highest-performance one if different models are present.
If you want to modify the behavior, this is the place to do it.
In the upcoming release there will also be a slight improvement to this function, but the general operation stays the same.
GPU-Voxels is designed to use only one GPU, even if there are many available. What is your plan?
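As a hedged sketch of what a modified selection at that point could look like: the function name and the free-memory heuristic below are my own assumptions, not GPU-Voxels code; only standard CUDA runtime calls are used.

```cuda
#include <cuda_runtime.h>

// Hypothetical replacement for the "always take device 0" logic:
// pick the visible device with the most free memory, which is a
// reasonable proxy for "least occupied" on a shared machine.
int selectLeastLoadedDevice()
{
  int count = 0;
  if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
  {
    return -1; // no usable CUDA device
  }

  int best = 0;
  size_t best_free = 0;
  for (int dev = 0; dev < count; ++dev)
  {
    // cudaMemGetInfo reports on the *current* device, so switch first.
    cudaSetDevice(dev);
    size_t free_bytes = 0, total_bytes = 0;
    if (cudaMemGetInfo(&free_bytes, &total_bytes) == cudaSuccess
        && free_bytes > best_free)
    {
      best_free = free_bytes;
      best = dev;
    }
  }
  cudaSetDevice(best);
  return best;
}
```

Note that free memory is only a heuristic; it does not capture compute load, and another process may allocate memory between the query and your own allocations.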
I am simply using a PC shared with other people in the lab who also use the GPUs, so I would like to choose which GPU to use, since some may already be occupied.
I have found that defining the environment variable CUDA_VISIBLE_DEVICES selects the desired GPU.
Thanks for your input, I didn't know about the CUDA_VISIBLE_DEVICES environment variable. It restricts the set of visible GPUs for any CUDA application. Please let me know if you encounter any issues with this.
To monitor which processes are mapped to which GPUs, you can check the output of nvidia-smi.
For others interested in this feature, NVIDIA describes it here: https://developer.nvidia.com/blog/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/
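For concreteness, a minimal shell sketch of the pattern described above; `./gpu_voxels_app` is a placeholder binary name, not something shipped by the project:

```shell
# Expose only physical GPU 1 to the process; inside the process,
# that GPU is renumbered as device 0. (Placeholder binary name.)
# CUDA_VISIBLE_DEVICES=1 ./gpu_voxels_app

# CUDA_VISIBLE_DEVICES is an ordinary environment variable, so it can
# be set per command. Using `sh -c 'echo ...'` as a stand-in for a
# CUDA application to show that the child process sees the value:
CUDA_VISIBLE_DEVICES=2,0 sh -c 'echo "visible: $CUDA_VISIBLE_DEVICES"'
# prints: visible: 2,0

# Check which processes are currently running on which GPU:
# nvidia-smi
```

With `CUDA_VISIBLE_DEVICES=2,0`, a CUDA application sees physical GPUs 2 and 0 as devices 0 and 1, in that order, so GPU-Voxels' default "pick device 0" behavior then lands on physical GPU 2.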
I am working on a PC with multiple GPUs; how can I set which device is used by GPU-Voxels?