Closed — StephenLasky closed this issue 2 years ago
If you use `CUDA_VISIBLE_DEVICES=` you have essentially full control over what is going on, and it does not interfere with other processes on the GPUs. For example, I am able to run 8 data-generation pipelines using nvisii on an 8-GPU machine, where each GPU gets a single job. I do not think it would help to specify the device on the Python side, since it is easy to do from the command line.
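The one-job-per-GPU launch pattern described above can be sketched like this. The `-c` snippet here is a placeholder standing in for the real data-generation script, and the device indices are examples; each child process inherits its own `CUDA_VISIBLE_DEVICES`, so any CUDA library it initializes sees only that one device:

```python
import os
import subprocess
import sys

# Launch one worker per GPU. Each child gets a different CUDA_VISIBLE_DEVICES
# value, so a CUDA library started inside it only sees one device (as device 0).
# Replace the inline -c snippet with your actual pipeline script.
for gpu in range(8):
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    subprocess.run(
        [sys.executable, "-c",
         "import os; print('worker sees GPU', os.environ['CUDA_VISIBLE_DEVICES'])"],
        env=env, check=True,
    )
```

The same masking works identically from a shell (`CUDA_VISIBLE_DEVICES=3 python generate.py`); the subprocess form is just convenient when orchestrating many workers from one driver script.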
It appears that the only way to force this to use a single GPU is through a global CUDA environment setting. I imagine this would interfere with other CUDA jobs on the machine.

Would it be easy to implement fine-grained selection of the GPU device through the Python API?
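One workaround that already works from the Python side is to set the variable in-process before any CUDA context is created — a sketch, assuming the library reads the environment at initialization time (the device index is just an example):

```python
import os

# This must run before importing or initializing any CUDA-using library;
# once a CUDA context exists, changing the variable has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # example: expose only GPU 1

# import nvisii  # imported after this point, nvisii would see GPU 1 as device 0
```

Because the variable is scoped to this process, other CUDA jobs on the machine are unaffected, unlike a truly global setting.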