Open · che85 opened this issue 1 month ago
The script uses the first available GPU. We could use the `CUDA_VISIBLE_DEVICES`
environment variable, but I am not sure how that behaves when the server launches a subprocess via `sys.executable`.
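For what it's worth, `CUDA_VISIBLE_DEVICES` does carry over to a subprocess as long as it is present in the environment passed at launch, and since CUDA is initialized fresh in the child process the restriction takes effect there. A minimal sketch of that pattern (the script name `run_inference.py` is hypothetical):

```python
import os
import subprocess
import sys

# Copy the parent's environment and override CUDA_VISIBLE_DEVICES so the
# child process only sees the selected GPU (which it will see as device 0).
env = os.environ.copy()
env["CUDA_VISIBLE_DEVICES"] = "1"

subprocess.run([sys.executable, "run_inference.py"], env=env, check=True)
```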
Since we already pass several configuration parameters via command-line arguments, we could just add one more optional `device`
argument:
```python
import torch

def main(model_file,
         image_file,
         result_file,
         save_mode=None,
         image_file_2=None,
         image_file_3=None,
         image_file_4=None,
         device=None,
         **kwargs):
    ...
    # Keep the current behavior (first GPU, or CPU if no GPU is present)
    # unless an explicit device was requested.
    if device is None:
        device = torch.device("cpu") if torch.cuda.device_count() == 0 else torch.device("cuda:0")
    else:
        device = torch.device(device)
    ...
```
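For illustration, a minimal sketch of how the flag could be wired up on the CLI side, assuming `argparse` is used (the actual argument names in the script may differ):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--model_file", required=True)
parser.add_argument("--image_file", required=True)
parser.add_argument("--result_file", required=True)
# Accepts anything torch.device() understands, e.g. "cpu" or "cuda:1".
parser.add_argument("--device", default=None,
                    help="device to run inference on, e.g. cuda:1; "
                         "defaults to the current first-GPU/CPU behavior")
args = parser.parse_args()

main(**vars(args))
```

A plain string argument keeps the change backward compatible: when `--device` is omitted, the existing first-GPU/CPU fallback applies unchanged.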
It would be helpful to have an option to specify which GPU to use when running inference on a machine with multiple GPUs. In my case, I am running multiple MONAILabel servers, each with its own dedicated GPU.