Hi. I tried to run the training on a SLURM cluster with multiple GPUs. The problem is that on clusters like this, the scheduler decides which GPUs to assign to you. The current code requires the GPU names to be passed as an argument, which does not work in that scenario.
I edited the code to simply accept whatever GPUs are available, regardless of their names. I think it would be nicer to have it this way. If you are interested, I can send you the quick fix.
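For reference, the change could look roughly like the sketch below (assuming a Python codebase; the function name is hypothetical). On SLURM, the scheduler exposes the allocated GPUs through the `CUDA_VISIBLE_DEVICES` environment variable, so the code can read that instead of taking device names on the command line:

```python
import os

def available_gpu_indices(env=None):
    """Return the GPU indices the scheduler exposed to this job.

    SLURM sets CUDA_VISIBLE_DEVICES for each job step, so reading it
    lets the training code adopt whatever GPUs were allocated rather
    than requiring their names as a command-line argument.
    (Hypothetical helper, not part of the current codebase.)
    """
    env = os.environ if env is None else env
    visible = env.get("CUDA_VISIBLE_DEVICES", "")
    return [int(i) for i in visible.split(",") if i.strip().isdigit()]
```

For example, if SLURM assigned GPUs 2 and 3, `CUDA_VISIBLE_DEVICES` would be `"2,3"` and the helper would return `[2, 3]`; an empty or unset variable yields an empty list, so the caller can fall back to CPU.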