ThomasBaruzier opened 2 weeks ago
Change the GPU id on line 7 of the code: https://github.com/ZFTurbo/MVSEP-MDX23-music-separation-model/blob/main/inference.py#L7
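A minimal sketch of what such a PR could do, assuming the script currently pins the GPU by assigning `CUDA_VISIBLE_DEVICES` unconditionally at import time (the exact mechanism on line 7 is not shown in this thread): only set a default when the variable is absent, so a value exported by the user survives.

```python
import os

# Respect an externally provided CUDA_VISIBLE_DEVICES instead of
# unconditionally overwriting it. setdefault only assigns the fallback
# "0" when the variable is not already set in the environment.
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0")
```

Note that once `CUDA_VISIBLE_DEVICES=1` is honored, the process sees that GPU renumbered as device 0, so the rest of the code can keep addressing `cuda:0`.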
May I do a PR to support CUDA_VISIBLE_DEVICES?
Yes, you can.
Alright
Another, unrelated question: is it normal that --large_gpu sometimes exceeds 24 GB of VRAM on headless Linux? I'm using the default settings.
Hello,
I'd like to run the program in parallel using CUDA_VISIBLE_DEVICES=0 and CUDA_VISIBLE_DEVICES=1, with no success. Whatever the value, the first GPU is always selected. Is there a workaround or a possible solution?
Thank you