DavidDiazGuerra / gpuRIR

Python library for Room Impulse Response (RIR) simulation with GPU acceleration
GNU Affero General Public License v3.0

How to customize usage of GPU #23

Open Nelsonvon opened 3 years ago

Nelsonvon commented 3 years ago

Hi

The computation always runs on the first graphics card (cuda:0). Is there any way to choose which card is used?

Besides, I get an error while simulating wavs in a PyTorch DataLoader with multiple sub-processes (num_workers > 0). The processing breaks and returns an initialization error from gpuRIR. Has anyone else noticed this problem and found a way to solve it?

Thanks

Best regards, Nelson

DavidDiazGuerra commented 3 years ago

Hi Nelson,

At this time, the library doesn't include an option to choose the GPU. It would be a nice feature to add in the future, but I have neither the time to implement it right now nor a multi-GPU machine to test it on.
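As a possible workaround until the library supports this directly, the CUDA runtime itself lets you restrict which devices a process can see through the `CUDA_VISIBLE_DEVICES` environment variable. This is a generic CUDA mechanism, not a documented gpuRIR feature, so treat the sketch below as an assumption that gpuRIR (like most CUDA libraries) respects it; the variable must be set before the library initializes CUDA:

```python
import os

# Restrict the CUDA runtime to the second physical GPU. Inside this
# process, that GPU then appears as device 0 (cuda:0), which is the
# device gpuRIR uses. This must happen BEFORE importing any library
# that initializes CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# import gpuRIR  # import only after the environment variable is set
```

Alternatively, the same variable can be set from the shell when launching the script (`CUDA_VISIBLE_DEVICES=1 python my_script.py`), which avoids ordering concerns entirely.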

About the PyTorch DataLoader with multiple sub-processes: I haven't used gpuRIR in that context, but multiple sub-processes are typically used when the DataLoader runs on the CPU, so you can generate your batch on the CPU while the neural network runs on the GPU. Could you be running out of GPU memory?

Best regards, David

YaguangGong commented 2 years ago

I also encountered the DataLoader issue. It seems to be caused by the default start method of torch.multiprocessing: the CUDA runtime does not support the fork start method. Just use torch.multiprocessing.set_start_method() to switch from fork to spawn or forkserver. Here is the link: https://pytorch.org/docs/stable/notes/multiprocessing.html?highlight=set_start_method
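For illustration, the mechanism can be sketched with the standard-library `multiprocessing` module, which `torch.multiprocessing` wraps with the same `set_start_method` API. The key point is that spawned workers start a fresh interpreter instead of forking the parent, so they do not inherit an already-initialized CUDA context:

```python
import multiprocessing as mp

# Select the "spawn" start method before any worker processes are
# created. Forked children inherit the parent's initialized CUDA
# runtime, which CUDA does not support; spawned children start clean.
# force=True overrides a previously set method (e.g. the "fork"
# default on Linux).
mp.set_start_method("spawn", force=True)
print(mp.get_start_method())  # spawn
```

With PyTorch, the analogous call would be `torch.multiprocessing.set_start_method("spawn")`, made once at the top of the training script before the DataLoader with `num_workers > 0` is constructed. Note that spawned workers re-import the main module, so the training entry point should be guarded by `if __name__ == "__main__":`.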