When I train with two GPUs using CUDA, I get an out-of-memory error: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 416.00 MiB (GPU 0; 23.64 GiB total capacity; 21.07 GiB already allocated; 405.69 MiB free; 22.77 GiB reserved in total by PyTorch). The message only mentions GPU 0, which suggests that only one GPU is actually being used. Is there a way to train on both GPUs simultaneously?