Closed: hero-y closed this issue 2 years ago
When I train with 8 GPUs, gpu1-gpu7 each occupy about 5000 MB of memory, but gpu0 occupies about 16000 MB. Is this a problem?
Looking forward to your reply!
The code doesn't explicitly handle GPU memory allocation, @hero-y. That is done by PyTorch.
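For what it's worth, this memory pattern is typical when a training script uses `torch.nn.DataParallel` (an assumption here, since the repo's launcher isn't shown): replica outputs are gathered back to the first device, and the loss is computed there, so `cuda:0` carries extra memory that the other GPUs don't. A minimal, hypothetical sketch of that setup:

```python
import torch

def build_model():
    """Wrap a toy model in DataParallel when multiple GPUs are present.

    With DataParallel, inputs are scattered to all devices but outputs
    are gathered back onto cuda:0, where the loss is also computed --
    which is why device 0's memory footprint can be several times
    larger than the replicas'. (Toy Linear model used for illustration.)
    """
    model = torch.nn.Linear(1024, 1024)
    if torch.cuda.device_count() > 1:
        model = torch.nn.DataParallel(model)  # gathers outputs on device 0
    return model
```

If the imbalance is a problem in practice, `torch.nn.parallel.DistributedDataParallel` with one process per GPU is the usual alternative, since each process computes its own loss locally and memory stays roughly even across devices.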