Closed: monitor22 closed this issue 1 month ago
Increase your batch size and watch the magic happen! It's an empirical process but quite efficient ;)
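For reference, a minimal sketch of what "increasing the batch size" means in a standard PyTorch `DataLoader` setup (the dataset here is synthetic, purely for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic dataset for illustration: 1024 samples, 32 features each
dataset = TensorDataset(torch.randn(1024, 32), torch.randint(0, 2, (1024,)))

# A larger batch_size processes more samples per step, which typically
# improves GPU utilization (until you hit the GPU memory limit).
loader = DataLoader(dataset, batch_size=128, shuffle=True)

batch_x, batch_y = next(iter(loader))
print(batch_x.shape)  # torch.Size([128, 32])
```

If a larger batch triggers out-of-memory errors, back off until training fits; it is an empirical trade-off, as noted above.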
Can I let it use more GPU?
@TugdualKerjan has answered your question :))
I have increased the batch size to 128 and it's been like this for ~12 hours.
It appears that the training process may not be utilizing the GPU as expected. Here are some steps to diagnose and potentially resolve the problem:
Check GPU availability: Run `torch.cuda.is_available()` to verify that PyTorch can detect an available GPU.
Verify CUDA installation: Ensure that CUDA is properly installed on your system and that your PyTorch version is built with CUDA support. You can check this by running:
```python
import torch
print(torch.__version__)
print(torch.version.cuda)
```
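Putting the two checks above together, a quick diagnostic script might look like this (the `get_device_name` call is an extra addition for context, not part of the steps above):

```python
import torch

# Quick GPU diagnostic: confirms PyTorch's build and device visibility.
print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA version PyTorch was built with (None on CPU-only builds)
print(torch.cuda.is_available())  # True only if a usable GPU is detected

if torch.cuda.is_available():
    # Name of the first visible GPU, e.g. "NVIDIA GeForce RTX 3090"
    print(torch.cuda.get_device_name(0))
```

If `torch.version.cuda` prints `None`, you have a CPU-only build of PyTorch and no amount of batch-size tuning will move work onto the GPU; reinstall a CUDA-enabled build first.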