iamsh4shank opened 4 months ago
1. Make sure you've got the latest NVIDIA GPU drivers installed, along with CUDA and cuDNN.
2. Install the CUDA toolkit, and make sure the CUDA version matches the one supported by your PyTorch installation.
3. Consider Anaconda or Miniconda to manage your Python environment; they help avoid conflicts with system packages.
4. Install PyTorch with GPU support, using the build that matches your CUDA installation.
5. If you're using multiple GPUs, make sure NVIDIA NCCL (NVIDIA Collective Communications Library) is available for optimized inter-GPU communication.
6. Set the necessary environment variables so your training script can run in multi-GPU mode.
7. Execute your training script with the appropriate launcher to utilize multiple GPUs.
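As a rough sketch of the last steps, assuming a single node with 4 GPUs and a script named `train.py` (the script name and port are placeholders, not from the original post), the environment variables and launch command might look like:

```shell
# Optional: restrict which GPUs are visible to the job
export CUDA_VISIBLE_DEVICES=0,1,2,3

# Optional: print NCCL communication details for debugging
export NCCL_DEBUG=INFO

# Rendezvous settings for single-node training (placeholder port)
export MASTER_ADDR=localhost
export MASTER_PORT=29500

# Launch one process per GPU with PyTorch's torchrun launcher
torchrun --nproc_per_node=4 train.py
```

`torchrun` spawns one worker process per GPU and sets the rank-related environment variables each process needs; the training script itself still has to initialize a process group (e.g. with the `nccl` backend) and wrap the model in `DistributedDataParallel` for this to have any effect.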
Hope it helps!
Context
I am training a ResNet50 on ImageNet-1k using this script. One epoch takes around 2 hours, and since I have to train for 90 epochs, the full run takes a very long time. I even tried distributing the training across 4 GPUs, but the epoch time stayed the same.
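For reference, a quick back-of-envelope check on the numbers above (assuming ideal linear scaling, which real runs won't quite reach): if 4-GPU training still takes ~2 hours per epoch, data parallelism is likely not engaged at all, since perfect scaling would cut the epoch time to roughly 30 minutes.

```python
# Rough estimate of training time with and without multi-GPU scaling.
# Figures are taken from the question; scaling is assumed ideal (linear).
hours_per_epoch = 2.0   # observed epoch time from the question
epochs = 90
num_gpus = 4

total_single = hours_per_epoch * epochs        # total hours on one GPU
ideal_multi = total_single / num_gpus          # total hours with perfect 4-GPU scaling
minutes_per_epoch_multi = hours_per_epoch / num_gpus * 60

print(f"single GPU: {total_single:.0f} h total")
print(f"ideal 4-GPU scaling: {ideal_multi:.0f} h total "
      f"(~{minutes_per_epoch_multi:.0f} min/epoch)")
```

If the per-epoch time does not drop at all when adding GPUs, it usually means every process is training the full dataset independently (e.g. no `DistributedSampler`), rather than splitting the work.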