Closed mfredriksz closed 3 years ago
Closing the issue because I realized the training was actually running; there just wasn't any logging happening. If someone else runs into this, check your output directory — there should be sample images that get updated throughout training.
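For anyone who wants a quick way to confirm a silent run is still making progress, here is a minimal sketch that polls for recently modified files under the output directory. The directory path and function names here are my own illustration, not anything from the repo:

```python
import os
import time


def newest_mtime(directory):
    """Return the most recent modification time of any file under directory (0.0 if none)."""
    latest = 0.0
    for root, _dirs, files in os.walk(directory):
        for name in files:
            latest = max(latest, os.path.getmtime(os.path.join(root, name)))
    return latest


def is_training_progressing(directory, window_seconds=600):
    """True if any file under directory changed within the last window_seconds."""
    latest = newest_mtime(directory)
    return latest > 0 and (time.time() - latest) < window_seconds
```

Pointing this at wherever `train_seg_gan.py` writes its samples (and picking a window a bit longer than the time between sample dumps) tells you whether the run is alive even when nothing is printed to the console.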
@mfredriksz Just curious, which GPU are you using for training this? How much GPU memory do you need to use to train this and does the code use multiple GPUs?
Hello,
I am facing an issue while running `train_seg_gan.py` (on both single and multiple GPUs): training reaches the first iteration and then gets stuck there. My GPU utilization remains constant and there is no further logging. This is the output I am getting:
Once it reaches this point, nothing further happens. I ended up canceling the run after an hour of being stuck here. I am using the CelebAMask dataset for training.
PyTorch 1.4.0, CUDA 11.0, Python 3.6.13
I appreciate any help you're able to provide!