I think the only dataset I've ever run for that many epochs is the toy dset, but I will look into this. Is this a very small dataset, or are you using large regularization settings? If you aren't using `--early_stopping`, you may want to try that to prevent long training times.
I'm not 100% sure whether that is an issue with GPU memory itself or with the logging of GPU memory stats to wandb...
Python is reference-counted (see https://discuss.pytorch.org/t/cuda-out-of-memory-on-the-8th-epoch/67288). This is not an issue with GPU memory: if it were, the Python process would crash outright with an OOM, which is often written to /var/log/syslog. In other words, you wouldn't get the `returned non-zero exit status 255` error message.
This is triggered by the `log_gpu_memory` flag passed to the PyTorch Lightning trainer in train.py. That flag is deprecated; if you still want to log these metrics, try switching to the `DeviceStatsMonitor` callback. See https://pytorch-lightning.readthedocs.io/en/stable/extensions/generated/pytorch_lightning.callbacks.DeviceStatsMonitor.html?highlight=DeviceStatsMonitor#pytorch_lightning.callbacks.DeviceStatsMonitor.
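A minimal sketch of the swap, assuming a plain PyTorch Lightning `Trainer` setup (the other trainer arguments here are illustrative, not the ones actually used in train.py):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import DeviceStatsMonitor

# Instead of the deprecated `log_gpu_memory` trainer argument, attach the
# DeviceStatsMonitor callback, which logs device stats (including GPU memory)
# to whatever logger the Trainer is configured with (e.g. wandb).
trainer = Trainer(
    max_epochs=100,          # illustrative value
    accelerator="gpu",
    devices=1,
    callbacks=[DeviceStatsMonitor()],
)
```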
Other options include simply removing that flag and periodically running an `nvidia-smi` query yourself in another bash window (sketch below). I couldn't find specific examples of why `nvidia-smi` would trigger that error, but it's likely not associated with GPU OOM.
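If you go the manual route, something like the following can be left running in a separate terminal. This is just a rough sketch; the query fields and polling interval are examples, and it's equivalent to running `nvidia-smi --query-gpu=... -l 30` directly in a shell:

```python
import subprocess
import time

# Poll GPU memory usage every 30 seconds via nvidia-smi and print it as CSV.
while True:
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=timestamp,memory.used,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())
    time.sleep(30)
```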
@jhillhouse92 Thank you, that's about what I figured. I'll just remove that option in the public version, it's only in there to help me pick hparams for each dataset. The best model is usually the biggest one that fits in memory. As long as the model doesn't crash with a clear OOM error on the first backward pass it's fine for most situations... not a super important thing to log.
After 80 epochs (8 hours), I got this error