OpenNMT / OpenNMT-py

Open Source Neural Machine Translation and (Large) Language Models in PyTorch
https://opennmt.net/
MIT License

Fixed OOM list out of range bug #2550

Open alexis-allemann opened 5 months ago

alexis-allemann commented 5 months ago

Issue #2549

vince62s commented 5 months ago

Thanks! I get your point, but the risk is that training keeps running through many OOMs without anyone realizing it. What were the circumstances under which you ran into this?

alexis-allemann commented 5 months ago

I encountered this error during a standard experiment while training a translation model. I had set a batch size that almost completely filled the memory of my GPUs. After a few steps (about a thousand), one batch triggered an OOM error on one of my GPUs, causing the training process to abort. It's not very practical to have training interrupted by an infrequent OOM error.

I've updated my pull request to recompute the gradients with a new batch, which seems like a better approach than just filling the tensors with zeros. Perhaps we could also add an option specifying the number of allowed attempts before terminating training, e.g. opt.max_oom_batch_retries? What do you think?
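For what it's worth, here is a rough, single-GPU sketch of the retry idea; the function name, the batch_iter argument, and the default retry count are made up for illustration and are not taken from OpenNMT-py's trainer:

```python
import torch

def training_step_with_retry(model, criterion, optimizer, batch_iter,
                             max_oom_batch_retries=3):
    """Run one training step, retrying with a fresh batch on CUDA OOM.

    max_oom_batch_retries plays the role of the proposed
    opt.max_oom_batch_retries: after that many consecutive OOM failures
    the exception is re-raised and training aborts as before.
    """
    for attempt in range(max_oom_batch_retries + 1):
        src, tgt = next(batch_iter)  # pull a fresh batch on every attempt
        try:
            optimizer.zero_grad(set_to_none=True)
            loss = criterion(model(src), tgt)
            loss.backward()
            optimizer.step()
            return loss.item()
        except torch.cuda.OutOfMemoryError:  # older PyTorch raises RuntimeError instead
            if attempt == max_oom_batch_retries:
                raise  # out of retries: abort as the current code does
            torch.cuda.empty_cache()  # release cached blocks before retrying
```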

vince62s commented 5 months ago

You can try this approach, but then run it with a batch size you know is too big, so that it reliably triggers OOM. I don't think it is bulletproof and it will likely raise other exceptions; just saying, but I'm interested to know. NB: are you using sentence or token batch sizes?

alexis-allemann commented 5 months ago

You're right. I've just tried it, and it doesn't seem like such a good idea: a timeout now occurs in torch.distributed.all_gather because the processes deadlock... To answer your other question, I use token batch sizes.
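For illustration, a toy sketch of that failure mode: collective ops such as torch.distributed.all_gather must be entered by every rank, so if one rank skips the call while it redoes a batch after an OOM, the other ranks block inside the collective until the process-group timeout fires (the helper below is made up for illustration and is not part of the pull request):

```python
import torch
import torch.distributed as dist

def gradient_sync(rank_is_retrying, grad):
    """Toy illustration of the deadlock: every rank must enter all_gather,
    so a rank that skips it (e.g. while redoing a batch after an OOM)
    leaves the other ranks hanging until the collective times out."""
    if rank_is_retrying:
        # This rank never reaches the collective -> the others block below.
        return None
    gathered = [torch.zeros_like(grad) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, grad)  # blocks until *all* ranks call it
    return gathered
```

Making the retry distributed-safe would presumably require all ranks to agree on whether to redo the step (for example by all-reducing an "OOM happened" flag) before any gradient synchronization, which is essentially why the naive per-rank retry deadlocks.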