Open dhoore123 opened 2 weeks ago
I have the same problem.
Can you also try setting max_steps to something other than -1, e.g. 100000? Let us know if this helps.
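For reference, the suggested change amounts to something like this in the trainer section of the config (a minimal sketch; the key names follow common NeMo example configs, but your YAML layout may differ):

```yaml
trainer:
  max_steps: 100000  # a finite step budget instead of the default -1
```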
Setting max_steps as suggested seems to do the trick. Training now runs. Thanks! I'll close the ticket once I see some epochs completing successfully.
I finally got training running for a few (pseudo-)epochs now. Even though I am running on 2 80GB GPUs, I had to tune down the batch_duration to 750, with batch_size removed from the configuration. The GPU ran out of memory with higher values. I did not expect this as the example in the nvidia docs suggests using a batch_duration of 1100 for a 32GB GPU.
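For context, the change described above would look something like this in the train_ds section of the config (the key names follow the NeMo lhotse dataloading docs; treat the exact nesting as an assumption):

```yaml
model:
  train_ds:
    use_lhotse: true
    batch_duration: 750  # reduced from 1100; total seconds of audio per batch
    # batch_size removed: let dynamic batching pack batches by duration alone
```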
> I had to tune down the batch_duration to 750, with batch_size removed from the configuration.
It seems that your actual batch sizes became larger after removing the batch_size constraint, leading to the OOM. This is a net benefit: despite decreasing batch_duration, you are still enjoying larger batch sizes.
> I did not expect this as the example in the nvidia docs suggests using a batch_duration of 1100 for a 32GB GPU.
The maximum possible batch_duration setting is determined by several factors:
The setting of 1100s was specific to FastConformer-L CTC+RNN-T trained on ASRSet 3. With a different model, data, objective function, etc., it is expected that you may need to tune it again. I am hoping to simplify the tuning process in the future.
Thanks for your reply, pzelasko. It reassures me that this batch_duration value does not seem odd to you and does not point to something I did wrong. On a different note: the effective batch size is normally defined as batch_size x accumulate_grad_batches (or fused_batch_size in the case of hybrid training?) x nr_of_gpus, which makes the number of steps per epoch a function of the number of GPUs. When using lhotse, the number of steps in a "pseudo" epoch looks to be the same regardless of the number of GPUs. Does this mean that the amount of data seen in one "pseudo" epoch depends on the number of GPUs one uses, or is lhotse spreading the same amount of data over fewer effective batches per step when running on more GPUs?
It means that if you keep the “pseudoepoch” size constant, the amount of data seen during a “pseudoepoch” is proportional to the number of GPUs. Generally I don’t encourage thinking in epochs in this flavor of data loading; the only thing that counts is the number of updates. And yeah, the total batch duration is the product of the number of GPUs, batch_duration, and the gradient accumulation factor.
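The arithmetic above can be sketched in a few lines (the variable names are illustrative, not NeMo API; the numbers match the setup discussed in this thread):

```python
# Total duration of audio consumed per optimizer update under lhotse
# dynamic batching: the product of three factors discussed above.
num_gpus = 2                  # DDP world size (2x 80GB GPUs in this thread)
batch_duration = 750.0        # seconds of audio per GPU per step
accumulate_grad_batches = 1   # gradient accumulation factor

total_duration_per_update = num_gpus * batch_duration * accumulate_grad_batches
print(total_duration_per_update)  # 1500.0 seconds of audio per update
```

Scaling to more GPUs at a fixed batch_duration therefore increases the data seen per update (and per "pseudoepoch"), rather than shortening the pseudoepoch.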
I am trying to use lhotse when training a hybrid fast conformer model. The error is:

  File "/usr/local/lib/python3.10/dist-packages/nemo/core/optim/lr_scheduler.py", line 870, in prepare_lr_scheduler
    num_samples = len(train_dataloader.dataset)
TypeError: object of type 'LhotseSpeechToTextBpeDataset' has no len()
The motivation is that I want to make use of the dynamic batching and the capability to weight several languages equally when training my multilingual hybrid fast conformer model, which the lhotse integration is advertised to provide.
I am using a Singularity container built from NVIDIA's nemo-24.01 Docker container; I also tried nemo-24.05 with the same result. This all runs in a Slurm environment using multiple GPUs on a single node, on an on-premises grid. I zipped and attached my YAML configuration file. When not using lhotse, the config works; "not using lhotse" means setting use_lhotse to false and commenting out the following three lhotse-related lines in the trainer:

  use_distributed_sampler: false
  limit_train_batches: 20000
  val_check_interval: 20000
The error suggests that something could be missing in the code (is __len__ not implemented for the class LhotseSpeechToTextBpeDataset?), which would point to an incomplete lhotse integration for my use case. If you find something missing or incorrect in my config, I would be happy to learn. I am not in a position to share the data, or parts of it, though; I am hoping the error message rings a bell as to what could be wrong here.
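The failure mode can be reproduced in plain Python without NeMo or torch: an iterable-style dataset defines __iter__ but not __len__, so calling len() on it raises exactly this kind of TypeError. The class name and the guard below are illustrative, not NeMo's actual code:

```python
class IterableStyleDataset:
    """Stands in for LhotseSpeechToTextBpeDataset: iterable, but unsized."""
    def __iter__(self):
        yield from range(3)

ds = IterableStyleDataset()
try:
    len(ds)
except TypeError as e:
    print(e)  # object of type 'IterableStyleDataset' has no len()

# Code that must handle such datasets (like an LR-scheduler setup) would
# need to guard the len() call, e.g.:
num_samples = len(ds) if hasattr(ds, "__len__") else None
```

This is consistent with the dataset being iterable-style by design under lhotse dynamic batching, where the number of samples is not known up front.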
FastConformer-Hybrid-Transducer-CTC-BPE-Streaming-multi-60-lhotse.zip