NVIDIA / NeMo

A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech).
https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html
Apache License 2.0

Support required for fine tuning cache aware streaming model #9027

Closed: rkchamp25 closed this issue 3 months ago

rkchamp25 commented 4 months ago

Hi, I want to fine-tune the "stt_en_fastconformer_hybrid_large_streaming_multi" model on my custom data, and I would like to know some best practices for fine-tuning a cache-aware streaming model.

  1. I am using audio clips of a fixed length (2 s). Is this a good choice, or can I use audio of different lengths? Roughly how much total audio is needed to fine-tune on data from a different domain (medical data)?
  2. Which tokenizer should I use? Should I fine-tune with a custom tokenizer built from the new data, or keep the default tokenizer and fine-tune only on the new audio? (A rough sketch of what I mean is below this list.)
  3. How can I make this model work with a different language? Can I fine-tune it directly on audio in another language, e.g. Spanish, or is there a different recommended way to use it for another language?
  4. How do I resume training for this model, since I cannot train in one go? Is that possible if I fine-tune with NeMo/examples/asr/speech_to_text_finetune.py?
  5. Should I use speech_to_text_finetune.py or speech_to_text_hybrid_rnnt_ctc_bpe.py? I want to try both the old vocabulary and a new one, and I need to stop and resume training multiple times.
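
Roughly, this is what I have in mind for the tokenizer question. It is only a sketch based on my reading of the docs; the `change_vocabulary` call, the placeholder tokenizer path, and the exp_manager resume flag are assumptions on my part, so please correct me if the cache-aware hybrid model works differently:

```python
# Sketch only: load the cache-aware streaming hybrid checkpoint and optionally
# swap in a custom BPE tokenizer before fine-tuning. Not verified end to end.
import nemo.collections.asr as nemo_asr

# Load the pretrained cache-aware streaming model from NGC.
model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="stt_en_fastconformer_hybrid_large_streaming_multi"
)

# Option A: keep the default tokenizer and only point the fine-tuning config's
# train_ds / validation_ds manifests at the new-domain audio.

# Option B: switch to a custom BPE tokenizer built on the new-domain (or Spanish) text.
# "tokenizers/medical_bpe" is a placeholder for a tokenizer directory I would build
# with scripts/tokenizers/process_asr_text_tokenizer.py.
model.change_vocabulary(
    new_tokenizer_dir="tokenizers/medical_bpe",
    new_tokenizer_type="bpe",
)

# Save the modified model so the fine-tuning script can start from it.
model.save_to("fastconformer_streaming_custom_tokenizer.nemo")

# For stopping and resuming, my plan is to keep the same exp_manager exp_dir/name and
# set exp_manager.resume_if_exists=true when launching
# examples/asr/speech_to_text_finetune.py with Hydra overrides.
```
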
github-actions[bot] commented 3 months ago

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.

github-actions[bot] commented 3 months ago

This issue was closed because it has been inactive for 7 days since being marked as stale.