
Citrinet model with LM to reduce the WER for microphone recorded audio #2039

Closed · kruthikakr closed 3 years ago

kruthikakr commented 3 years ago

Hi, I am using the stt_en_citrinet_1024 model and am able to get good transcripts. For audio recorded with a microphone, the WER varies from 3.5% to 15%. The audio contains names of people and places; how can I include these words in the model?

Any suggestions on the following aspects would be helpful:

  1. Preprocessing the audio
  2. Using an LM for decoding (is there any implementation in NeMo for Citrinet?)
  3. Post-processing steps, e.g. spell correction

Looking forward to your inputs.
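
For reference, I am transcribing roughly like this (a minimal sketch; the file name is a placeholder):

```python
# Minimal transcription sketch; the audio path is a placeholder.
import nemo.collections.asr as nemo_asr

# Load the pretrained Citrinet BPE model from NGC.
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(
    model_name="stt_en_citrinet_1024"
)

# Transcribe 16 kHz mono WAV files recorded with the microphone.
transcripts = asr_model.transcribe(["mic_recording.wav"])
print(transcripts[0])
```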

titu1994 commented 3 years ago

You could fine-tune Citrinet on the specific domain using the same tokenizer (if there is sufficient data).
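
A rough sketch of such fine-tuning (the manifest path and hyperparameters are placeholders, not a prescribed recipe):

```python
import pytorch_lightning as pl
from omegaconf import OmegaConf
import nemo.collections.asr as nemo_asr

# Start from the pretrained checkpoint so the tokenizer stays the same.
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("stt_en_citrinet_1024")

# Point the training data loader at a NeMo-style manifest of domain audio/text.
asr_model.setup_training_data(OmegaConf.create({
    "manifest_filepath": "domain_train.json",  # placeholder path
    "sample_rate": 16000,
    "batch_size": 16,
    "shuffle": True,
}))

# Use a small learning rate so the model adapts without drifting too far.
asr_model.setup_optimization(OmegaConf.create({"name": "adamw", "lr": 1e-4}))

trainer = pl.Trainer(max_epochs=10)  # add GPU/precision settings as needed
trainer.fit(asr_model)
```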

If you have some noise files, noise-robust training (the same method used for QuartzNet) can be applied to Citrinet.
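
If helpful, a sketch of how noise perturbation can be plugged into the training data config (the noise manifest path and SNR range are placeholders):

```python
from omegaconf import OmegaConf
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("stt_en_citrinet_1024")

# Same training data config as in the fine-tuning sketch, plus on-the-fly
# noise perturbation via the augmentor section.
asr_model.setup_training_data(OmegaConf.create({
    "manifest_filepath": "domain_train.json",
    "sample_rate": 16000,
    "batch_size": 16,
    "augmentor": {
        "noise": {
            "manifest_path": "noise_manifest.json",  # manifest of noise clips
            "prob": 0.5,        # apply noise to roughly half of the samples
            "min_snr_db": 0,
            "max_snr_db": 15,
        }
    },
}))
```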

For preprocessing, the inputs should be mono-channel 16 kHz WAV files. We find that attempting signal denoising before inference generally does not help much, and sometimes hurts due to the artifacts it introduces.
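
For example, a quick resampling sketch (using librosa/soundfile; file names are placeholders):

```python
# Convert an arbitrary recording to mono 16 kHz WAV before inference.
import librosa
import soundfile as sf

audio, sr = librosa.load("mic_recording_44k.wav", sr=16000, mono=True)
sf.write("mic_recording_16k.wav", audio, 16000)
```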

For language modelling with Citrinet (and BPE models in general), we plan to release code snippets to build a custom KenLM model and run beam search through steps similar to the offline ASR notebook. However, there are some significant differences and we have not compiled a clean script for this task yet. I will try to prioritize it in the coming weeks.
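
Roughly, the key difference for BPE models is that the LM training text has to be encoded with the ASR model's tokenizer, with each token id mapped to a single character, so that KenLM and the beam search decoder share the model's token set. A sketch of that encoding step (the offset, file names, and space-joining convention here are illustrative, not the final script):

```python
# Illustrative only: encode an LM corpus at the BPE-token level for KenLM.
import nemo.collections.asr as nemo_asr

TOKEN_OFFSET = 100  # assumed offset into printable unicode characters

asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("stt_en_citrinet_1024")
tokenizer = asr_model.tokenizer

with open("lm_corpus.txt") as fin, open("lm_corpus_bpe.txt", "w") as fout:
    for line in fin:
        # Map each BPE token id to a single character; space-separated here so
        # KenLM treats each token as a "word" (formatting convention assumed).
        token_ids = tokenizer.text_to_ids(line.strip())
        fout.write(" ".join(chr(i + TOKEN_OFFSET) for i in token_ids) + "\n")

# lm_corpus_bpe.txt would then be passed to KenLM's lmplz, and the same
# character mapping used when scoring hypotheses during beam search.
```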

There is also Transformer-based rescoring that can further reduce offline WER, though that pipeline is not ready yet.

@AlexGrinch is there an ETA (within a few months?) for when you expect the Transformer-based rescoring pipeline to be ready?

kruthikakr commented 3 years ago

Thank you for the response. We will try to write the script with an LM for BPE models. Any inputs or leads are much appreciated.

titu1994 commented 3 years ago

@VahidooX If you have a rough draft, could you create a gist and share it here when it's ready? We can clean it up in the actual PR.

kruthikakr commented 3 years ago

Can someone please share some details on this? Waiting for a response.

VahidooX commented 3 years ago

Created a PR to add support for training and evaluating an n-gram KenLM on top of BPE-based ASR models. It still needs documentation. https://github.com/NVIDIA/NeMo/pull/2066

VahidooX commented 3 years ago

The PR to support N-gram LMs for ASR models has been merged: https://github.com/NVIDIA/NeMo/pull/2066. It can do a grid search over the beam search decoder's hyperparameters to fine-tune them. The scripts support both character-level and BPE-level models. You may read more here: https://github.com/NVIDIA/NeMo/blob/main/docs/source/asr/asr_language_modelling.rst

You need to install the beam search decoders and KenLM to use this feature.

kruthikakr commented 3 years ago

Thank you very much.