burchim / EfficientConformer

[ASRU 2021] Efficient Conformer: Progressive Downsampling and Grouped Attention for Automatic Speech Recognition
https://arxiv.org/abs/2109.01163
Apache License 2.0

Is the LM model expected to be at word level or at token level? #13

kafan1986 closed this issue 2 years ago

kafan1986 commented 2 years ago

I wanted to confirm whether the LM model is expected to be at word level or at token level. KenLM models are usually trained at word level, but in our case we are using a subword tokenizer (n=1000). Should I train the LM at token level or at word level?

burchim commented 2 years ago

Hi,

The LM used for rescoring should have the same encoding as the Conformer model. We used the NVIDIA NeMo toolkit to train a token-level 6-gram for our models: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html

The trick is to tokenize the training corpus with the corresponding BPE tokenizer and then replace each token with a special character, creating a new corpus. This new corpus can then be used to train a BPE n-gram: https://github.com/NVIDIA/NeMo/blob/stable/scripts/asr_language_modeling/ngram_lm/train_kenlm.py
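For illustration, here is a minimal sketch of that corpus conversion, assuming a SentencePiece BPE model at `tokenizer.model`, a plain-text corpus `corpus.txt`, and a character offset of 100 (all placeholders; the exact offset and formatting should follow the NeMo script linked above):

```python
# Sketch: turn a text corpus into a "one special character per BPE token" corpus,
# so a standard word-level n-gram tool (KenLM) effectively learns a token-level LM.
# tokenizer.model, corpus.txt and TOKEN_OFFSET are placeholders for this example.
import sentencepiece as spm

TOKEN_OFFSET = 100  # assumed offset; must be reused identically at decoding time

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")

with open("corpus.txt", encoding="utf-8") as src, \
     open("corpus_tokens.txt", "w", encoding="utf-8") as dst:
    for line in src:
        ids = sp.encode(line.strip(), out_type=int)  # BPE token IDs
        # map each token ID to a single Unicode character, one "word" per token
        dst.write(" ".join(chr(i + TOKEN_OFFSET) for i in ids) + "\n")
```

The converted file can then be passed to the NeMo `train_kenlm.py` script (or directly to KenLM's `lmplz`) to build the token-level n-gram.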

I added the missing 6-gram to the shared folders in case you would like to reproduce the paper results. You should be able to access it here.
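As a hedged illustration (not the repository's actual decoding code), such a token-level n-gram can be queried with the `kenlm` Python package by re-encoding a hypothesis with the same tokenizer and the same character offset used to build the training corpus; the paths and offset below are placeholders:

```python
# Sketch: score an ASR hypothesis with a token-level KenLM n-gram.
# Assumes the hypothesis is mapped through the same tokenizer and character
# offset that were used when building the n-gram training corpus.
import kenlm
import sentencepiece as spm

TOKEN_OFFSET = 100  # must match the offset used during LM training

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")
lm = kenlm.Model("6gram.arpa")  # placeholder path to the trained or downloaded LM

def lm_log_score(hypothesis: str) -> float:
    ids = sp.encode(hypothesis, out_type=int)
    mapped = " ".join(chr(i + TOKEN_OFFSET) for i in ids)
    return lm.score(mapped, bos=True, eos=True)  # log10 probability

print(lm_log_score("the quick brown fox"))
```

In an actual beam-search rescoring setup, this score would be interpolated with the model's acoustic score for each hypothesis using a tunable LM weight.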

Best, Maxime