❓ Questions and Help
Before asking:
What is your question?
When I run the command in the Code section, the hypotheses are all empty, resulting in a WER of 100%. The output is as follows:
[Trie] Trie label number reached limit: 6
(the line above is repeated 13 times)
0%| | 0/3 [00:00<?, ?it/s]2024-01-07 08:43:53 | INFO | main | HYPO:
2024-01-07 08:43:53 | INFO | main | TARGET:
Code
python examples/speech_recognition/infer.py ./outputs/labelsDir \
  --task audio_finetuning \
  --nbest 1 \
  --path ./outputs/fine_tuneModel/checkpoint_best.pt \
  --gen-subset $subset \
  --results-path ./outputs/results/ \
  --w2l-decoder kenlm \
  --lm-model ./kenlmResult/3gram/corpus_cut_word.bin \
  --lm-weight 2 \
  --word-score -1 \
  --sil-weight 0 \
  --criterion ctc \
  --labels ltr \
  --max-tokens 400000 \
  --post-process letter \
  --lexicon ./data/hubert_trail/lexicon/finallexicon.txt
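In case it helps narrow things down: the repeated "[Trie] Trie label number reached limit: 6" warning comes from the lexicon-based decoder's trie, and it fires when more than a fixed number of words (here, 6) share the exact same spelling in the lexicon file. This is only a guess at the cause, not a confirmed diagnosis, but a quick sanity check of the lexicon is cheap. The sketch below is my own diagnostic, not part of fairseq; the lexicon path and the assumed line format (`<word> <token> <token> ... |`) are taken from the command above and may need adjusting:

```python
from collections import defaultdict


def find_crowded_spellings(lexicon_lines, limit=6):
    """Group lexicon entries by spelling; return the spellings that are
    shared by more than `limit` words (the condition that appears to
    trigger the Trie warning)."""
    by_spelling = defaultdict(list)
    for line in lexicon_lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip blank or malformed lines
        word, spelling = parts[0], tuple(parts[1:])
        by_spelling[spelling].append(word)
    return {s: w for s, w in by_spelling.items() if len(w) > limit}


# Small self-contained demo: 8 distinct words with one shared spelling.
demo = ["word%d s p e l l |" % i for i in range(8)]
crowded = find_crowded_spellings(demo)
for spelling, words in crowded.items():
    print(" ".join(spelling), "->", words)

# Usage against the real file (hypothetical path from the command above):
#   with open("./data/hubert_trail/lexicon/finallexicon.txt", encoding="utf-8") as f:
#       print(find_crowded_spellings(f))
```

If this reports many spellings shared by large groups of words, the lexicon may be malformed (e.g. spellings collapsed or truncated during generation), which could also explain why the decoder produces empty hypotheses.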
What have you tried?
When I try --w2l-decoder viterbi, it behaves the same as above.
What's your environment?
fairseq Version: main
PyTorch Version: 1.8.1+cu12
OS: Linux (Ubuntu 18.04)
How you installed fairseq: source
Build command you used (if compiling from source):
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./
Python version: 3.8
CUDA version: 12
GPU models and configuration:
Any other relevant information: