yfyeung opened 1 year ago
Hi, have you got any results with phone-based models? I previously tried LibriSpeech, and the result was worse than BPE: with a pruned transducer I only got 4-5% WER on test-clean.
I tried the pruned transducer on GigaSpeech M, and the result was worse than BPE too.
| unit level | WER (dev & test) | LM | ngram-lm-scale | ppl (dev & test) | checkpoint |
|---|---|---|---|---|---|
| phone 76 | 13.15 & 13.46 | 3gram_pruned_1e8 | 0.235 | 192.176 & 213.068 | epoch 30, avg 7 |
| bpe 500 | 12.88 & 12.87 | - | - | - | epoch 30, avg 8 |
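For reference, WER figures like those in the table are word-level Levenshtein distance divided by the number of reference words. The sketch below is illustrative only (the recipes use their own scoring tools, not this function):

```python
def wer(ref_words, hyp_words):
    """Word error rate = (substitutions + insertions + deletions) / len(ref)."""
    m, n = len(ref_words), len(hyp_words)
    # Dynamic-programming edit distance between the two word sequences.
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(n + 1):
        dp[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref_words[i - 1] == hyp_words[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[m][n] / m

# One substitution ("sat" -> "sit") and one deletion ("the") over 6 reference words.
print(round(100 * wer("the cat sat on the mat".split(),
                      "the cat sit on mat".split()), 2))  # -> 33.33
```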
Thanks! But your results seem very close. I will try your recipe on LibriSpeech sometime.
Maybe sometime later. Not recently.
@yfyeung can you update and merge this PR?