Closed — iamxiaoyubei closed this issue 5 years ago
Jasper-Large achieves 3.61% WER with greedy decoding on the test-clean dataset. For more details, see our arXiv paper. For other models, see: ASR WER
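For context, WER (word error rate) is the word-level edit distance between the reference transcript and the model's hypothesis, divided by the reference length. The exact scoring script is not shown in this thread; the sketch below is a minimal, generic implementation of that standard metric, not NVIDIA's evaluation code.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for word-level Levenshtein distance.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("hello world", "hello word"))  # one substitution over two words -> 0.5
```

"Greedy" decoding here means taking the argmax character at each CTC time step, with no beam search or external language model; the reported 3.61% corresponds to this averaged over the test-clean set.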
Your results are really good!
Were these results obtained using only the LibriSpeech dataset for training? If not, could you tell me which datasets were used in total, or how many hours of audio they contain? Thank you!
Only LibriSpeech.
Would you mind telling me the WER on each of the LibriSpeech subsets (dev-clean, dev-other, test-clean, test-other) for the ASR task? Thank you~