mengmeng666789 opened this issue 6 years ago
Hello, the WER interpretation here is a bit different; it uses the Levenshtein distance. Look:
The WER is defined as the editing/Levenshtein distance on the word level divided by the number of words in the original text. In case the original has more words (N) than the result and both are totally different (all N words resulting in 1 edit operation each), the WER will always be 1 (N / N = 1).
See this explanation: https://martin-thoma.com/word-error-rate-calculation/
You can use this: https://github.com/zszyellow/WER-in-python
Or this implementation: https://github.com/belambert/asr-evaluation
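For reference, here is a minimal sketch of a word-level WER computed with a dynamic-programming Levenshtein distance. This is only my own illustration of the definition above, not code taken from this repository or either of the linked ones:

```python
# Minimal word-level WER sketch:
# WER = (word-level Levenshtein distance) / (number of words in the reference).

def wer(reference, hypothesis):
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / float(len(ref))

print(wer("the cat sat on the mat", "the cat sat mat"))  # 2 deletions / 6 words ~= 0.33
```

A WER of 1.0 therefore just means roughly one edit per reference word, which is what you get when the hypothesis and reference share almost nothing.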
Out of curiosity, which model are you using? ds2_gru_model has some implementation problems; for example: https://github.com/robmsmt/KerasDeepSpeech/issues/7. I recommend using the repository https://github.com/reith/deepspeech-playground/, which is an updated fork of the official Baidu DeepSpeech2 implementation.
Regards, Edresson Casanova.
Hi, I did not change anything. I train with this command:

```
python run-train.py --train_file data/TIMIT/timit_train.csv --valid_files data/TIMIT/timit_test.csv --model_arch=3 --opt=adam --batchsize=32 --loadcheckpointpath checkpoints/epoch/LER-WER-best-DS3_2018-10-20_19-39/
```

I run the test with this command:

```
python run-test.py --test_files data/TIMIT/timit_test.csv --loadcheckpointpath checkpoints/epoch/LER-WER-best-DS3_2018-10-20_19-39/
```

and this is my result:

```
Test WER average is :0.75
Test LER average is :18.39
Test normalized LER is :0.39
```

I think these numbers are suspiciously high. Do you know what I am doing wrong?
And what result do you get? I need your help.
Thank you
Hi, I want to know why the WER is 0.8. I used the default parameters on TIMIT; have I done something wrong?