Yep, this is probably caused by not enough data, so like @liaoweiguo said, see issue #6 for alternative datasets that we provide.
Thanks a lot! I'll try with larger datasets. Should the validation set remain the same? In #6 it is mentioned that the train set is sampled from the LibriSpeech train set. Is it safe to assume that the test set is sampled from the LibriSpeech test set?
The 5 hours of default train/test data are taken from the LibriSpeech "dev clean" subset, which is disjoint (no shared data points) from the other datasets in #6. So I would change the training set to one of the big training sets, and use something like the following for the new validation set.
In your speech.yml Kurfile, you currently have:

train:
  data:
    - speech_recognition:
        <<: *data
        url: "https://kur.deepgram.com/data/lsdc-train.tar.gz"
        checksum: >-
          fc414bccf4de3964f895eaa9d0e245ea28810a94be3079b55505cf0eb1644f94
  # ...

validate: &validate
  data:
    - speech_recognition:
        <<: *data
        url: "https://kur.deepgram.com/data/lsdc-test.tar.gz"
        checksum: >-
          e1c8cf9cd57e8c1ae952b6e4e40dcb5c8e3932c81ecd52c090e4a05c8ebbea2b
And you can change it to something like this:
train:
  data:
    - speech_recognition:
        <<: *data
        url: "http://kur.deepgram.com/data/lsc100-100p-train.tar.gz"
        checksum: >-
          cad3d2aa735d50d4ddb051fd8455f2dd7625ba0bb1c7dd1528da171a10f4fe86
  # ...

validate: &validate
  data:
    - speech_recognition:
        <<: *data
        url: "https://kur.deepgram.com/data/lsdc-test.tar.gz"
        checksum: >-
          e1c8cf9cd57e8c1ae952b6e4e40dcb5c8e3932c81ecd52c090e4a05c8ebbea2b
    - speech_recognition:
        <<: *data
        url: "https://kur.deepgram.com/data/lsdc-train.tar.gz"
        checksum: >-
          fc414bccf4de3964f895eaa9d0e245ea28810a94be3079b55505cf0eb1644f94
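If you ever want to confirm that one of these tarballs downloaded intact, you can compare its SHA-256 digest against the checksum values in the Kurfile. Here is a minimal sketch, assuming the archive has already been saved to the hypothetical path below; the real location depends on where your setup downloads the data:

```python
import hashlib
import os

# SHA-256 value copied from the Kurfile snippet above (lsc100-100p-train.tar.gz).
EXPECTED = "cad3d2aa735d50d4ddb051fd8455f2dd7625ba0bb1c7dd1528da171a10f4fe86"

# Hypothetical local path -- point this at wherever the tarball actually lives.
TARBALL = os.path.expanduser("~/kur-data/lsc100-100p-train.tar.gz")

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so large archives never sit fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(TARBALL)
print("checksum OK" if actual == EXPECTED else "checksum MISMATCH: " + actual)
```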
Note that if you use multiple data suppliers, I would recommend a bleeding-edge install of Kur: we fixed a couple of bugs related to this since the last official release on PyPI.
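If you want that bleeding-edge install, pip can pull straight from the repository, e.g. `pip install git+https://github.com/deepgram/kur.git` (or `pip install .` from a local clone), instead of the release on PyPI.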
Hi guys! I'm trying to train the speech example. According to your blog post it should learn in about 2 days (by the way, on what hardware?). I'm training on an Amazon p2.xlarge. After 20 hours the training loss dropped to 6-7, yet the validation loss just keeps growing (it is already above 770!). I find it rather weird. Here is the history of validation losses:

array([434.28509521, 317.38632202, 309.28720093, 312.42532349, 313.19784546,
       324.18057251, 343.06167603, 349.9078064 , 381.07891846, 386.22546387,
       405.9574585 , 417.4850769 , 424.54220581, 456.63919067, 461.6697998 ,
       476.76068115, 485.60809326, 485.19570923, 503.05889893, 506.81793213,
       518.48876953, 514.61688232, 554.96228027, 564.0111084 , 574.05865479,
       566.97747803, 589.06658936, 585.39562988, 600.09417725, 590.19805908,
       617.77484131, 607.53161621, 637.23944092, 628.73828125, 623.25372314,
       640.79986572, 637.19677734, 669.72247314, 630.89300537, 682.78503418,
       677.91912842, 682.71673584, 674.5670166 , 687.17169189, 654.59844971,
       705.17877197, 731.55621338, 704.7802124 , 687.76977539, 732.31542969,
       747.31225586, 691.43365479, 724.47790527, 739.60638428, 731.09661865,
       753.55401611, 751.44567871, 751.18878174, 777.97332764, 715.65222168,
       768.37219238, 711.67407227, 757.25933838, 774.5269165 , 786.63122559],
      dtype=float32)
This was trained on the current master. I'll try training with the pip version of Kur; maybe there will be a difference.
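For what it's worth, a quick way to see where the model stopped generalizing is to take the argmin of that validation-loss history; a minimal sketch in Python (the array below is abbreviated from the full list of numbers above):

```python
import numpy as np

# Per-epoch validation losses, abbreviated from the full history pasted above.
val_loss = np.array([434.285, 317.386, 309.287, 312.425, 313.198,
                     324.181, 343.062, 349.908, 381.079, 386.225],
                    dtype=np.float32)

best = int(np.argmin(val_loss))
print("Best validation loss {:.1f} at epoch {}; the steady rise afterwards, "
      "while training loss keeps falling, is the usual overfitting signature."
      .format(val_loss[best], best + 1))
```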