Closed lissyx closed 4 years ago
Re-exporting the model with different TFLite conversion parameters does not change the results.
Running the 0.6.0a15 model with the 0.6.0 binaries yields "good" results, as shown above. Running the 0.6.0 model with the 0.6.0a15 binaries yields "bad" results. I think this suggests that something in the model changed and that the TFLite export is affected by it :/.
I'm wondering if (and if so, why) we would need adjustments to the LM weights for TFLite:
$ ~/tmp/deepspeech/0.6.0/tflite/deepspeech --model ~/tmp/deepspeech/0.6.0/eng/en-us/output_graph.tflite --lm limited_lm.binary --trie limited_lm.trie --audio deepspeech_dump_all.wav --lm_alpha 2.0 --lm_beta 1.0 -t
TensorFlow: v1.14.0-21-ge77504a
DeepSpeech: v0.6.0-0-g6d43e21
INFO: Initialized TensorFlow Lite runtime.
turn the bedroom light on
cpu_time_overall=11.44250
$ ~/tmp/deepspeech/0.6.0/tflite/deepspeech --model ~/tmp/deepspeech/0.6.0/eng/en-us/output_graph.tflite --lm limited_lm.binary --trie limited_lm.trie --audio deepspeech_dump_all.wav -t
TensorFlow: v1.14.0-21-ge77504a
DeepSpeech: v0.6.0-0-g6d43e21
INFO: Initialized TensorFlow Lite runtime.
on the bedroom light on
cpu_time_overall=11.42704
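The two runs above differ only in the LM weights, yet the transcription flips. For context, CTC beam-search decoders of this kind typically rank hypotheses with a combined score of roughly acoustic log-prob + lm_alpha * LM log-prob + lm_beta * word count, which is why the weights alone can change the winning hypothesis. A minimal sketch with made-up numbers (this is an illustration of the scoring idea, not DeepSpeech's actual decoder):

```python
def rescore(acoustic_logp, lm_logp, word_count, lm_alpha, lm_beta):
    """Combine scores the way CTC beam-search decoders typically do:
    acoustic log-prob + alpha * LM log-prob + beta * word-count bonus."""
    return acoustic_logp + lm_alpha * lm_logp + lm_beta * word_count

# Two hypothetical beam candidates (all scores are made up):
# candidate A: slightly worse acoustically, but favored by the LM
# candidate B: slightly better acoustically, but unlikely under the LM
a = rescore(-12.0, lm_logp=-4.0, word_count=5, lm_alpha=2.0, lm_beta=1.0)
b = rescore(-11.5, lm_logp=-9.0, word_count=5, lm_alpha=2.0, lm_beta=1.0)
print(a > b)   # with lm_alpha=2.0 the LM-favored candidate wins

a0 = rescore(-12.0, lm_logp=-4.0, word_count=5, lm_alpha=0.0, lm_beta=0.0)
b0 = rescore(-11.5, lm_logp=-9.0, word_count=5, lm_alpha=0.0, lm_beta=0.0)
print(a0 > b0)  # with the LM weights zeroed, the ranking flips
```

So if the exported acoustic model shifts its output distribution even slightly, previously tuned lm_alpha/lm_beta values can stop producing the expected transcript.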
As @reuben suggested on IRC, this might be a side-effect of CUDNN and of incomplete testing of the TFLite model on my side.
Until a new release is out, re-exporting the model from a checkpoint with the patch from #2613 should work.
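For anyone wanting to try the workaround, the re-export would look roughly like this with the DeepSpeech training checkout (flag names are from that repo's exporter and the paths are placeholders; verify both against your local setup). The snippet only prints the command so it stays side-effect free:

```shell
# Hypothetical paths; point them at your own checkpoint and export directories.
CKPT_DIR="$HOME/checkpoints/deepspeech-0.6.0"
EXPORT_DIR="$HOME/exports/deepspeech-0.6.0-tflite"

# Build the re-export command (run it from a patched DeepSpeech checkout).
CMD="python -u DeepSpeech.py --checkpoint_dir $CKPT_DIR --export_dir $EXPORT_DIR --export_tflite"
echo "$CMD"
```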
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Not sure exactly why, but it has been reported by users: https://discourse.mozilla.org/t/android-project-with-pb-files-instead-of-tflite/50550/17
Checking myself for the IoT demo: there's definitely something going on here ...
And without LM:
And re-using v0.6.0-alpha.15, which is 0.5.1 re-exported:

Audio was recorded from Firefox Nightly, on a RPi4, via WebSocket.