These are various examples of how to use or integrate DeepSpeech using our packages.
They are a good way to try out DeepSpeech before learning how it works in detail, as well as a source of inspiration for integrating it into your application or for solving common tasks like voice activity detection (VAD) or microphone streaming.
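The VAD-based examples typically feed short, fixed-size frames of 16 kHz, 16-bit mono PCM audio to a voice activity detector (such as py-webrtcvad) before passing speech segments to DeepSpeech. A minimal sketch of that framing step is shown below; the names and constants are illustrative, not taken from any specific example in this repository:

```python
# Split raw 16 kHz, 16-bit mono PCM audio into fixed-size frames,
# as a frame-based VAD expects. Illustrative sketch only.

SAMPLE_RATE = 16000   # Hz; the sample rate DeepSpeech models expect
FRAME_MS = 30         # common VAD frame duration (10, 20, or 30 ms)
BYTES_PER_SAMPLE = 2  # 16-bit PCM

def frames(pcm: bytes):
    """Yield successive complete frames of raw PCM audio;
    any trailing partial frame is dropped."""
    frame_bytes = SAMPLE_RATE * FRAME_MS // 1000 * BYTES_PER_SAMPLE
    for offset in range(0, len(pcm) - frame_bytes + 1, frame_bytes):
        yield pcm[offset:offset + frame_bytes]
```

Each yielded frame could then be classified as speech or silence by the VAD, and consecutive speech frames buffered and sent to the recognizer.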
Contributions are welcome!
**Note:** These examples target DeepSpeech 0.9.x only. If you're using a different release, switch to the corresponding branch for that release:

* `v0.9.x <https://github.com/mozilla/DeepSpeech-examples/tree/r0.9>`_
* `v0.8.x <https://github.com/mozilla/DeepSpeech-examples/tree/r0.8>`_
* `v0.7.x <https://github.com/mozilla/DeepSpeech-examples/tree/r0.7>`_
* `v0.6.x <https://github.com/mozilla/DeepSpeech-examples/tree/r0.6>`_
* `master branch <https://github.com/mozilla/DeepSpeech-examples/tree/master>`_
List of examples
================

* `Microphone VAD streaming <mic_vad_streaming/README.rst>`_
* `VAD transcriber <vad_transcriber/>`_
* `AutoSub <autosub/>`_
* `FFMPEG VAD streaming <ffmpeg_vad_streaming/README.MD>`_
* `Node.JS microphone VAD streaming <nodejs_mic_vad_streaming/Readme.md>`_
* `Node.JS wav <nodejs_wav/Readme.md>`_
* `Web Microphone Websocket streaming <web_microphone_websocket/Readme.md>`_
* `Electron wav transcriber <electron/Readme.md>`_
* `.NET framework <net_framework/>`_
* `Universal Windows Platform (UWP) <uwp/>`_
* `mozilla/androidspeech library <https://github.com/mozilla/androidspeech/>`_
* `nim_mic_vad_streaming <nim_mic_vad_streaming/README.md>`_