juanmc2005 / diart

A Python package to build AI-powered real-time audio applications
https://diart.readthedocs.io
MIT License

Speaker-blind speech recognition #144

Open · juanmc2005 opened 1 year ago

juanmc2005 commented 1 year ago

Depends on #143

Adding a streaming ASR pipeline required a big refactoring, which began with #143. This PR continues that effort by allowing a new type of pipeline that transcribes speech instead of segmenting it. A default ASR model based on Whisper is provided, but the dependency is optional.
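For illustration, here's a rough sketch of how the new pipeline could be used from Python. The `Transcription` class name and its import path are assumptions based on this description (the PR isn't merged), and the `MicrophoneAudioSource` and `StreamingInference` signatures vary across diart versions:

```python
# Hypothetical usage sketch: `Transcription` and its import path are assumptions
# from this PR's description, not a released diart API.
from diart.blocks import Transcription
from diart.sources import MicrophoneAudioSource
from diart.inference import StreamingInference  # named RealTimeInference in older releases

pipeline = Transcription()  # defaults to the optional Whisper-based ASR model
source = MicrophoneAudioSource(pipeline.config.sample_rate)  # constructor args vary by version
inference = StreamingInference(pipeline, source)
prediction = inference()  # blocks, emitting transcriptions as audio streams in
```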

Further modifications were needed to make Whisper compatible with batched inference. Note that we do not condition Whisper on previous transcriptions here. I expected this to degrade transcription quality, but it proved rather robust in my microphone experiments with spontaneous speech in several languages (English, Spanish, and French).
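This isn't the PR's code, but the idea of batched, non-conditioned decoding can be sketched with openai-whisper's public API: `decode()` accepts a batch of mel spectrograms and, unlike `transcribe()`, never conditions on previous text. File names below are placeholders:

```python
import torch
import whisper

# Batched Whisper decoding without conditioning on previous transcriptions.
model = whisper.load_model("small")

chunks = [whisper.load_audio(p) for p in ("chunk1.wav", "chunk2.wav")]  # placeholders
mels = torch.stack([
    whisper.log_mel_spectrogram(whisper.pad_or_trim(torch.from_numpy(c)))
    for c in chunks
]).to(model.device)

options = whisper.DecodingOptions(without_timestamps=True, fp16=torch.cuda.is_available())
results = model.decode(mels, options)  # one DecodingResult per chunk
for result in results:
    print(result.text)
```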

The new Transcription pipeline can also use a segmentation model as a local VAD to skip non-voiced chunks. In my experiments, this worked better and faster than relying on Whisper's no_speech_prob.
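As a sketch of that gating idea, assuming diart's `SegmentationModel` wrapper around pyannote (the threshold and tensor shapes are illustrative, not the PR's actual values):

```python
import torch
from diart.models import SegmentationModel

# Use a segmentation model as a local VAD: skip the ASR call entirely
# when no speaker is active enough in the chunk.
segmentation = SegmentationModel.from_pyannote("pyannote/segmentation")

def is_voiced(chunk: torch.Tensor, threshold: float = 0.5) -> bool:
    # chunk: (batch, channels, samples) waveform; threshold is illustrative
    with torch.no_grad():
        activity = segmentation(chunk)  # (batch, frames, speakers) probabilities
    return bool(activity.max() >= threshold)
```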

Transcription is also compatible with diart.stream, diart.benchmark, diart.tune and diart.serve (hence diart.client too).
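For the diart.benchmark path, the sketch below follows diart's documented `Benchmark` usage; the `Transcription` class is the same assumption as above, and the exact constructor and call signatures differ across diart versions:

```python
from diart.blocks import Transcription  # hypothetical, as above
from diart.inference import Benchmark

# Hypothetical: run the transcription pipeline over a folder of WAV files,
# mirroring how diart.benchmark drives diarization pipelines.
benchmark = Benchmark("/path/to/wav/files")
results = benchmark(Transcription())  # call signature varies by diart version
```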

Still missing

Changelog

TBD

BlokusPokus commented 7 months ago

Is this feature considered implemented?

juanmc2005 commented 7 months ago

@BlokusPokus it seemed to work the last time I tried it, but I didn't merge because I wanted to include a faster implementation of Whisper and the code needed some cleanup. Feel free to try it out, but keep in mind it targets a pretty old version of the library. I need to find some time to update this PR. If you feel like taking it on, it would be an amazing contribution!

GeorgeDeac commented 1 month ago

Yeah, we definitely need a faster-whisper / WhisperLive implementation. WhisperLive also integrates VAD, and I see it has some overlapping features.
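For reference, this is roughly what the faster-whisper path (with its built-in Silero VAD filter) looks like using its public API; the model size, device, and file name are placeholders:

```python
from faster_whisper import WhisperModel

# CTranslate2-backed Whisper with the built-in VAD filter enabled.
model = WhisperModel("small", device="cuda", compute_type="float16")
segments, info = model.transcribe("chunk.wav", vad_filter=True)
for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```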