Open juanmc2005 opened 1 year ago
Is this feature considered implemented?
@BlokusPokus it seemed to work last time I tried but I didn't merge because I wanted to include a faster implementation of Whisper and I needed to clean up the code. Feel free to try it out but it's a pretty old version of the library. I need to find some time to update this PR. If you feel like it, it would be an amazing contribution!
Yeah we definitely need a faster-whisper / WhisperLive implementation. WhisperLive also integrated VAD and I see it has some overlapping features.
Depends on #143
Adding a streaming ASR pipeline required a major refactoring, which began in #143. This PR continues that effort by allowing a new type of pipeline that transcribes speech instead of segmenting it. A default ASR model based on Whisper is provided, but the dependency is not mandatory.
Additional modifications were needed to make Whisper compatible with batched inference. Note that we do not condition Whisper on previous transcriptions here. I expected this to degrade transcription quality, but it proved rather robust in my experiments with microphone input and spontaneous speech in several languages (English, Spanish and French).
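The two ideas above (batched inference without conditioning on previous text, plus a local VAD to skip silent chunks) can be sketched roughly as follows. This is illustrative only and is not diart's actual API: `voiced_prob`, `transcribe_batch` and `vad_threshold` are made-up stand-ins for a segmentation-based VAD score and a batched Whisper call.

```python
# Illustrative sketch only: NOT diart's actual API.
# `voiced_prob` and `transcribe_batch` are hypothetical stand-ins for a
# segmentation-model VAD score and a batched Whisper inference call.

def voiced_prob(chunk):
    # Hypothetical VAD score: fraction of samples above a small amplitude
    # threshold, standing in for a real segmentation model's output.
    if not chunk:
        return 0.0
    return sum(1 for s in chunk if abs(s) > 0.01) / len(chunk)

def transcribe_batch(chunks):
    # Stand-in for batched Whisper inference. Each chunk is transcribed
    # independently: no conditioning on previous transcriptions.
    return [f"<transcript of {len(c)} samples>" for c in chunks]

def transcribe_stream(chunks, vad_threshold=0.5):
    # Gate chunks with the local VAD first, then run one batched ASR call
    # over the voiced chunks only.
    voiced = [c for c in chunks if voiced_prob(c) >= vad_threshold]
    skipped = len(chunks) - len(voiced)
    return transcribe_batch(voiced), skipped

chunks = [
    [0.2, -0.3, 0.25, 0.1],     # voiced
    [0.001, 0.0, -0.002, 0.0],  # silence -> skipped before ASR
    [0.5, 0.4, -0.6, 0.3],      # voiced
]
texts, skipped = transcribe_stream(chunks)
print(texts, skipped)
```

The point of the gating step is that silent chunks never reach the (expensive) ASR model at all, which is why it can be faster than filtering on Whisper's own no-speech probability after the fact.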
The new `Transcription` pipeline can also use a segmentation model as a local VAD to skip non-voiced chunks. In my experiments, this worked better and faster than relying on Whisper's `no_speech_prob`. `Transcription` is also compatible with `diart.stream`, `diart.benchmark`, `diart.tune` and `diart.serve` (hence `diart.client` too).

Still missing
Changelog
TBD