m-bain / whisperX

WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
BSD 2-Clause "Simplified" License

Would it be possible to use VOSK instead of wav2vec in order to force alignment? #463

Open BlueNebulaDev opened 10 months ago

BlueNebulaDev commented 10 months ago

I've tried to use whisperX to get accurate timestamps for some speech. It's definitely a big improvement over Whisper's output, but it's still far from ideal.

Before trying Whisper, I had been playing with VOSK, and its ability to timestamp words is impeccable. Unfortunately, it's not as accurate at understanding speech.

I'm wondering whether it would be possible to use VOSK as a backend for WhisperX. The idea is fairly simple: transcribe the audio file with both VOSK and Whisper, map the words output by the two tools onto each other, and keep Whisper's words with VOSK's timestamps. When VOSK and Whisper agree on the transcription, the task is easy. It's harder when the outputs differ, but since both outputs are sorted, it shouldn't be too hard to craft good heuristics that use both tools' timestamps, plus the phonemes of the detected words, to decide which words to map and which to drop.
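For the easy case (both tools agree), the merge could be sketched with a standard sequence alignment over the two sorted word lists. This is only a minimal sketch, not either library's API: the `merge_timestamps` function and the word-dict shape (`{"word", "start", "end"}`) are hypothetical, and the disagreement branch just falls back to Whisper's own timing where a real implementation would apply the phoneme-aware heuristics described above.

```python
# Hypothetical sketch of the proposed merge: keep Whisper's words,
# borrow VOSK's timestamps wherever the two transcripts agree.
from difflib import SequenceMatcher

def merge_timestamps(whisper_words, vosk_words):
    """Each input is a list of {"word": str, "start": float, "end": float}.

    Returns Whisper's words, carrying VOSK's timings on aligned spans.
    """
    # Normalize for matching: lowercase, strip trailing punctuation.
    a = [w["word"].lower().strip(".,!?") for w in whisper_words]
    b = [w["word"].lower().strip(".,!?") for w in vosk_words]

    merged = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
        if op == "equal":
            # Both tools agree: Whisper's text + VOSK's timing.
            for wi, vi in zip(range(i1, i2), range(j1, j2)):
                merged.append({"word": whisper_words[wi]["word"],
                               "start": vosk_words[vi]["start"],
                               "end": vosk_words[vi]["end"]})
        else:
            # Outputs differ: keep Whisper's words and timings as-is.
            # (A real heuristic could compare phonemes and neighboring
            # timestamps here to decide what to map or drop.)
            merged.extend(whisper_words[i1:i2])
    return merged
```

`SequenceMatcher.get_opcodes()` does the heavy lifting: because both word lists are in temporal order, its `equal` blocks are exactly the spans where the mapping is unambiguous.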

Would something like this fit in the scope of this project, or should I create a brand new project for this?

finnnnnnnnnnnnnnnnn commented 10 months ago

VOSK has an aligner branch; it might be tricky to get working, though.

https://github.com/alphacep/vosk-api/pull/756