alphacep / vosk-api

Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
Apache License 2.0
8.15k stars, 1.12k forks

Is it possible to get the timing of phonemes, instead of full words? #687

Open tscizzlebg opened 3 years ago

tscizzlebg commented 3 years ago

I searched for docs, or docstrings in source code, but couldn't find a nice summary of what the options for output were, so figured I'd ask here and it might be a super quick answer.

(Apologies if this is not the right place for questions. I posted on StackOverflow as well, but the vosk tag doesn't have that many total questions so I wasn't sure what y'all prefer.)

nshmyrev commented 3 years ago

We do not support phones yet. There is a pull request though

https://github.com/alphacep/vosk-api/pull/528

I posted on StackOverflow as well, but the vosk tag doesn't have that many total questions so I wasn't sure what y'all prefer.

Some time ago Stack Overflow blocked me from answering Vosk questions there, so I left it altogether.

tscizzlebg commented 3 years ago

Cool, thanks @nshmyrev ! I'm definitely looking forward to that PR getting in.

For getting more into the nitty-gritty of speech, and trying to create training sets for speech decoding models (as opposed to what I'm guessing are the more mainstream use cases of subtitling videos and stuff like that), output by phone is key.

Re StackOverflow, that's too bad. Good to know.

nshmyrev commented 3 years ago

trying to create training sets for speech decoding models (as opposed to what I'm guessing are the more mainstream use cases of subtitling videos and stuff like that), output by phone is key.

What are "speech decoding models" exactly? Could you please clarify?

tscizzlebg commented 3 years ago

Ah. For decoding intended speech from neural activity.

Here's an example of research toward restoring the communication ability of people with severe paralysis: http://changlab.ucsf.edu/s/anumanchipalli_chartier_2019.pdf

Shallowmallow commented 3 years ago

Shouldn't it be possible to use a model that recognizes all phones? Like this one, for example: https://github.com/xinjli/allosaurus

madhephaestus commented 1 year ago

For anyone looking for Java lip-sync software based on Vosk, I have a small stand-alone example for you: https://github.com/madhephaestus/TextToSpeechASDRTest.git

I was able to use the partial results with the word timings to calculate the timing of the phonemes (after looking up each word's phonemes in a phoneme dictionary). I then down-mapped the phonemes to visemes and stored the visemes in a list with timestamps. The timestamped visemes are computed in a static ~200 ms, after which the audio can begin playing with the mouth movements synchronized precisely to the precomputed phoneme start times. Compare this to Rhubarb, which takes as long to run as the audio file is long.
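A minimal Python sketch of the approach described above: spread each word's Vosk start/end timing evenly across its dictionary phonemes, then map each phoneme to a viseme. This is an illustration of the technique, not code from the linked project; the tiny dictionary, the viseme labels, and the function name are all made up for the example (a real system would load CMUdict and a proper phoneme-to-viseme table).

```python
# Illustrative CMUdict-style entries and phoneme->viseme map (both hypothetical).
PHONEME_DICT = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}
VISEME_MAP = {
    "HH": "rest", "AH": "open", "L": "tongue", "OW": "round",
    "W": "round", "ER": "round", "D": "tongue",
}

def word_timings_to_visemes(words):
    """words: list of dicts shaped like Vosk word results:
    {"word": ..., "start": seconds, "end": seconds}.
    Returns a timestamped viseme list, assuming phonemes are spaced
    evenly within each word (a rough but workable approximation)."""
    timeline = []
    for w in words:
        phones = PHONEME_DICT.get(w["word"].lower())
        if not phones:
            continue  # word not in dictionary; skip it
        step = (w["end"] - w["start"]) / len(phones)
        for i, ph in enumerate(phones):
            timeline.append({
                "time": round(w["start"] + i * step, 3),
                "viseme": VISEME_MAP.get(ph, "rest"),
            })
    return timeline

# Example: two words with Vosk-style timings.
timeline = word_timings_to_visemes([
    {"word": "hello", "start": 0.0, "end": 0.4},
    {"word": "world", "start": 0.5, "end": 1.0},
])
```

Because the whole timeline is precomputed from the word timings, it can be built before playback starts and then consumed while the audio plays, which is the "static cost up front" property described above.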