fabiocat93 opened this issue 2 months ago
Hi Fabio! This sounds like a great idea! Would you be open to tackling it?
I'll run some experiments!
Do you mean something like this one?
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2_phoneme
Yup, the best solution I've found so far in terms of quality/latency is wav2vec2_phoneme followed by a phoneme-to-viseme mapping. I think I can open a PR by the end of the week; there are a few language-specific edge cases I still want to verify.
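Roughly, that combination could look like the minimal sketch below. It assumes the `facebook/wav2vec2-lv-60-espeak-cv-ft` checkpoint from the docs linked above, and the IPA-to-viseme table is illustrative and deliberately incomplete; the viseme labels are hypothetical placeholders, not a standard inventory:

```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Phoneme-level CTC checkpoint from the wav2vec2_phoneme docs linked above.
MODEL_ID = "facebook/wav2vec2-lv-60-espeak-cv-ft"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Illustrative IPA-phoneme -> viseme table. The labels are placeholders;
# a real mapping has to cover the full phone set of each target language.
PHONEME_TO_VISEME = {
    "p": "BMP", "b": "BMP", "m": "BMP",
    "f": "FV", "v": "FV",
    "s": "SZ", "z": "SZ",
    "t": "TD", "d": "TD",
    "ɑ": "AA", "æ": "AA",
    "i": "EE", "ɪ": "EE",
    "u": "OO", "ʊ": "OO",
}

def speech_to_visemes(waveform, sampling_rate=16_000):
    """Transcribe a 16 kHz mono waveform to IPA phonemes, then map to visemes."""
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # The phoneme tokenizer decodes to space-separated IPA, e.g. "m ɪ s t ɚ".
    phonemes = processor.batch_decode(predicted_ids)[0].split()
    return [(ph, PHONEME_TO_VISEME.get(ph, "neutral")) for ph in phonemes]
```

Note the CTC output above carries no explicit timing; for lip sync you would still need per-phoneme timestamps, e.g. recovered from the CTC frame indices (one logit frame is roughly 20 ms for wav2vec2).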
I followed up with a PR 🤗
Hi there,
Thank you so much for your work on this project. It's truly amazing, and I’m excited to see all the innovative tools that people will build based on it. I can already imagine many will integrate your speech-to-speech pipeline with avatar or robot embodiments, where lip sync will be crucial.
To support this, could you help us add functionality to the current flow? The current process has three stages: 1) speech-to-text, 2) LLM, and 3) text-to-speech. I'd like to add a fourth stage: either speech-to-viseme directly, or speech-to-text with `return_timestamps="word"`, followed by manual mapping of words to phonemes, and then of phonemes to visemes.
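To make the second option concrete, here is a rough sketch of what that fourth stage could look like. The model choice (`openai/whisper-small`), the `phonemizer` package for grapheme-to-phoneme conversion, and the viseme labels are all illustrative assumptions, not a proposal for specific dependencies:

```python
from transformers import pipeline
from phonemizer import phonemize            # illustrative G2P choice
from phonemizer.separator import Separator

# Stage 1 of the pipeline, reused here with per-word timestamps enabled.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# Placeholder phoneme -> viseme table; a real one needs the full phone set.
PHONEME_TO_VISEME = {"m": "BMP", "ɪ": "EE", "s": "SZ", "t": "TD", "ɚ": "ER"}

def speech_to_visemes(audio_path):
    """Audio -> words with timestamps -> phonemes -> visemes."""
    output = asr(audio_path, return_timestamps="word")
    results = []
    for chunk in output["chunks"]:
        word = chunk["text"].strip()
        start, end = chunk["timestamp"]
        # Space-separate individual phones so we can map them one by one.
        phones = phonemize(
            word, language="en-us", backend="espeak",
            separator=Separator(phone=" "),
        ).split()
        results.append({
            "word": word,
            "start": start,
            "end": end,
            "visemes": [PHONEME_TO_VISEME.get(p, "neutral") for p in phones],
        })
    return results
```

Each word's [start, end] interval would still need to be distributed across its phonemes (e.g. uniformly) to get per-viseme timing for an avatar.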
Best regards,
Fabio