huggingface / speech-to-speech

Speech To Speech: an effort for an open-sourced and modular GPT4-o
Apache License 2.0
3.52k stars 365 forks

[Feature request] How about adding an optional speech to viseme model at the end of our chain? #37

Open fabiocat93 opened 2 months ago

fabiocat93 commented 2 months ago

Hi there,

Thank you so much for your work on this project. It's truly amazing, and I’m excited to see all the innovative tools that people will build based on it. I can already imagine many will integrate your speech-to-speech pipeline with avatar or robot embodiments, where lip sync will be crucial.

To support this, could you help us add a step to the current flow? The current pipeline includes 1) speech-to-text, 2) an LLM, and 3) text-to-speech. I'd like to add a fourth step: either a speech-to-viseme model, or speech-to-text with `return_timestamps="word"`, followed by manual mapping of words to phonemes, and then to visemes.
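The second route (word-level timestamps, then words → phonemes → visemes) can be sketched in plain Python. Everything below is illustrative: the word timestamps stand in for real ASR output, the phoneme table stands in for a proper G2P step or pronunciation dictionary, and the viseme names are placeholders since each avatar rig defines its own inventory:

```python
# Sketch of the word-timestamp route: take word-level timestamps (shaped like
# the "chunks" an ASR pipeline returns with return_timestamps="word"), look
# each word up in a small hand-written phoneme table, and spread the word's
# phonemes evenly across its time span.

# Hypothetical word-level timestamps.
WORD_TIMESTAMPS = [
    {"text": "hello", "timestamp": (0.0, 0.4)},
    {"text": "world", "timestamp": (0.4, 0.9)},
]

# Toy ARPAbet lookup; a real pipeline would use a G2P library or a
# pronunciation dictionary instead of a hard-coded table.
PHONEMES = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}

# Toy phoneme -> viseme table (viseme names are made up for this sketch).
VISEMES = {
    "HH": "sil", "AH": "aa", "L": "nn", "OW": "oh",
    "W": "ou", "ER": "E", "D": "DD",
}

def words_to_viseme_track(chunks):
    """Return (viseme, start, end) triples, one per phoneme, by dividing
    each word's time span evenly among its phonemes."""
    track = []
    for chunk in chunks:
        start, end = chunk["timestamp"]
        phones = PHONEMES[chunk["text"]]
        step = (end - start) / len(phones)
        for i, p in enumerate(phones):
            track.append((VISEMES[p], start + i * step, start + (i + 1) * step))
    return track

if __name__ == "__main__":
    for viseme, s, e in words_to_viseme_track(WORD_TIMESTAMPS):
        print(f"{s:.2f}-{e:.2f}: {viseme}")
```

Even time division is a crude stand-in for real phoneme durations, which is one reason a dedicated phoneme recognizer with its own frame timing may be preferable.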

Best regards,
Fabio

andimarafioti commented 2 months ago

Hi Fabio! This sounds like a great idea! Would you be open to tackling it?

fabiocat93 commented 2 months ago

I'll run some experiments!

Meshwa428 commented 2 months ago

Do you mean something like this one?

https://huggingface.co/docs/transformers/en/model_doc/wav2vec2_phoneme

fabiocat93 commented 2 months ago

> Do you mean something like this one?
>
> https://huggingface.co/docs/transformers/en/model_doc/wav2vec2_phoneme

Yup, the best solution I've found so far in terms of quality/latency is wav2vec2_phoneme followed by a phoneme-to-viseme mapping. I think I can open a PR by the end of the week; there are a few language-specific edge cases I still want to verify.
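A minimal sketch of the mapping half of that approach: wav2vec2_phoneme checkpoints decode to a space-separated IPA string, and a lookup table turns that into visemes. The IPA → viseme table here is illustrative only (real rigs define their own viseme set), and the model call is shown in a comment since it needs a checkpoint download:

```python
# Sketch: map the space-separated IPA string that a wav2vec2-phoneme model
# emits onto a viseme sequence. Table entries are illustrative placeholders.

IPA_TO_VISEME = {
    "h": "sil", "ə": "aa", "l": "nn", "oʊ": "oh",
    "w": "ou", "ɜː": "E", "d": "DD",
}

def ipa_to_visemes(phoneme_str):
    """Map a space-separated IPA phoneme string to a list of visemes,
    skipping symbols the (toy) table does not cover."""
    return [IPA_TO_VISEME[p] for p in phoneme_str.split() if p in IPA_TO_VISEME]

if __name__ == "__main__":
    # In the real pipeline, phoneme_str would come from a phoneme CTC model
    # such as facebook/wav2vec2-lv-60-espeak-cv-ft (see the wav2vec2_phoneme
    # docs linked above), roughly:
    #
    #   import torch
    #   from transformers import AutoProcessor, AutoModelForCTC
    #   processor = AutoProcessor.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")
    #   model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")
    #   inputs = processor(audio_array, sampling_rate=16_000, return_tensors="pt")
    #   ids = torch.argmax(model(**inputs).logits, dim=-1)
    #   phoneme_str = processor.batch_decode(ids)[0]
    #
    # Hard-coded here so the sketch runs offline:
    phoneme_str = "h ə l oʊ w ɜː l d"
    print(ipa_to_visemes(phoneme_str))
```

The language-specific edge cases mentioned above would live in this table: different languages need different phoneme inventories, so the IPA → viseme mapping is per-language work.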

fabiocat93 commented 2 months ago

I followed up with a PR 🤗