I guess it might be better to transform the speech audio directly into animation: http://www.yisongyue.com/publications/siggraph2017_speech.pdf
As you mentioned, only the lips and tongue would be visible in the case of a video game.
Otherwise, for scholarly purposes I think it's a very useful idea to represent these in an animated diagram. When studying phonetics, I've seen courses whose material includes animations; no idea where they got them from.
Apart from that, and for scholarly/fun purposes, there's this app:
Well, NVIDIA did it.
Link?
@naturallymitchell It's part of their new "Omniverse Machinima" tool, which isn't available yet, but you can register for the beta here: https://www.nvidia.com/en-us/geforce/news/omniverse-machinima/
They're calling the tech Audio2Face (which I guess isn't the same thing as what I suggested; I might've jumped the gun while watching the live stream). But this approach might be even better, since not everyone speaks exactly the same way – accents and such – and those differences should be reflected in the mouth animations.
I don't know if this is worthwhile, but it was in my "Draft Ideas" note page, so here goes nothing.
This is for video games, for when characters talk: their mouths would move according to the IPA transcription of the dialog, so it would look realistic.
These will probably give you an idea of what's in my head:
So what I'm imagining is like this:
1. Dialog: Hello there
2. Translated into IPA (we haven't used the tool I'm talking about yet): həˈloʊ ðɛr
3. Translated into an animation of the character's mouth with the tool. I suppose animating only the lips and tongue would be enough.
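To make that last step a bit more concrete, here's a minimal sketch in Python of how the phoneme-to-viseme mapping might look. The viseme labels and groupings are made up for illustration (they don't come from any particular engine or standard), and the IPA input is segmented by hand; a real pipeline would also need a grapheme-to-phoneme step to get from the dialog text to IPA.

```python
# Minimal sketch (not a real implementation): map an IPA transcription to a
# sequence of viseme labels that a game engine could key mouth shapes to.
# The phoneme groupings and viseme names below are placeholders.

IPA_TO_VISEME = {
    # bilabials -> lips pressed together
    "p": "MBP", "b": "MBP", "m": "MBP",
    # labiodentals -> lower lip against upper teeth
    "f": "FV", "v": "FV",
    # dental fricatives -> tongue between teeth
    "ð": "TH", "θ": "TH",
    # alveolars -> tongue behind upper teeth
    "t": "TD", "d": "TD", "n": "TD", "l": "TD",
    "s": "SZ", "z": "SZ",
    # rounded vowels -> rounded lips
    "oʊ": "OU", "u": "OU", "ʊ": "OU", "ɔ": "OU",
    # front vowels -> spread lips
    "i": "EE", "ɪ": "EE", "ɛ": "EE", "æ": "EE",
    # open/central vowels and r-colored sounds
    "ɑ": "AA", "ʌ": "AA", "ə": "ER", "ɚ": "ER", "r": "ER",
}

def ipa_to_visemes(phonemes):
    """Convert a list of IPA phonemes into viseme labels, ignoring stress marks."""
    visemes = []
    for ph in phonemes:
        ph = ph.lstrip("ˈˌ")                            # drop stress marks
        visemes.append(IPA_TO_VISEME.get(ph, "REST"))   # unknown sound -> neutral mouth
    return visemes

# "Hello there" -> həˈloʊ ðɛr (segmented by hand here; a real pipeline would
# need a grapheme-to-phoneme step, e.g. a pronunciation dictionary, first)
print(ipa_to_visemes(["h", "ə", "ˈl", "oʊ", "ð", "ɛ", "r"]))
# -> ['REST', 'ER', 'TD', 'OU', 'TH', 'EE', 'ER']
```

In an engine, each viseme label would then drive a blend shape or bone pose for the lips and tongue over the duration of that phoneme.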
Do you think this is stupid? Is it already done, or is it being worked on? If you need more detail, let me know.