pasqui23 closed this issue 1 year ago
Is there text documentation regarding this? It also looks like the paper is locked behind a paywall.
Version without paywall. No code that I could find, but the high-level architecture is described nicely enough in the paper.
Okay I think I understand what the paper is talking about.
Basically, it uses facial expression recognition not only to switch the avatar's facial expression but also to trigger a pre-baked animation.
This could be interesting to implement once I get back around to implementing expression recognition.
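Roughly what I'd picture for that hook, as a sketch only (the expression labels, clip names, and the `set_expression`/`trigger_clip` callbacks are placeholders of mine, not anything from the paper or this codebase):

```python
# Sketch: a recognized expression switches the avatar's face AND fires a one-shot clip.
# Every name here is a hypothetical placeholder.

EXPRESSION_REACTIONS = {
    # recognized label -> (avatar expression preset, pre-baked animation clip)
    "surprised": ("face_surprised", "clip_alert_blink"),
    "happy": ("face_happy", "clip_bounce"),
    "angry": ("face_angry", "clip_lean_forward"),
}

def on_expression_recognized(label, set_expression, trigger_clip):
    """Switch the facial expression and trigger the matching pre-baked animation."""
    reaction = EXPRESSION_REACTIONS.get(label)
    if reaction is None:
        return  # unknown label: leave the avatar untouched
    preset, clip = reaction
    set_expression(preset)  # e.g. swap the blendshape preset / expression sprite
    trigger_clip(clip)      # e.g. play a one-shot reaction animation on top
```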
It also includes "persona parameters" that play idle animations and alter the avatar's posture depending on their value.
Sure, that's a fairly common thing to emulate. I believe VTube Studio does that by default.
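If it helps, this is roughly the kind of mapping I'd use to emulate it (just a sketch; the "energy"/"extroversion" names and the numbers are my own assumptions, not AlterEcho's actual parameter set):

```python
import random

# Two hypothetical persona parameters in [0, 1]; names and ranges are assumptions.
persona = {"energy": 0.8, "extroversion": 0.6}

def next_fidget_delay(persona):
    """More energetic personas fidget more often, i.e. shorter gaps between idle clips."""
    base = 12.0  # seconds between fidgets for a very calm persona (assumed value)
    return base * (1.0 - 0.75 * persona["energy"]) * random.uniform(0.8, 1.2)

def posture_blend(persona):
    """More extroverted personas get a straighter, more confident base pose.
    Returns a 0..1 blend weight between a slouched and an upright idle pose."""
    return persona["extroversion"]
```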
AlterEcho (video presentation) describes a system to improve VTuber animation, including a fairly detailed architectural diagram.
At a high level it consists of a set of animations that can be played based both on the streamer's emotion (so an alertness blink plays when they look surprised, for example) and on several "persona parameters", so that a more energetic persona fidgets more, or a more extroverted persona holds a straighter, more confident pose.
It all combines into a system that, the authors mention, has been mistaken for actual full-body mocap.
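To make the "mistaken for mocap" point concrete, here's a rough sketch of the layering idea as I read it (bone names and the merge rule are my own assumptions, not taken from the paper):

```python
# Sketch: the face tracker stays authoritative for the head/face, while the generated
# layer (reaction clips, idle fidgets, persona posture) drives the rest of the body.
# Bone names and the merge rule are assumptions, not the paper's implementation.

TRACKED_BONES = {"head", "neck", "jaw"}          # driven directly by face tracking
PROCEDURAL_BONES = {"spine", "arm_l", "arm_r"}   # driven by clips + persona posture

def compose_pose(tracked_pose, procedural_pose):
    """Merge both layers into the final skeleton pose (dicts of bone name -> transform)."""
    final = {}
    for bone in TRACKED_BONES | PROCEDURAL_BONES:
        if bone in TRACKED_BONES and bone in tracked_pose:
            final[bone] = tracked_pose[bone]        # tracking wins where it exists
        else:
            final[bone] = procedural_pose.get(bone) # otherwise the generated layer fills in
    return final
```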