hitorilabs opened this issue 1 year ago
I'm trying out WebGPU for rendering everything; I think it can actually be a viable option because you get a lot for free.
New idea: train a personalized blendshape-vector -> expression model (i.e. not actually a classifier, just attach PNG drawings of expressions to some vectors in latent space and search against them).
This way you can save a lot of resources on the device doing the recording, and you get a pretty novel "interface" for editing character expressions (i.e. make an expression IRL, map it to blendshape vectors automatically, then move the vectors around in latent space to tweak it).
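A minimal sketch of the search half of that idea, assuming each frame arrives as a fixed-length float vector (ARKit exposes 52 face blendshape coefficients) and the "latent space" is just the raw coefficient space for now; `ExpressionLibrary` and the labels are hypothetical names:

```python
import math

BLENDSHAPE_DIM = 52  # ARKit exposes 52 face blendshape coefficients


class ExpressionLibrary:
    """Maps labeled reference expressions (e.g. PNG drawings) to
    blendshape vectors, then matches live frames by nearest neighbor."""

    def __init__(self):
        self.entries = []  # list of (label, vector) pairs

    def attach(self, label, vector):
        # Attach a drawing's label to a captured blendshape vector.
        assert len(vector) == BLENDSHAPE_DIM
        self.entries.append((label, list(vector)))

    def nearest(self, frame):
        # Euclidean distance in coefficient space; swap in a learned
        # embedding later without changing the interface.
        def dist(v):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(frame, v)))

        return min(self.entries, key=lambda e: dist(e[1]))[0]
```

Tweaking an expression then amounts to nudging the stored vector and re-running the search, no mesh involved.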
I'll start w/ just iOS ARKit face tracking with blendshapes and maybe transforms.
I refuse to touch a mesh at any point. I just want to input a few frames of a consistent character's expressions and implement some interpolation between them, with movement most likely driven by some emotion predictor + audio.
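The interpolation part could be as simple as easing between blendshape keyframes; a sketch, assuming the emotion/audio side just picks the target keyframe and transition length (the smoothstep easing is an illustrative choice):

```python
def lerp(a, b, t):
    """Linear interpolation between two blendshape vectors at t in [0, 1]."""
    return [x + (y - x) * t for x, y in zip(a, b)]


def ease_in_out(t):
    # Smoothstep easing so transitions don't look robotic.
    return t * t * (3.0 - 2.0 * t)


def transition(start, end, n_frames):
    """Yield intermediate blendshape vectors from start to end."""
    for i in range(n_frames):
        t = i / (n_frames - 1) if n_frames > 1 else 1.0
        yield lerp(start, end, ease_in_out(t))
```

The emotion predictor only ever has to output "go to keyframe X over N frames", so the per-frame work stays trivial.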
I'm surprised more people don't focus on implementing an expressive idle state (real people sitting in a chair are never doing anything interesting).
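A cheap idle state can be procedural: periodic blinks plus low-amplitude drift on a few coefficients. A sketch, where the indices, amplitudes, and periods are all made up for illustration:

```python
import math


def idle_pose(t, base):
    """Perturb a base blendshape vector at time t (seconds) so the
    character never sits perfectly still."""
    pose = list(base)
    # Slow "breathing" drift on two hypothetical coefficients; the
    # incommensurate periods keep the motion from looping visibly.
    pose[0] = base[0] + 0.05 * math.sin(2 * math.pi * t / 4.0)
    pose[1] = base[1] + 0.03 * math.sin(2 * math.pi * t / 6.3)
    # Quick blink (~150 ms) roughly every 4 seconds; index 2 stands in
    # for an eye-close coefficient.
    if t % 4.0 < 0.15:
        pose[2] = 1.0
    return pose
```

Layering this under the keyframe interpolation means the character stays alive even when the predictor has nothing to say.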