hitorilabs / minimo

tiny mocap utilities

Hypothetical Design #3

Open hitorilabs opened 1 year ago

hitorilabs commented 1 year ago

I'll start w/ just iOS ARKit face tracking with blendshapes and maybe transforms.
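
A hypothetical shape for one captured frame, once it's streamed off the phone (TypeScript; ARKit really does expose ~52 named blendshape coefficients in [0, 1] plus a face anchor transform, but the field names and wire format here are made up):

```typescript
// A hypothetical shape for one streamed capture frame. ARKit exposes ~52
// named blendshape coefficients in [0, 1] and a face anchor transform;
// the field names and wire format here are assumptions, not ARKit's API.
interface MocapFrame {
  timestamp: number;                   // seconds since capture start
  blendshapes: Record<string, number>; // e.g. { jawOpen: 0.42, eyeBlinkLeft: 0.9 }
  transform?: Float32Array;            // optional 4x4 face anchor matrix, column-major
}
```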

I refuse to touch a mesh at any point. I just want to input a few keyframes of consistent character emotions and implement some interpolation between them - most likely movement driven by some emotion predictor + audio.
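
A minimal interpolation sketch under those assumptions - lerp between two keyframe weight vectors with a smoothstep ease (the `Pose` type and function names are hypothetical):

```typescript
type Pose = Record<string, number>; // blendshape name -> weight in [0, 1]

// Ease in/out instead of a linear ramp between keyframes.
function smoothstep(t: number): number {
  return t * t * (3 - 2 * t);
}

// Interpolate between two expression keyframes; missing keys default to 0.
function lerpPose(a: Pose, b: Pose, t: number): Pose {
  const s = smoothstep(t);
  const out: Pose = {};
  for (const key of new Set([...Object.keys(a), ...Object.keys(b)])) {
    out[key] = (a[key] ?? 0) * (1 - s) + (b[key] ?? 0) * s;
  }
  return out;
}
```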

I'm surprised more people don't focus on implementing an expressive idle state (real people sitting in a chair are never doing anything interesting).
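
One possible sketch of an expressive idle: layer a slow drift and occasional blinks on top of a resting pose. The blendshape names below are ARKit's; the approach and every constant are invented.

```typescript
type Pose = Record<string, number>; // as in the previous sketch

// Layer a slow drift and periodic blinks on top of a resting pose.
// The blendshape names are ARKit's; every constant here is invented.
function idlePose(base: Pose, timeSec: number): Pose {
  const out: Pose = { ...base };
  // breathing-like drift on the inner brows
  out["browInnerUp"] = (out["browInnerUp"] ?? 0) + 0.05 * Math.sin(timeSec * 0.4);
  // a short blink pulse roughly every ~4s
  const phase = timeSec % 4.2;
  const blink = phase < 0.15 ? Math.sin((phase / 0.15) * Math.PI) : 0;
  out["eyeBlinkLeft"] = Math.max(out["eyeBlinkLeft"] ?? 0, blink);
  out["eyeBlinkRight"] = Math.max(out["eyeBlinkRight"] ?? 0, blink);
  return out;
}
```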

hitorilabs commented 1 year ago

Trying out WebGPU for rendering everything. I think it can actually be a valid option because you get a lot for free.
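
For a sense of scale, the bring-up really is small - a sketch of the init path where the wrapper function is hypothetical but the `navigator.gpu` calls are the standard WebGPU API:

```typescript
// Standard WebGPU bring-up (needs @webgpu/types for TypeScript).
async function initGPU(canvas: HTMLCanvasElement) {
  const adapter = await navigator.gpu?.requestAdapter();
  if (!adapter) throw new Error("WebGPU not available");
  const device = await adapter.requestDevice();
  const ctx = canvas.getContext("webgpu") as GPUCanvasContext;
  ctx.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });
  return { device, ctx };
}
```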

hitorilabs commented 1 year ago

New idea: train a personalized blendshape vectors -> expression model (i.e. not actually a classifier, just attach PNG drawings of expressions to some vectors in latent space + search)
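
A sketch of how the attach-and-search part could work, assuming blendshape weights are flattened into fixed-order vectors and matched with plain cosine similarity (all names hypothetical):

```typescript
// Store a few labeled expression vectors, each paired with a PNG drawing,
// then nearest-neighbor match an incoming blendshape vector against them.
interface ExpressionEntry {
  png: string;      // path to the drawing for this expression
  vector: number[]; // blendshape weights flattened in a fixed key order
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function nearestExpression(query: number[], entries: ExpressionEntry[]): ExpressionEntry {
  return entries.reduce((best, e) =>
    cosine(query, e.vector) > cosine(query, best.vector) ? e : best);
}
```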

This way you can save a lot of resources on the device doing the recording, and you get a pretty novel "interface" for editing character expressions (i.e. make an expression IRL, map it to blendshape vectors automatically, then move the vectors around in latent space to tweak it).
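
And the "move the vectors around" step could be as simple as vector arithmetic between the captured vector and a stored one (hypothetical sketch):

```typescript
// Move a captured vector toward (amount in (0, 1)), onto (1), or past (>1)
// a stored expression vector. Assumes both vectors share the same key order.
function nudge(captured: number[], target: number[], amount: number): number[] {
  return captured.map((v, i) => v + amount * (target[i] - v));
}

// e.g. exaggerate a captured smile by pushing it 30% past the stored one:
// const tweaked = nudge(capturedVector, smileEntry.vector, 1.3);
```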