Closed enzyme69 closed 6 years ago
Brain overdrives...
I am currently studying the Face Animation Mesh using an iPhone X. It streams facial-change data whose weights are said to represent the blend shape coefficients of the face. Although simplified, we can think of them as more or less representing the facial muscles, and the changes during certain emotions can be seen clearly if I use Tension Color to visualize them.
We can perhaps think of this like our brain glowing based on emotion.
I am trying to visualize and study the data.
I wonder if I could get 100 people to act out the 7 basic emotions (happy, sad, surprise, disgust, angry, neutral, fear) and then compare the colors of their blend shapes.
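As a sketch of how such a comparison could start (plain Python, with made-up toy numbers standing in for real capture data), the per-emotion average coefficient vector across subjects could be computed first, and later mapped onto a color ramp:

```python
from statistics import mean

EMOTIONS = ["happy", "sad", "surprise", "disgust", "angry", "neutral", "fear"]

def average_profiles(recordings):
    """recordings: {emotion: [coefficient_vector_per_subject, ...]}.
    Returns the per-emotion mean coefficient vector across subjects,
    which could then be mapped to a color ramp for comparison."""
    profiles = {}
    for emotion, vectors in recordings.items():
        # Average each blend-shape slot across all subjects.
        profiles[emotion] = [mean(col) for col in zip(*vectors)]
    return profiles

# Toy data: 3 subjects, 4 (of the 51) coefficients each.
happy = [[0.9, 0.1, 0.0, 0.2], [0.8, 0.2, 0.1, 0.3], [1.0, 0.0, 0.0, 0.1]]
sad   = [[0.1, 0.8, 0.7, 0.0], [0.2, 0.9, 0.6, 0.1], [0.0, 0.7, 0.8, 0.0]]
profiles = average_profiles({"happy": happy, "sad": sad})
```

With real recordings the vectors would have all 51 slots per subject, but the averaging step stays the same.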
Good interest. All of that in Python is slow. For coordinates we could handle a Hilbert-curve division of the image to fix the origins, but to do that you would have to rewrite the image in Hilbert order, I guess. Anyway, not in Sverchok currently.
I need help understanding this.
The basic driver-and-driven setup just connects two attributes. That part is easy.
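In Blender's Python API this is typically set up with `Object.driver_add` plus a scripted expression; conceptually, the driven attribute is just a function recomputed from the driver attribute. A minimal plain-Python sketch of that idea (the class and names below are hypothetical, not a Blender API):

```python
class ScriptedDriver:
    """Toy stand-in for a Blender scripted driver: the driven value is
    recomputed from the driver value through an expression."""
    def __init__(self, expression):
        self.expression = expression  # any callable of the driver value

    def evaluate(self, driver_value):
        return self.expression(driver_value)

# Example: drive a shape-key value from some other attribute,
# clamped into the 0-1 range a blend shape coefficient expects.
shape_key_driver = ScriptedDriver(lambda x: max(0.0, min(1.0, x)))
```

The complex cases below are just richer versions of this: the expression reads many inputs instead of one.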
In Sverchok, I often see how vectors in 2D or 3D space can magically translate into textures, which means that data in space can trigger something by its pattern.
Suppose I have an array of 2D images of a head at different orientations. What would be the best mechanism to trigger and pick an image based on the 3D cursor position closest to that image?
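One simple mechanism would be a nearest-neighbor lookup: store each image together with the orientation (or position) it represents, then pick the one with the smallest Euclidean distance to the cursor. A minimal sketch, where the file names and stored orientations are made up:

```python
import math

def pick_image(cursor, images):
    """images: list of (orientation_xyz, image_name) pairs.
    Returns the name of the image whose stored orientation is
    closest (Euclidean distance) to the 3D cursor position."""
    return min(images, key=lambda item: math.dist(item[0], cursor))[1]

# Hypothetical gallery: each head render tagged with a view direction.
images = [((0.0, 0.0, 1.0), "front.png"),
          ((1.0, 0.0, 0.0), "side.png"),
          ((0.0, 0.0, -1.0), "back.png")]
```

Inside Blender, `cursor` would come from the scene's 3D cursor location; with many images a KD-tree would replace the linear `min` scan.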
I am also curious to understand: if we have 51 shapes defining facial expressions, each with a coefficient from 0 to 1, and we have data for those 51 shapes, then there is some kind of pattern, with certain thresholds, that defines each emotion. How do we control and drive things in a complex case like this?
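One simple way to turn such thresholds into something controllable is a small rule table over the coefficients. In the sketch below the blend shape names are real ARKit coefficient keys, but the thresholds and rules are purely illustrative, not calibrated values:

```python
from collections import defaultdict

# Illustrative rules over a few of the 51 ARKit coefficients (each 0-1);
# the blend shape names are real ARKit keys, the thresholds are made up.
RULES = {
    "happy":    lambda c: c["mouthSmileLeft"] > 0.5 and c["mouthSmileRight"] > 0.5,
    "surprise": lambda c: c["jawOpen"] > 0.4 and c["browInnerUp"] > 0.5,
    "angry":    lambda c: c["browDownLeft"] > 0.5 and c["browDownRight"] > 0.5,
}

def classify(coeffs):
    """Return the first emotion whose rule fires, else 'neutral'."""
    for emotion, rule in RULES.items():
        if rule(coeffs):
            return emotion
    return "neutral"

# Missing coefficients default to 0.0 so partial frames still classify.
sample = defaultdict(float, {"mouthSmileLeft": 0.8, "mouthSmileRight": 0.7})
```

Hand-written thresholds like these are exactly what a learned classifier would eventually replace, which connects to the machine-learning thought further down.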
Maybe it is the same case as when I draw a certain symbol with the Grease Pencil and want it to trigger something meaningful.
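A crude sketch of that idea: resample the drawn stroke to a fixed number of points and pick the template symbol with the smallest average point-to-point distance. This is a very simplified version of template-matching gesture recognizers; all the names and template strokes here are hypothetical:

```python
import math

def resample(points, n=16):
    """Crudely normalize a stroke to n points by index sampling
    (a real recognizer would resample by arc length and scale)."""
    step = (len(points) - 1) / (n - 1)
    return [points[round(i * step)] for i in range(n)]

def stroke_distance(a, b):
    """Average point-to-point distance between two resampled strokes."""
    ra, rb = resample(a), resample(b)
    return sum(math.dist(p, q) for p, q in zip(ra, rb)) / len(ra)

def recognize(stroke, templates):
    """templates: {symbol_name: point_list}. Returns the closest symbol,
    which could then trigger whatever action it is bound to."""
    return min(templates, key=lambda name: stroke_distance(stroke, templates[name]))

templates = {"flat": [(0, 0), (1, 0), (2, 0), (3, 0)],
             "rise": [(0, 0), (1, 1), (2, 2), (3, 3)]}
```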
Perhaps at some point we can have machine learning....