Closed philippbb closed 3 years ago
Ah sorry, I think I misunderstood something. FACSvatar needs a webcam or some kind of source to provide the AU base; it doesn't create AUs itself from emotion labels or the like.
@philippbb Currently the focus has been on animating (real-time) FACS input or responding to it. The process_facsdnnfacs module, for example, takes FACS input, puts it through a DNN, and then generates new FACS values in response.
However, the OpenFace input module is just that, a module. If you're working on a project (or find another one) that turns emotion labels into FACS values, you can still use the rest of the framework for the animation part.
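For illustration, such an emotion-to-FACS module could start as simple as a lookup table. This is only a sketch: the AU pairings follow common FACS descriptions (e.g. happiness ≈ AU6 + AU12), but the label names and intensity values below are made up and not part of FACSvatar:

```python
# Hypothetical emotion-label -> AU-intensity lookup (0.0-1.0 scale).
# The intensities are illustrative guesses, not calibrated values.
EMOTION_TO_AUS = {
    "happy": {"AU06": 0.8, "AU12": 0.9},               # cheek raiser + lip corner puller
    "sad": {"AU01": 0.6, "AU04": 0.5, "AU15": 0.7},    # inner brow raise, brow lower, lip corner depress
    "surprised": {"AU01": 0.7, "AU02": 0.7, "AU05": 0.6, "AU26": 0.8},
}

def emotion_to_facs(label: str) -> dict:
    """Return a dict of AU name -> intensity for a given emotion label."""
    try:
        return dict(EMOTION_TO_AUS[label])
    except KeyError:
        raise ValueError(f"Unknown emotion label: {label!r}")
```

A real module would likely interpolate intensities over time instead of emitting a single static dict, but the output shape (a dict of AU values) is what the rest of the pipeline consumes.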
The only thing required is to send a dict with FACS values from your new module to the process_bridge module.
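As a rough sketch of that hand-off: FACSvatar modules talk over ZeroMQ pub/sub, with messages carrying a topic, a timestamp, and a JSON payload. The topic string, payload keys, and port below are my assumptions for illustration; check the process_bridge module for the actual wire format:

```python
import json
import time

def build_facs_message(au_dict: dict, topic: str = "facs") -> list:
    """Build ZeroMQ-style multipart frames: [topic, timestamp, JSON payload].

    The topic name "facs" and the payload layout are illustrative guesses,
    not the documented FACSvatar message format.
    """
    timestamp = time.time()
    payload = {"au": au_dict, "timestamp": timestamp}
    return [
        topic.encode("ascii"),
        str(timestamp).encode("ascii"),
        json.dumps(payload).encode("utf-8"),
    ]

# With pyzmq installed, the frames could then be published, e.g.:
#   import zmq
#   socket = zmq.Context().socket(zmq.PUB)
#   socket.bind("tcp://127.0.0.1:5570")  # address/port is a guess
#   socket.send_multipart(build_facs_message({"AU06": 0.8, "AU12": 0.9}))
```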
Hi
In the description you wrote:
"Deep Neural Network generation of facial expressions for Human-Agent Interaction (See modules/process_facsdnnfacs)"
I just quickly checked out the module. How is it supposed to work for dynamic facial expressions?
Can I input an emotion label, or the corresponding FACS list, and have it output a new, more dynamic and "realistic" AU dict in (near) real time?
Thanks in advance for an answer.