-
I'm fairly new to big C++ projects and would like to ask how I would use the following code from [here](https://github.com/TadasBaltrusaitis/OpenFace/wiki/API-calls) to get facial action units:
```
/…
```
-
Hello guys, I am a Ph.D. scholar working in the domain of Cognitive Psychology at the Indian Institute of Technology Kanpur, Design Programme, India. I would like your help with one of my experiment de…
-
How can I find the corresponding locations of the 468-point face mesh on the face? E.g., (x, y, z)_1 is on the nose, (x, y, z)_400 is on the left ear, etc.
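A lookup for this usually ends up as a table from vertex index to region name. As a minimal sketch: the index-to-name pairs below follow the commonly cited MediaPipe canonical face-mesh annotations (e.g. 1 ≈ nose tip, 61/291 ≈ mouth corners), but they are assumptions here and should be verified against the canonical face model before use.

```python
# Minimal sketch: map 468-point face-mesh vertex indices to named regions.
# The specific indices are assumed from the commonly cited MediaPipe
# canonical face mesh; verify them against the canonical_face_model
# annotations before relying on them.
LANDMARK_NAMES = {
    1: "nose tip",
    33: "left eye outer corner",
    263: "right eye outer corner",
    61: "left mouth corner",
    291: "right mouth corner",
}

def locate(mesh, index):
    """Return (region name, (x, y, z)) for one vertex of a 468-point mesh."""
    name = LANDMARK_NAMES.get(index, "unnamed vertex")
    return name, mesh[index]

# Toy mesh: 468 dummy (x, y, z) coordinates standing in for real output.
mesh = [(i * 0.01, i * 0.01, 0.0) for i in range(468)]
print(locate(mesh, 1))  # ('nose tip', (0.01, 0.01, 0.0))
```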
-
Hi,
I have a file containing facial landmarks, extracted from a video. Its format is the same as the OpenFace output format. I'd like to know whether or not it is possible to use this file as an in…
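For files laid out like OpenFace's CSV output, the landmark columns can be read back generically. A hedged sketch, assuming the `x_0…x_N` / `y_0…y_N` column convention of that format (the two-landmark sample header below is illustrative; a real file has 68 landmarks plus other columns):

```python
import csv
import io

# Hedged sketch: parse a landmark file laid out like OpenFace's CSV
# output, where 2-D landmarks live in columns named x_0..x_N and y_0..y_N.
# The sample uses only two landmarks to stay short.
sample = "frame,x_0,x_1,y_0,y_1\n1,10.0,20.0,30.0,40.0\n"

def read_landmarks(text):
    """Return, per frame, a list of (x, y) landmark coordinates."""
    frames = []
    for row in csv.DictReader(io.StringIO(text)):
        xs = [float(v) for k, v in row.items() if k.startswith("x_")]
        ys = [float(v) for k, v in row.items() if k.startswith("y_")]
        frames.append(list(zip(xs, ys)))
    return frames

print(read_landmarks(sample))  # [[(10.0, 30.0), (20.0, 40.0)]]
```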
-
Hi, thanks for the processed data. I want to ask which 35 facial action units these correspond to, since FACET iMotions includes more than 35 facial action units.
https://imotions.com/blog/facial-ac…
-
Remove sub-classes `Facet`, `OpenFace`, `Affdex` and consolidate into a single `Fex` class.
# 4 key features
1. Detector class: Detects facial landmarks, AUs, emotions, from a face image or vi…
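The consolidation could be sketched as one class that records which detector produced the data rather than encoding it in a subclass. The class body, column names, and `detector` argument below are illustrative assumptions, not the real py-feat API:

```python
# Hypothetical sketch of the consolidation: a single Fex class tagged with
# its source detector, replacing the Facet/OpenFace/Affdex subclasses.
class Fex:
    def __init__(self, data, detector):
        if detector not in {"facet", "openface", "affdex"}:
            raise ValueError(f"unknown detector: {detector}")
        self.data = data          # e.g. dict mapping column -> values
        self.detector = detector  # replaces the old subclass hierarchy

    def aus(self):
        """Return only the action-unit columns (names starting with 'AU')."""
        return {k: v for k, v in self.data.items() if k.startswith("AU")}

fex = Fex({"AU01": [0.2, 0.5], "AU12": [0.0, 0.9], "frame": [0, 1]}, "openface")
print(sorted(fex.aus()))  # ['AU01', 'AU12']
```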
-
### Improvements
- Images are now automatically resized to 256×256
- No more ugly output: r, g, b are set to 0 when alpha is 0
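The second improvement amounts to zeroing the colour channels of fully transparent pixels so they carry no stray values. A minimal sketch over (r, g, b, a) tuples (a real implementation would operate on an image array):

```python
# Sketch of the fix: zero out r, g, b wherever alpha is 0, so fully
# transparent pixels no longer leak arbitrary colour values.
def clean_rgba(pixels):
    return [(0, 0, 0, 0) if a == 0 else (r, g, b, a)
            for r, g, b, a in pixels]

pixels = [(255, 10, 3, 0), (12, 34, 56, 255)]
print(clean_rgba(pixels))  # [(0, 0, 0, 0), (12, 34, 56, 255)]
```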
-
## Description
Your documentation includes a demo that uses the load_boston dataset to explain how to use the tool [here](https://causalnex.readthedocs.io/en/latest/03_tutorial/sklearn_tutorial.ht…
-
The 3D model is estimated from multiple images of a person's face. The real-time expression parameters of the face are then estimated based on the 3D model, and the expression parameters are applied t…
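The last step described above is commonly realised as a linear blend of expression bases (blendshapes) over the neutral 3D shape: vertex = neutral + Σᵢ wᵢ·deltaᵢ. A sketch with toy values, assuming that blendshape formulation (the shapes and weights are not an actual face model):

```python
# Rough sketch of applying estimated expression parameters w to a neutral
# 3-D face via linear blendshapes:  vertex = neutral + sum_i w[i] * delta[i].
def apply_expression(neutral, deltas, weights):
    blended = list(neutral)
    for w, delta in zip(weights, deltas):
        blended = [v + w * d for v, d in zip(blended, delta)]
    return blended

neutral = [0.0, 1.0, 2.0]                     # toy "vertex" coordinates
deltas = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]]   # two expression bases
print(apply_expression(neutral, deltas, [0.5, 0.25]))  # [0.5, 1.5, 2.0]
```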
-
@runnanzhou - you might have done parts of this process already, but I wanted to document the task in detail for you; it might help. Apologies, I had meant to do this earlier, but am just getting arou…