TadasBaltrusaitis / OpenFace

OpenFace – a state-of-the-art tool for facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation.

Several questions #227

Open Bilabong67 opened 7 years ago

Bilabong67 commented 7 years ago

Hi,

First of all, thanks a lot for sharing and continuing to improve OpenFace!

I have just started using it, and had several questions:

  1. Is it possible to dynamically display Action Units on a video, so that they appear over the video (or at least in a box next to it) frame-by-frame? This would be great for sanity checks, to see at a glance which AUs or emotions are correctly detected

  2. Is there a way to code for unilateral AU 12 and AU 14? I am asking because I am trying to use AUs to deduce and display emotions, and those two unilateral AUs are needed to detect Contempt

  3. OpenFace has been trained on several datasets for AUs. Is there a way to easily train it on a new one, without having to re-download the original datasets, to incrementally improve its accuracy?

  4. Can multiple faces in one video be analysed for AUs?

Many thanks!

TadasBaltrusaitis commented 7 years ago

Hi,

Glad you're finding OpenFace useful, to answer your questions:

  1. Not at the moment; I'm still working on a GUI that will allow you to do that (on Windows).
  2. Unfortunately, there are very few datasets with that information labeled, so the models are trained for bilateral AUs.
  3. Yes. However, it would not incrementally improve the accuracy; it would instead produce new models that you could use in OpenFace. For training AUs, have a look at https://github.com/TadasBaltrusaitis/OpenFace/wiki/Action-Units
  4. Not at the moment. The issue is that the system performs person-specific normalization, and if people are moving around or switching places in a multi-face setting, that would cause problems.
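In the meantime, a rough frame-by-frame sanity check can be done by scanning the CSV that the command-line FeatureExtraction tool writes and listing which AUs are active in each frame. A minimal sketch, assuming intensity columns named like `AU12_r` (column names and scales may vary between OpenFace versions, and some versions pad column names with spaces, hence the defensive `strip()`):

```python
import csv

def active_aus(row, threshold=1.0):
    """Return {AU column: intensity} for AUs whose intensity exceeds threshold.

    Assumes OpenFace-style intensity columns such as 'AU12_r' (roughly a
    0-5 scale); adjust the threshold and names for your OpenFace version.
    """
    out = {}
    for name, value in row.items():
        name = name.strip()
        if name.startswith("AU") and name.endswith("_r"):
            v = float(value)
            if v >= threshold:
                out[name] = v
    return out

# Tiny synthetic example standing in for a real FeatureExtraction CSV.
demo = "frame,AU06_r,AU12_r\n1,0.2,2.5\n2,1.4,0.0\n"
rows = list(csv.DictReader(demo.splitlines()))
for row in rows:
    print(row["frame"], active_aus(row))
# prints: 1 {'AU12_r': 2.5}
#         2 {'AU06_r': 1.4}
```

Once the GUI exists this becomes unnecessary, but printing (or logging) per-frame active AUs next to the video is a quick way to spot obviously wrong detections.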

Thanks, Tadas

Bilabong67 commented 7 years ago

Thank you Tadas!

  1. Regarding GUI, do you by any chance have an expected release date in mind?

  2. For coding Contempt, do you think facial landmarks could be used to deduce unilateral movements? E.g. "Consider Contempt present if landmark 49 is X% higher than landmark 55", i.e. using the lip-corner landmarks?

  3. Thank you - I was asking because the tutorial said we needed the datasets mentioned there to perform the training. Now I understand you provide the corresponding extracted features, which can be used directly without re-downloading the datasets :)

  4. Just out of curiosity: as OpenFace can recognise people's identities, couldn't this feature be used to track people and perform person-specific normalisation in multi-face settings for AU coding?

TadasBaltrusaitis commented 7 years ago

Hi,

To answer your questions:

  1. I don't have a specific date, but I'm really hoping to have a basic version of the GUI out by the end of the summer.
  2. It might be possible, especially for frontal faces. However, this becomes a bit trickier when people are not looking at the camera. If you have a dataset of contempt examples, you could also train a classifier that uses both the facial landmarks and the appearance of the image.
  4. This OpenFace can't recognize people's identities, but there is another project (also called OpenFace) that can. Combining them would allow for person-specific AU coding in a multi-person setup.
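A minimal sketch of that lip-corner heuristic, for frontal faces only. It uses 0-based indices into the 68-point landmark scheme (so the 1-based landmarks 49/55 mentioned above are indices 48/54 here), and normalises by inter-ocular distance (outer eye corners, indices 36/45) so the score is scale-invariant; the indices and any threshold you pick are assumptions to verify and tune:

```python
def contempt_asymmetry(landmarks):
    """Asymmetry score for a unilateral lip-corner raise (rough heuristic).

    `landmarks` is a list of 68 (x, y) points, 0-indexed, with image
    coordinates (y grows downward).  Indices 48/54 are the lip corners
    and 36/45 the outer eye corners.  Returns the vertical offset between
    the lip corners divided by inter-ocular distance: 0 for a symmetric
    mouth, positive when the right corner is raised, negative for the left.
    """
    lx, ly = landmarks[48]                # left lip corner
    rx, ry = landmarks[54]                # right lip corner
    ex, ey = landmarks[36]                # left outer eye corner
    fx, fy = landmarks[45]                # right outer eye corner
    interocular = ((fx - ex) ** 2 + (fy - ey) ** 2) ** 0.5
    return (ly - ry) / interocular

# Demo with synthetic coordinates (image y axis grows downward):
pts = [(0.0, 0.0)] * 68
pts[36], pts[45] = (0.0, 0.0), (100.0, 0.0)     # outer eye corners
pts[48], pts[54] = (30.0, 60.0), (70.0, 50.0)   # lip corners, right one raised
print(contempt_asymmetry(pts))  # 0.1 -> right corner higher by 10% of eye width
```

This only captures the unilateral geometry; for out-of-plane head poses you would want to correct for pose first (or train a classifier on landmarks plus appearance, as suggested above).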

Thanks, Tadas