AFAR: A Deep Learning Based Toolbox for Automated Facial Affect Recognition

Update :exclamation:

This version of AFAR is implemented in MATLAB; a newer Python version, named PyAFAR, is now available.

Try our new, improved, faster, and dependency-free GUI version, PyAFAR.

MATLAB AFAR is still functional, but it is no longer maintained; PyAFAR replaces it.


Automated Facial Affect Recognition (AFAR)

Automated measurement of face and head dynamics, detection of facial action units and expression, and affect detection are crucial to multiple domains (e.g., health, education, entertainment). Commercial tools are available but costly and of unknown validity. Open-source ones lack user-friendly GUIs for use by non-programmers. For both types, evidence of domain transfer and options for retraining for use in new domains are typically lacking. Deep approaches have two key advantages: they typically outperform shallow ones for facial affect recognition, and their pretrained models can be fine-tuned with new datasets to optimize performance.

AFAR is an open-source, deep-learning-based, user-friendly tool for automated facial affect recognition. It consists of a pipeline with four components: (i) face tracking, (ii) face registration, (iii) action unit (AU) detection, and (iv) visualization. In addition, a fine-tuning component allows users to fine-tune the pretrained AU detection models with their own datasets. AFAR has been used in comparative studies of action unit detectors [1], [2], to investigate cross-domain generalizability [3], to assess treatment response to deep brain stimulation (DBS) for treatment-resistant obsessive-compulsive disorder [4], and to explore facial dynamics in young children [5] and in adults in treatment for depression [6], among other research.

(Figure: the AFAR pipeline.)

Required

Last validated system configuration

Run sample_afar_v1.m to validate the end-to-end operation of the toolbox. Make sure to set the MATLAB path to include the AFARtoolbox and mexopencv dependencies before you run it; a minimal setup is sketched below.
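For example (a sketch; the folder locations below are assumptions, so point them at your local copies):

% Add the AFAR toolbox and the mexopencv wrapper to the MATLAB path.
addpath(genpath('C:\code\AFARtoolbox'));   % example location; includes all modules
addpath('C:\code\mexopencv');              % example location of mexopencv
sample_afar_v1                             % run the end-to-end validation script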

Modules

ZFace

Folder structure:

Using the SDK

The SDK is organized into classes. The CZFace class (located in \ZFace_Src) is the main interface to the tracker. Create an instance of the tracker:

zf = CZFace('.\ZFace_models\zf_ctrl49_mesh512.mat');

And track a given image "I":

[ ctrl2D, mesh2D, mesh3D, pars ] = zf.Fit( I, [] );
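To process a whole video rather than a single image, you can re-apply the same call per frame. A minimal sketch using MATLAB's built-in VideoReader (the file name below is a placeholder):

vr = VideoReader('my_video.mp4');                     % placeholder file name
while hasFrame(vr)
    I = readFrame(vr);
    [ctrl2D, mesh2D, mesh3D, pars] = zf.Fit(I, []);   % same call as above
    % ... collect or visualize the per-frame results here ...
end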

The current version of ZFace has the following output:

For more details, please refer to the included demo files:

The SDK uses the mexopencv wrapper (https://github.com/kyamagu/mexopencv). It has been compiled with 64-bit OpenCV 2.4.11 for 64-bit Windows. The DLLs are in the ".\opencv_2.4.11_x64_vc11_dlls\" folder. They have to be included in the system path (Start menu: Environment Variables -> System variables -> Path).
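Alternatively, the DLL folder can be prepended to the path for the current MATLAB session only, without editing the system-wide environment variables (a sketch; adjust dllDir to wherever you unpacked the toolbox):

% Make the OpenCV DLLs visible to this MATLAB session only.
dllDir = fullfile(pwd, 'opencv_2.4.11_x64_vc11_dlls');
setenv('PATH', [dllDir ';' getenv('PATH')]);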

Troubleshooting

If you get an "Invalid MEX-file" error from the mexopencv wrapper, try recompiling it:

mexopencv.make('clean',true)
mexopencv.make

FETA

AU Detector

This code uses the output of FETA, which is run with the following parameters:

You can run the code on the sample video sample_video_norm.mp4, whose frames are 200 x 200 x 3.

The probabilities of 12 AUs are saved to a file named sample_video_norm_result.mat.

These AUs are: AU1, AU2, AU4, AU6, AU7, AU10, AU12, AU14, AU15, AU17, AU23 and AU24.
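As a quick sanity check, you can load and plot the saved probabilities in MATLAB. A sketch, assuming the .mat file stores a frames-by-12 matrix (the variable name probs below is hypothetical, so inspect the file contents first):

S = load('sample_video_norm_result.mat');
disp(fieldnames(S))          % check the actual variable name stored in the file
probs = S.probs;             % assumed name; expected shape: numFrames x 12
plot(probs(:, 7));           % AU12 is the 7th AU in the list above
xlabel('Frame'); ylabel('AU12 probability');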

AFAR Finetune

The AFAR Finetune module is written in PyTorch. With this module, you can fine-tune pretrained AU detector models trained on the EB+ dataset. You can also obtain AU probabilities for frames or videos. The fine-tuning code and models can be found in AFAR_finetune/codes. You can run the following command:

python afar_finetune.py

We share two types of pretrained models:

To finetune:

To test a video:

To test your dataset using frames and a .txt file containing the paths and names of frames:

Run the example code:

Citation

If you use any of the resources provided on this page, please cite the pipeline paper and papers relevant to the components you used:

Pipeline:

@inproceedings{ertugrul2019afar,
  title={AFAR: A Deep Learning Based Tool for Automated Facial Affect Recognition},
  author={Onal Ertugrul, Itir and Jeni, L{\'a}szl{\'o} A and Ding, Wanqiao and Cohn, Jeffrey F},
  booktitle={2019 14th IEEE International Conference on Automatic Face \& Gesture Recognition (FG 2019)},
  year={2019},
  organization={IEEE}
}

AU detector and AFAR finetune:

@inproceedings{ertugrul2019cross,
  title={Cross-domain AU Detection: Domains, Learning Approaches, and Measures},
  author={Onal Ertugrul, Itir and Cohn, Jeffrey F and Jeni, L{\'a}szl{\'o} A and Zhang, Zheng and Yin, Lijun and Ji, Qiang},
  booktitle={2019 14th IEEE International Conference on Automatic Face \& Gesture Recognition (FG 2019)},
  year={2019},
  organization={IEEE}
}

ZFace:

@article{jeni2017dense,
  title={Dense 3d face alignment from 2d video for real-time use},
  author={Jeni, L{\'a}szl{\'o} A and Cohn, Jeffrey F and Kanade, Takeo},
  journal={Image and Vision Computing},
  volume={58},
  pages={13--24},
  year={2017},
  publisher={Elsevier}
}

@inproceedings{jeni2015dense,
  title={Dense 3D face alignment from 2D videos in real-time},
  author={Jeni, L{\'a}szl{\'o} A and Cohn, Jeffrey F and Kanade, Takeo},
  booktitle={2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG)},
  year={2015},
  organization={IEEE}
}

Links to the papers

Cross-domain AU detection: Domains, Learning Approaches, and Measures

AFAR: A Deep Learning Based Tool for Automated Facial Affect Recognition

Dense 3d face alignment from 2d video for real-time use

Dense 3D face alignment from 2D videos in real-time

Use AFAR GUI

Make sure to run pipelineMain.m from the directory that contains each module's folder (that is the default module location); otherwise, you will have to check and manually change the locations of the ZFace/FETA/AUDetector directories. A minimal pre-flight check is sketched below.
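A sketch of that check (the toolbox root is a placeholder, and the folder names follow the ZFace/FETA/AUDetector naming above; confirm them against your checkout):

cd('C:\code\AFARtoolbox');                 % placeholder toolbox root
% Verify the module folders are where pipelineMain.m expects them.
assert(exist('ZFace', 'dir') == 7 && exist('FETA', 'dir') == 7 && ...
       exist('AUDetector', 'dir') == 7, ...
       'Module folders not found; adjust the directory locations.');
pipelineMain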

License

AFAR is freely available for non-commercial use and may be redistributed under these conditions. Please see the license for further details. Interested in a commercial license? Please contact Jeffrey Cohn.

Infant AFAR is freely available for non-commercial use and may be redistributed under these conditions. Please see the license for further details. Interested in a commercial license? Please contact Jeffrey Cohn.