[ACM MM22] Mimicking the Annotation Process for Recognizing the Micro Expressions

[Figure: model overview]

Introduction

This repo contains the source code for the paper "Mimicking the Annotation Process for Recognizing the Micro Expressions" (ACM MM 2022).

Installation

Preprocessing

Face cropping

# Crop the CASME II dataset
#   We use OpenFace to crop the CASME II dataset
#   Refer to: https://github.com/TadasBaltrusaitis/OpenFace 

# Crop the SAMM dataset
$ python utils/samm_crop.py \
  --ori_path <original SAMM root> \
  --new_path <new path to save the cropped images>
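
For intuition, below is a minimal Python sketch of the face-cropping idea using OpenCV's Haar-cascade detector. It is only an illustrative stand-in for the OpenFace / utils/samm_crop.py pipeline used in this repo; the function name crop_faces, the margin-free crop, and the 224x224 output size are our assumptions, not the repo's actual settings.

import os
import glob
import cv2

# Illustrative only: a naive Haar-cascade face crop, not the OpenFace-based
# pipeline used in this repo.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_faces(ori_path, new_path, size=224):
    for img_path in glob.glob(os.path.join(ori_path, "**", "*.jpg"), recursive=True):
        img = cv2.imread(img_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue  # skip frames where no face is detected
        x, y, w, h = faces[0]
        face = cv2.resize(img[y:y + h, x:x + w], (size, size))
        out_path = os.path.join(new_path, os.path.relpath(img_path, ori_path))
        os.makedirs(os.path.dirname(out_path), exist_ok=True)
        cv2.imwrite(out_path, face)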

Magnify the image frames

We use Learning-based Motion Magnification to amplify the motion in our image frames. Follow its setup instructions and install its requirements (we recommend creating a separate environment rather than mixing it with the previous one). Then copy the files from the magnifed folder in this repo into the deep_motion_mag folder.

# Set up deep_motion_mag
$ <clone the deep_motion_mag and setup the environment>

# Create magnified frame for CASME II and SAMM
$ python get_casme_img.py \
  --img_root <img root for CASME II dataset> \
  --csv_file ../csv_file/CASME.csv

$ python get_samm_img.py \
  --img_root <img root for SAMM dataset> \
  --csv_file ../csv_file/SAMM.csv
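
To build intuition for what this step produces, the snippet below applies a simple linear (Eulerian-style) amplification of the change between a reference frame and a later frame. This is only a conceptual approximation; the repo uses the learned deep_motion_mag model, and the file names and amplification factor here are made up.

import cv2
import numpy as np

# Conceptual linear magnification: amplify the pixel-wise change between a
# reference (e.g. onset) frame and a later frame. The repo uses the learned
# deep_motion_mag model instead; this only approximates the idea.
def linear_magnify(ref_path, frame_path, alpha=4.0):
    ref = cv2.imread(ref_path).astype(np.float32)
    frame = cv2.imread(frame_path).astype(np.float32)
    magnified = ref + alpha * (frame - ref)
    return np.clip(magnified, 0, 255).astype(np.uint8)

# Example (hypothetical paths):
# cv2.imwrite("magnified.jpg", linear_magnify("onset.jpg", "apex.jpg"))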

Create the On-A and Off-A frames

$ python utils/crop_face_optical.py \
  --file_name csv_file/CASME.csv \
  --img_root <img root of csv file> \
  --catego casme

$ python utils/crop_face_optical.py \
  --file_name csv_file/SAMM.csv \
  --img_root <img root of csv file> \
  --catego samm  
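
If On-A and Off-A denote flow-based frames computed against the apex frame (our reading of the naming; utils/crop_face_optical.py is the authoritative source), the sketch below shows how such a dense optical-flow field could be computed with OpenCV's Farneback estimator. The function name and paths are illustrative assumptions, not the script's actual implementation.

import cv2

# Dense optical flow between two cropped face frames (e.g. onset -> apex),
# computed with the Farneback method. Illustrative only; the repo's
# utils/crop_face_optical.py may differ.
def compute_flow(frame_a_path, frame_b_path):
    a = cv2.cvtColor(cv2.imread(frame_a_path), cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(cv2.imread(frame_b_path), cv2.COLOR_BGR2GRAY)
    # Args: prev, next, flow, pyr_scale, levels, winsize, iterations,
    #       poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(a, b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return flow  # H x W x 2 array of (dx, dy) displacements

# Example (hypothetical paths):
# flow = compute_flow("onset.jpg", "apex.jpg")
# magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])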

Training

The training scripts are located in the train_bash folder. Users should first change the image root in each bash file.

# Train the five-class and three-class models on CASME II and SAMM
$ bash train_bash/five_casme.sh
$ bash train_bash/five_samm.sh
$ bash train_bash/three_casme.sh
$ bash train_bash/three_samm.sh

Testing

To test with our trained weights, users should first download the files listed in the Installation section.

# Test for the five-class CASME II
$ python test.py \
  --img_root <img root of CASME II> \
  --csv_file csv_file/sep_five_casme.csv \
  --weight_root weight/five_casme_best \
  --catego casme \
  --num_classes 5

# Test for the five-class SAMM
$ python test.py \
  --img_root <img root of SAMM> \
  --csv_file csv_file/sep_five_samm.csv \
  --weight_root weight/five_samm_best \
  --catego samm \
  --num_classes 5

# Test for the three-class CASME II
$ python test.py \
  --img_root <img root of CASME II> \
  --csv_file csv_file/sep_three_casme.csv \
  --weight_root weight/three_casme_best \
  --catego casme \
  --num_classes 3

# Test for the three-class SAMM
$ python test.py \
  --img_root <img root of SAMM> \
  --csv_file csv_file/sep_three_samm.csv \
  --weight_root weight/three_samm_best \
  --catego samm \
  --num_classes 3

Our results are shown in the table below:

| Class Type       | CASME II Acc | CASME II F1 | SAMM Acc | SAMM F1 |
|------------------|--------------|-------------|----------|---------|
| Five Categories  | 0.8333       | 0.8267      | 0.7941   | 0.7582  |
| Three Categories | 0.932        | 0.925       | 0.865    | 0.816   |

Visualization

Distribution Plot

CASME II

[Figure: CASME II distribution plot]

SAMM

[Figure: SAMM distribution plot]

Citation

@inproceedings{ruan2022mimicking,
  title={Mimicking the Annotation Process for Recognizing the Micro Expressions},
  author={Ruan, Bo-Kai and Lo, Ling and Shuai, Hong-Han and Cheng, Wen-Huang},
  booktitle={ACM International Conference on Multimedia},
  year={2022}
}