FERAtt: Facial Expression Recognition with Attention Net

License: MIT

This repository is under construction ...

Paper | arXiv

Pedro D. Marrero Fernandez, Fidel A. Guerrero-Peña, Tsang Ing Ren, Alexandre Cunha

Introduction

PyTorch implementation of the FERAtt neural network. Facial Expression Recognition with Attention Net (FERAtt) is based on a dual-branch architecture and consists of four major modules: (i) an attention module $G_{att}$ to extract the attention feature map, (ii) a feature extraction module $G_{ft}$ to obtain essential features from the input image $I$, (iii) a reconstruction module $G_{rec}$ to estimate a good attention image $I_{att}$, and (iv) a representation module $G_{rep}$ that is responsible for the representation and classification of the facial expression image.
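The sketch below shows how these four modules could be wired together in PyTorch. The module names follow the notation above, but the class name `FERAttSketch` and the layer choices are simplified assumptions for illustration, not the architecture used in this repository.

```python
# Minimal, illustrative sketch of the dual-branch layout described above.
# Layer choices are assumptions; the real modules are defined in this repo.
import torch
import torch.nn as nn

class FERAttSketch(nn.Module):
    def __init__(self, num_classes=8, dim=32):
        super().__init__()
        # (i) attention module G_att: produces an attention feature map
        self.g_att = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
        # (ii) feature extraction module G_ft: essential features from I
        self.g_ft = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )
        # (iii) reconstruction module G_rec: estimates the attention image I_att
        self.g_rec = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )
        # (iv) representation module G_rep: embeds and classifies I_att
        self.g_rep = nn.Sequential(
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(3 * 8 * 8, dim), nn.ReLU(),
            nn.Linear(dim, num_classes),
        )

    def forward(self, img):
        att_map = self.g_att(img)            # attention feature map
        feats = self.g_ft(img)               # extracted features
        i_att = self.g_rec(att_map * feats)  # attention image I_att
        logits = self.g_rep(i_att)           # expression logits
        return i_att, logits
```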

Prerequisites

Installation

$ git clone https://github.com/pedrodiamel/pytorchvision.git
$ cd pytorchvision
$ python setup.py install
$ pip install -r installation.txt

Docker:

docker build -f "Dockerfile" -t feratt:latest .
./run_docker.sh

Visualize results with Visdom

We now support Visdom for real-time loss visualization during training!

To use Visdom in the browser:

# First install Python server and client
pip install visdom
# Start the server (probably in a screen or tmux)
python -m visdom.server -env_path runs/visdom/
# http://localhost:8097/
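As a quick sanity check that the server is reachable, a scalar loss can be streamed to a line plot with the snippet below. The environment name `feratt` and window name `train_loss` are arbitrary assumptions; the repository's training code handles its own logging.

```python
# Hedged example: append (step, loss) points to a Visdom line plot.
import numpy as np
import visdom

vis = visdom.Visdom(server='http://localhost', port=8097, env='feratt')

def log_loss(step, loss):
    # The window is created on the first call and appended to afterwards.
    vis.line(
        X=np.array([step]),
        Y=np.array([loss]),
        win='train_loss',
        update='append' if step > 0 else None,
        opts=dict(title='training loss', xlabel='iteration', ylabel='loss'),
    )

# e.g., inside the training loop: log_loss(step, loss.item())
```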

How to use

Step 1: Train

./train_bu3dfe.sh
./train_ck.sh
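Both scripts wrap the repository's training entry point for their respective datasets. For orientation only, a hypothetical training step combining a reconstruction term on the attention image with a classification term might look like the sketch below; the helper name `train_step`, the loss weighting, and the use of the `FERAttSketch` model from the Introduction are assumptions, not what the scripts actually run.

```python
# Hypothetical single training step for the dual-branch objective.
import torch.nn.functional as F

def train_step(model, optimizer, img, img_clean, label, rec_weight=0.5):
    """img: network input, img_clean: reconstruction target for the attention
    image, label: expression class indices."""
    i_att, logits = model(img)                 # e.g. the FERAttSketch above
    loss_rec = F.mse_loss(i_att, img_clean)    # attention-image reconstruction
    loss_cls = F.cross_entropy(logits, label)  # expression classification
    loss = rec_weight * loss_rec + loss_cls
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```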

Citation

If you find this useful for your research, please cite the following paper.

@InProceedings{Fernandez_2019_CVPR_Workshops,
author = {Marrero Fernandez, Pedro D. and Guerrero Pena, Fidel A. and Ing Ren, Tsang and Cunha, Alexandre},
title = {FERAtt: Facial Expression Recognition With Attention Net},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}

Acknowledgments

We gratefully acknowledge financial support from the Brazilian government agency FACEPE.