sucv / ABAW3

We achieved 2nd and 3rd place in ABAW3 and ABAW5, respectively.

During my PhD I struggled to find an end-to-end working pipeline for my emotion recognition project. I was new to deep learning, I had no seniors to follow, and the ER community is not as popular or open as others. Then a good soul shared his code and models with me; I was saved and survived my PhD. I hope the code and model states here can be helpful to other lost souls.

Conda environment

```shell
conda create --name abaw2 pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
pip install tqdm matplotlib scipy pandas
```
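As a quick sanity check that the environment is complete, the required imports can be probed programmatically. The package list simply mirrors the commands above; this helper is not part of the repo:

```python
# Check that every package the pipeline needs is importable.
# The list mirrors the conda/pip install commands above.
import importlib.util

def missing_packages(packages):
    """Return the subset of `packages` that cannot be imported."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

required = ["torch", "torchvision", "torchaudio", "tqdm",
            "matplotlib", "scipy", "pandas"]

if __name__ == "__main__":
    gaps = missing_packages(required)
    print("all packages found" if not gaps else f"missing: {gaps}")
```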

Code for preprocessing

url

Model state dict

url

Specify the settings

In main.py:

Run the code

Usually, with the default settings in main.py correctly set, all you need to type is the command below.

```shell
python main.py -folds_to_run 0 -emotion "valence" -stamp "cv"
```

Of course, if you have more machines available, you can run one fold on each machine.

Note that a single fold may take 1-2 days, so the following three-fold command may take up to 6 days to complete:

```shell
python main.py -folds_to_run 0 1 2 -emotion "valence" -stamp "cv"
```
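When spreading folds across machines, it can be convenient to generate the per-fold command lines programmatically. A small sketch follows; the flags match those above, but the helper itself is hypothetical, not part of the repo:

```python
# Hypothetical helper: build one training command per fold, so each
# can be pasted onto a separate machine (or run sequentially on one).
def fold_command(fold, emotion="valence", stamp="cv"):
    return (f'python main.py -folds_to_run {fold} '
            f'-emotion "{emotion}" -stamp "{stamp}"')

if __name__ == "__main__":
    for fold in range(3):  # folds 0, 1, 2
        print(fold_command(fold))
```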

Sometimes a run stops unexpectedly. To continue from the latest epoch, add `-resume 1` to the command you were running, like:

```shell
python main.py -folds_to_run 0 -emotion "valence" -stamp "cv" -resume 1
```
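Under the hood, resuming this way typically means each epoch writes a checkpoint and `-resume 1` reloads the newest one. A minimal illustration of locating that checkpoint — the file names and layout here are assumptions, not this repo's actual code:

```python
# Illustrative sketch of epoch-level resuming: each epoch is assumed to
# write a checkpoint named "epoch_N.pth", and resuming reloads the newest.
import os
import re

def latest_checkpoint(save_path):
    """Return (epoch, filename) of the newest 'epoch_N.pth' file, or None."""
    pattern = re.compile(r"epoch_(\d+)\.pth$")
    candidates = []
    for name in os.listdir(save_path):
        m = pattern.match(name)
        if m:
            candidates.append((int(m.group(1)), name))
    return max(candidates) if candidates else None
```

`latest_checkpoint` returns `(epoch, filename)` so the caller can reload that state dict and continue training from `epoch + 1`.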

Collect the results

The results will be saved under the path you specify with `-save_path`, and include: