av-savchenko / face-emotion-recognition

Efficient face emotion recognition in photos and videos
Apache License 2.0

Valence and arousal #24

Closed AmaiaBiomedicalEngineer closed 1 year ago

AmaiaBiomedicalEngineer commented 1 year ago

Hello again! I've read your paper and I've seen that you use the circumplex model's variables arousal and valence. How do those variables appear in the code? I can't find them :( Thank you, Amaia

av-savchenko commented 1 year ago

Hello! Please take a look at the section "Multi-task: FER+Valence-Arousal" of train_emotions-pytorch.ipynb. Starting from the line PATH='../models/affectnet_emotions/enet_b0_8_va_mtl.pt', you can load the model and run it. The first 8 outputs correspond to logits for facial expressions, and the last two outputs stand for valence and arousal. The metrics on the validation set of AffectNet are computed in the last two lines of this section, right before "Example usage". BTW, in this example you can see the predicted valence and arousal in the titles of photos of my children. But I should say that my estimates of valence and arousal are not the best-of-the-best; I just used them for multi-task learning and for improving the facial embeddings learned by the model.
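To make the output layout concrete, here is a minimal sketch of how the 10-dimensional output of the multi-task model could be split. The model loading itself follows the notebook (torch.load on the PATH above); the helper name split_mtl_output and the simulated random output tensor are illustrative assumptions, not part of the repository.

```python
import torch

NUM_EXPRESSIONS = 8  # the enet_b0_8_va_mtl model predicts 8 expression classes

def split_mtl_output(output: torch.Tensor):
    """Split a (batch, 10) multi-task output into expression logits,
    valence and arousal, per the maintainer's description above.
    (Hypothetical helper for illustration.)"""
    logits = output[:, :NUM_EXPRESSIONS]       # first 8 values: expression logits
    valence = output[:, NUM_EXPRESSIONS]       # 9th value: valence
    arousal = output[:, NUM_EXPRESSIONS + 1]   # 10th value: arousal
    return logits, valence, arousal

# In the notebook the real model would be loaded like:
#   model = torch.load('../models/affectnet_emotions/enet_b0_8_va_mtl.pt')
#   output = model(preprocessed_face_batch)
# Here we simulate the output for a batch of 2 faces:
output = torch.randn(2, NUM_EXPRESSIONS + 2)
logits, valence, arousal = split_mtl_output(output)
print(logits.shape, valence.shape, arousal.shape)
```

Running this prints `torch.Size([2, 8]) torch.Size([2]) torch.Size([2])`, i.e. one 8-way expression prediction plus a scalar valence and arousal per face.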

av-savchenko commented 1 year ago

Closing due to inactivity