zyh-uaiaaaa / Erasing-Attention-Consistency

Official implementation of the ECCV2022 paper: Learn From All: Erasing Attention Consistency for Noisy Label Facial Expression Recognition

About the pre-trained models #10

Closed balabala-h closed 1 year ago

balabala-h commented 1 year ago

Hi, it seems that the pre-trained models are not available anymore. I have tried to train this model on RAF-DB without the pre-trained models, but I got an unsatisfactory result. Could you please provide the results without pre-training?

zyh-uaiaaaa commented 1 year ago

You can find the pretrained model here: https://drive.google.com/file/d/1yQRdhSnlocOsZA4uT_8VO0-ZeLXF4gKd/view?usp=sharing
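For anyone unsure how to use the downloaded file: here is a minimal loading sketch, assuming the checkpoint is a plain PyTorch `state_dict` for a torchvision ResNet-50 backbone. The filename and backbone choice are assumptions for illustration; the repo's own training scripts may load it differently.

```python
import torch
from torchvision.models import resnet50

# Hypothetical loading sketch -- assumes the downloaded checkpoint
# (here named resnet50_pretrained.pth) is a plain state_dict.
backbone = resnet50(num_classes=7)  # 7 basic expression classes in RAF-DB
state = torch.load('resnet50_pretrained.pth', map_location='cpu')

# strict=False tolerates classifier-head keys that do not match the
# checkpoint, which is common when reusing a pretrained backbone
missing, unexpected = backbone.load_state_dict(state, strict=False)
print(f'missing keys: {len(missing)}, unexpected keys: {len(unexpected)}')
```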

Yes, pretraining is important for performance, as the attention consistency module needs a relatively strong backbone to compute meaningful attention maps. Without pretraining, the accuracy on RAF-DB with 10%, 20%, and 30% noise is around 73.36%, 71.21%, and 68.20%, respectively.
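To illustrate why backbone strength matters: the attention maps used for the consistency loss are typically derived CAM-style from the last convolutional features and the classifier weights, so a weak backbone yields noisy maps. This is a hedged sketch of that computation, not the repo's exact code; the function name and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

# Hypothetical CAM-style attention map, for illustration only.
#   features:  (B, C, H, W) from the backbone's last conv stage
#   fc_weight: (num_classes, C) classifier weights
#   labels:    (B,) target class indices
def attention_map(features, fc_weight, labels):
    w = fc_weight[labels]                          # (B, C) per-sample class weights
    cam = torch.einsum('bc,bchw->bhw', w, features)
    cam = F.relu(cam)                              # keep positive evidence only
    # Normalize each map to [0, 1]; a weak backbone gives noisy maps here,
    # which degrades the erasing / consistency signal.
    flat = cam.flatten(1)
    lo = flat.min(1, keepdim=True).values
    hi = flat.max(1, keepdim=True).values
    flat = (flat - lo) / (hi - lo + 1e-8)
    return flat.view(features.size(0), *features.shape[2:])
```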