zyh-uaiaaaa / Erasing-Attention-Consistency

Official implementation of the ECCV2022 paper: Learn From All: Erasing Attention Consistency for Noisy Label Facial Expression Recognition

Question about the reuse of view1 features for CAM computation #6

Closed nlgranger closed 1 year ago

nlgranger commented 1 year ago

Your paper was an interesting read.

It seems the model only applies the FC layer to one view's features and reuses its weights for the other views when computing the corresponding attention maps.

Note that the weights used to compute the attention maps come from the FC layer.

Why not compute the FC layer separately on each view?
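
For reference, here is a minimal sketch of the usual CAM-from-FC-weights construction being discussed, assuming a PyTorch setup; the backbone, the 7-class head, the input shapes, and the variable names are illustrative only, not the repository's actual code.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 512, 3, padding=1))  # stand-in for the real encoder
fc = nn.Linear(512, 7)                                      # single classifier (7 classes assumed)

x1 = torch.randn(8, 3, 224, 224)   # view 1
x2 = torch.randn(8, 3, 224, 224)   # view 2 (e.g. the flipped/erased view)

feat1 = backbone(x1)               # B x 512 x H x W feature maps
feat2 = backbone(x2)

# In this sketch, the classification logits are computed on view 1 only.
logits1 = fc(feat1.mean(dim=(2, 3)))

# CAM-style attention maps: weight the feature channels by each class's FC
# weights; the same FC weights are reused for the second view.
cam1 = torch.einsum('kc,bchw->bkhw', fc.weight, feat1)
cam2 = torch.einsum('kc,bchw->bkhw', fc.weight, feat2)
```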

zyh-uaiaaaa commented 1 year ago

Hello nlgranger,

We design an imbalanced framework and only compute the classification loss with one stream of data, so we have a single FC layer to compute the attention maps.

Of course, you could also use another FC layer to compute the classification loss on the other stream of data, forming two imbalanced frameworks. I think investigating the interaction between the two streams is an interesting idea.
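
As a rough illustration of that alternative, a symmetric two-classifier variant could look like the sketch below; `fc_a`, `fc_b`, and the rest of the setup are hypothetical names for this sketch, not part of the released code or the paper's method.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 512, 3, padding=1))   # illustrative encoder
fc_a, fc_b = nn.Linear(512, 7), nn.Linear(512, 7)           # one classifier per stream

x1, x2 = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
feat1, feat2 = backbone(x1), backbone(x2)

logits_a = fc_a(feat1.mean(dim=(2, 3)))   # classification loss on stream A
logits_b = fc_b(feat2.mean(dim=(2, 3)))   # classification loss on stream B

# Each FC layer's weights produce the attention maps for its own stream.
cam_a = torch.einsum('kc,bchw->bkhw', fc_a.weight, feat1)
cam_b = torch.einsum('kc,bchw->bkhw', fc_b.weight, feat2)
# A consistency loss could then be computed between cam_a and cam_b.
```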

nlgranger commented 1 year ago

All right, thank you.