AmirMansurian / AttnFD

Attention-guided Feature Distillation for Semantic Segmentation

Why can't the CBAM parameters be trained? #2

Open thb1314 opened 4 months ago

thb1314 commented 4 months ago

After reading your paper and code, I found that the parameters of the CBAM module cannot be trained. Why is that?

AmirMansurian commented 4 months ago

Hi

For the teacher model, the whole network, including its CBAM parameters, is trained. During the student's training, all teacher parameters are frozen and only the student's parameters, including its attention module, are trained. The CBAM module is defined inside the DeepLab model, so its weights are updated during backpropagation like every other parameter of the network.
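
To make the freezing explicit, here is a minimal PyTorch sketch of that setup. It is not the repository's actual training code: `build_deeplab_with_cbam`, the MSE loss, and the hyperparameters are placeholders for illustration only.

```python
# Minimal sketch: frozen teacher, trainable student (CBAM weights included).
import torch
import torch.nn as nn

def build_deeplab_with_cbam() -> nn.Module:
    # Placeholder for a DeepLab model that contains a CBAM submodule;
    # any module tree works the same way for this demonstration.
    return nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 21, 1))

teacher = build_deeplab_with_cbam()   # pretrained teacher (weights loaded elsewhere)
student = build_deeplab_with_cbam()   # student to be trained

# Freeze every teacher parameter, its CBAM weights included.
for p in teacher.parameters():
    p.requires_grad = False
teacher.eval()

# Only the student's parameters are given to the optimizer, so the student's
# CBAM weights receive gradients and are updated like any other layer.
optimizer = torch.optim.SGD(student.parameters(), lr=0.01, momentum=0.9)

x = torch.randn(2, 3, 64, 64)
with torch.no_grad():
    teacher_out = teacher(x)          # frozen teacher, no gradient graph
student_out = student(x)              # student forward pass tracks gradients

loss = nn.functional.mse_loss(student_out, teacher_out)  # stand-in distillation loss
loss.backward()
optimizer.step()
```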

These details are available in the paper's implementation details section.

Please let me know if there is any ambiguity.