xzz777 / SCTNet

Official implementation of SCTNet (AAAI2024)

More details on the Training Pipeline. #22

Open AntonioConsiglio opened 3 months ago

AntonioConsiglio commented 3 months ago

Hello everyone, I want to express my gratitude for your efforts.

I'm having trouble understanding the training pipeline, especially since you're using mmcv as the training manager. In your code, you compute the Cross-Entropy Loss (CELoss) after the SCTNet heads, and the AlignLoss, which only considers the SCTNet backbone features (x2 and x7).

https://github.com/xzz777/SCTNet/blob/d4bd6d8073a59831d69f303fc5b39c70023e2719/mmseg/models/decode_heads/vit_guidance_head.py#L30-L38

My first question is: are the above layers trained with the same learning rate as the head (i.e., lr*10), or just the base lr? If it is only the base lr, why is that the case?
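For reference, this is roughly how I understand the lr multiplier is usually applied in mmseg-style configs, via `paramwise_cfg` / `custom_keys` (just a sketch of my assumption, not copied from your config; the key names and values are illustrative):

```python
# Sketch of an mmseg-style optimizer config with a per-module lr multiplier.
# My assumption of the usual convention, not taken from SCTNet: parameters
# whose name matches 'head' get lr * 10, everything else trains with lr.
optimizer = dict(
    type='SGD',
    lr=0.01,
    momentum=0.9,
    weight_decay=0.0005,
    paramwise_cfg=dict(
        custom_keys={
            'head': dict(lr_mult=10.0),  # decode head: lr * 10
            # unclear to me: do the guidance layers linked above fall under
            # such a key (and so get lr * 10), or just the base lr?
        }))
```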

Second question: do you perform backpropagation individually for each loss, or do you sum the losses and then backpropagate the total?
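To illustrate what I mean: my understanding is that mm-style runners collect all losses into a dict and sum every entry whose key contains `loss` into a single scalar before one backward pass, roughly like the sketch below (my assumption of the convention, with made-up loss names, not code from this repo):

```python
# Minimal sketch of how I assume the losses are combined before backward,
# following the usual mmcv/mmseg convention (illustrative names only).
import torch

def parse_losses(losses: dict) -> torch.Tensor:
    # sum every entry whose key contains 'loss' into one scalar
    return sum(v.mean() for k, v in losses.items() if 'loss' in k)

# hypothetical loss dict with a CE loss from the head and the align loss
losses = {
    'decode.loss_ce': torch.tensor(0.7, requires_grad=True),
    'aux.loss_align': torch.tensor(0.3, requires_grad=True),
}
total_loss = parse_losses(losses)
total_loss.backward()  # single backward on the summed loss
```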

AntonioConsiglio commented 3 months ago

@xzz777 ?