winycg / CIRKD

[CVPR-2022] Official implementation of CIRKD: Cross-Image Relational Knowledge Distillation for Semantic Segmentation, with implementations on Cityscapes, ADE20K, COCO-Stuff, Pascal VOC, and CamVid.

The magnitude of loss #17

Closed Debrove closed 1 year ago

Debrove commented 1 year ago

Hi! Thanks for your great work!

I integrated your code into my project and used PSP-101 as the teacher to distill PSP-18. I found that the magnitudes of memory_pixel_contrast_loss and memory_region_contrast_loss are small, ranging from 0.002 down to 0.0003 when the lambda is set to 0.1. So I'm curious about the expected magnitude of these losses.

I would appreciate it if you could share any training logs or other suggestions to help me. Thank you very much!

Debrove commented 1 year ago

BTW, I just applied memory_pixel_contrast_loss and memory_region_contrast_loss on the baseline. The mIoU only increased by 0.13.

winycg commented 1 year ago

Hi, thanks for your attention! I think you can manually tune the lambda to control the magnitude, since the contribution of memory_pixel_contrast_loss and memory_region_contrast_loss may vary across different architecture pairs. An example of training logs is shown in deeplab_mobile_resnet101_mobilenetv2_log.txt.
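To illustrate the lambda tuning suggested above, here is a minimal sketch (not the repository's actual code; the function and argument names are hypothetical) of how the weighted loss terms combine, and why a small raw contrastive loss times a small lambda ends up with little influence on training:

```python
import math

# Hypothetical helper: weighted sum of the segmentation task loss and the
# two memory-based contrastive distillation losses discussed in this issue.
def total_distillation_loss(task_loss, pixel_contrast_loss, region_contrast_loss,
                            lambda_pixel=0.1, lambda_region=0.1):
    return (task_loss
            + lambda_pixel * pixel_contrast_loss
            + lambda_region * region_contrast_loss)

# With raw contrastive losses around 0.002 and lambda = 0.1, each weighted
# term contributes only ~2e-4, which is negligible next to a typical
# cross-entropy task loss. Raising lambda scales that contribution linearly.
small = total_distillation_loss(0.5, 0.002, 0.002, lambda_pixel=0.1, lambda_region=0.1)
large = total_distillation_loss(0.5, 0.002, 0.002, lambda_pixel=1.0, lambda_region=1.0)
print(small, large)
```

Whether a larger lambda actually helps depends on the architecture pair, which is why tuning it per teacher-student combination is suggested.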