yoxu515 / aot-benchmark

An efficient modular implementation of Associating Objects with Transformers for Video Object Segmentation in PyTorch

AUX loss WEIGHT and RATIO #72

Open bhack opened 9 months ago

bhack commented 9 months ago

Can you clarify a bit the role of these values in the default config:

self.TRAIN_AUX_LOSS_WEIGHT = 1.0
self.TRAIN_AUX_LOSS_RATIO = 1.0
z-x-yang commented 9 months ago

aux_loss is the segmentation loss on the reference frame. Weight is the loss weight. Ratio controls when the aux_loss is stopped during training. Ratio=1 denotes it will not be stopped.
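
Schematically (illustrative names and dummy values only, not the exact trainer code):

```python
# TRAIN_AUX_LOSS_WEIGHT scales the reference-frame segmentation term
# relative to the loss on the predicted frames; TRAIN_AUX_LOSS_RATIO is
# the fraction of TRAIN_TOTAL_STEPS during which the aux term is kept.
TRAIN_AUX_LOSS_WEIGHT = 1.0
pred_frames_seg_loss = 0.42   # dummy value
ref_frame_seg_loss = 0.17     # dummy value (segmentation loss on the reference frame)

total_loss = pred_frames_seg_loss + TRAIN_AUX_LOSS_WEIGHT * ref_frame_seg_loss
```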

bhack commented 9 months ago

Ratio=1 denotes it will not be stopped.

But I see aux_weight = self.aux_weight * max(self.aux_step - step, 0.) / self.aux_step

And self.aux_step = cfg.TRAIN_TOTAL_STEPS * cfg.TRAIN_AUX_LOSS_RATIO + 1e-5

so even with Ratio=1 it is going to change the weight over time, right?
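
For reference, this is what those two lines imply for the schedule (the total step count below is made up just for illustration):

```python
# Reproduces the aux_weight schedule from the two lines quoted above.
# TRAIN_TOTAL_STEPS here is an arbitrary example value.
TRAIN_TOTAL_STEPS = 100_000
TRAIN_AUX_LOSS_WEIGHT = 1.0
TRAIN_AUX_LOSS_RATIO = 1.0

aux_step = TRAIN_TOTAL_STEPS * TRAIN_AUX_LOSS_RATIO + 1e-5

def aux_weight_at(step):
    # linear decay from TRAIN_AUX_LOSS_WEIGHT at step 0 down to 0 at aux_step
    return TRAIN_AUX_LOSS_WEIGHT * max(aux_step - step, 0.) / aux_step

for step in (0, 25_000, 50_000, 75_000, 100_000):
    print(step, round(aux_weight_at(step), 3))
# 1.0, 0.75, 0.5, 0.25, 0.0 -- even with RATIO=1.0 the weight decays to zero
```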

What policy is suggested for finetuning?

bhack commented 9 months ago

I am asking because, when finetuning from your full checkpoints, I always see a bump in the loss/mIoU at cfg.TRAIN_TOTAL_STEPS/2.