megvii-research / mdistiller

The official implementation of [CVPR2022] Decoupled Knowledge Distillation https://arxiv.org/abs/2203.08679 and [ICCV2023] DOT: A Distillation-Oriented Trainer https://openaccess.thecvf.com/content/ICCV2023/papers/Zhao_DOT_A_Distillation-Oriented_Trainer_ICCV_2023_paper.pdf

Is DOT adaptable to other optimizers? #52

Open Vickeyhw opened 10 months ago

Vickeyhw commented 10 months ago

The implementation of DOT seems to be based on SGD with momentum. Since vision transformers usually use AdamW as the optimizer, how about adapting DOT to other optimizers such as AdamW or Lamb?

Zzzzz1 commented 9 months ago

Technically, DOT can be adapted to any optimizer with momentum. You can implement DOT-AdamW based on the released code.
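For reference, here is a minimal sketch of what a DOT-style AdamW could look like. The class name `DOTAdamW`, the methods `step_kd`/`step_ce`, and the `delta` hyperparameter are hypothetical names for illustration, not the released mdistiller API. The idea mirrors DOT's momentum_kd / momentum_ce split: keep two first-moment buffers per parameter, one fed by the KD gradients with an increased beta1 and one fed by the task (CE) gradients with a decreased beta1, then combine them for the AdamW update. Bias correction is omitted for brevity.

```python
import torch


class DOTAdamW(torch.optim.Optimizer):
    """A minimal DOT-style AdamW sketch (hypothetical, not the released code).

    Keeps two first-moment buffers per parameter: `exp_avg_kd` (beta1 + delta)
    and `exp_avg_ce` (beta1 - delta), following DOT's momentum_kd / momentum_ce idea.
    """

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
                 weight_decay=1e-2, delta=0.075):
        defaults = dict(lr=lr, betas=betas, eps=eps,
                        weight_decay=weight_decay, delta=delta)
        super().__init__(params, defaults)

    @torch.no_grad()
    def _accumulate(self, tag, sign):
        # Fold the gradients currently stored in .grad into the `tag` momentum buffer.
        for group in self.param_groups:
            beta1 = group["betas"][0] + sign * group["delta"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if f"exp_avg_{tag}" not in state:
                    state[f"exp_avg_{tag}"] = torch.zeros_like(p)
                    state[f"grad_{tag}"] = torch.zeros_like(p)
                state[f"exp_avg_{tag}"].mul_(beta1).add_(p.grad, alpha=1 - beta1)
                state[f"grad_{tag}"].copy_(p.grad)  # kept for the second moment

    def step_kd(self):
        # Call right after backward() on the KD loss (then zero the grads).
        self._accumulate("kd", +1)

    def step_ce(self):
        # Call right after backward() on the task (CE) loss.
        self._accumulate("ce", -1)

    @torch.no_grad()
    def step(self):
        # Combine both momentum streams and apply an AdamW-style update.
        for group in self.param_groups:
            beta2, lr, eps, wd = group["betas"][1], group["lr"], group["eps"], group["weight_decay"]
            for p in group["params"]:
                state = self.state[p]
                if "exp_avg_kd" not in state or "exp_avg_ce" not in state:
                    continue
                exp_avg = state["exp_avg_kd"] + state["exp_avg_ce"]
                grad = state["grad_kd"] + state["grad_ce"]
                if "exp_avg_sq" not in state:
                    state["exp_avg_sq"] = torch.zeros_like(p)
                state["exp_avg_sq"].mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
                denom = state["exp_avg_sq"].sqrt().add_(eps)
                p.mul_(1 - lr * wd)                    # decoupled weight decay
                p.addcdiv_(exp_avg, denom, value=-lr)  # AdamW-style step
```

One iteration would then call, in order: `zero_grad()`, `backward()` on the KD loss with `retain_graph=True`, `step_kd()`, `zero_grad()` again, `backward()` on the CE loss, `step_ce()`, and finally `step()`, so that each momentum buffer only ever sees its own gradient stream.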

Vickeyhw commented 9 months ago

@Zzzzz1 Thank you! Another question: why is DDP not adopted? I noticed that you use DP, but DDP is more efficient. When I use DDP, the two backward passes in each iteration seem to hinder the loss decrease, especially when using multiple GPUs. Do you have any good ideas?

Zzzzz1 commented 9 months ago

> @Zzzzz1 Thank you! Another question: why is DDP not adopted? I noticed that you use DP, but DDP is more efficient. When I use DDP, the two backward passes in each iteration seem to hinder the loss decrease, especially when using multiple GPUs. Do you have any good ideas?

DOT needs to maintain momentum_kd and momentum_ce at the same time. Maybe this conflicts with how DDP synchronizes gradients and updates parameters.
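One pattern that might avoid this conflict is a rough sketch only, not verified against the released code: run the KD backward inside DDP's `no_sync()` context and all-reduce the KD gradients manually so that momentum_kd stays consistent across ranks, then let DDP synchronize the CE backward as usual. It assumes an optimizer exposing DOT-style `step_kd()`/`step_ce()`/`step()` methods (hypothetical names, as in the sketch above), with `loss_kd` and `loss_ce` coming from a single forward pass.

```python
import torch.distributed as dist


def dot_ddp_iteration(model, optimizer, loss_kd, loss_ce):
    """One training step of a hypothetical DOT + DDP combination.

    `model` is a DistributedDataParallel module; `optimizer` is assumed to
    expose DOT-style step_kd()/step_ce()/step() methods.
    """
    world_size = dist.get_world_size()

    # 1) KD backward without DDP's automatic all-reduce, then average the
    #    KD gradients manually so momentum_kd sees the same values on all ranks.
    optimizer.zero_grad()
    with model.no_sync():
        loss_kd.backward(retain_graph=True)
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad.div_(world_size)
    optimizer.step_kd()

    # 2) CE backward with DDP's normal gradient synchronization.
    optimizer.zero_grad()
    loss_ce.backward()
    optimizer.step_ce()

    # 3) Apply the combined update.
    optimizer.step()
```

The manual all-reduce on the KD pass keeps both momentum streams averaged over all ranks, which is the property a naive two-backward setup under DDP can lose.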