megvii-research / mdistiller

The official implementation of [CVPR2022] Decoupled Knowledge Distillation https://arxiv.org/abs/2203.08679 and [ICCV2023] DOT: A Distillation-Oriented Trainer https://openaccess.thecvf.com/content/ICCV2023/papers/Zhao_DOT_A_Distillation-Oriented_Trainer_ICCV_2023_paper.pdf

Is DOT adaptable to other optimizers? #52

Open Vickeyhw opened 1 year ago

Vickeyhw commented 1 year ago

The implementation of DOT seems to be based on SGD with momentum. Since vision transformers usually use AdamW as the optimizer, how about adapting DOT to other optimizers such as AdamW or LAMB?

Zzzzz1 commented 1 year ago

Technically, DOT can be adapted to any optimizer with momentum. You could implement a DOT-AdamW based on the released code.
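For reference, here is a minimal sketch of what such a DOT-AdamW could look like. It is not the official implementation: the class name `DOTAdamW`, the `step_kd()`/`step()` split, and the `delta` hyper-parameter value are illustrative assumptions. The idea follows DOT's core mechanism of keeping separate first-moment buffers for the KD and CE gradients, using a larger momentum coefficient (beta1 + delta) for the KD stream and a smaller one (beta1 - delta) for the CE stream, on top of an otherwise AdamW-style update.

```python
# Sketch only: a DOT-flavoured AdamW that keeps separate momentum buffers
# for the KD and CE gradients. Names and defaults are illustrative.
import torch
from torch.optim import Optimizer


class DOTAdamW(Optimizer):
    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
                 weight_decay=1e-2, delta=0.05):
        defaults = dict(lr=lr, betas=betas, eps=eps,
                        weight_decay=weight_decay, delta=delta)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step_kd(self):
        # Call after loss_kd.backward(): fold the KD gradients into a
        # dedicated momentum buffer with the larger coefficient beta1 + delta.
        for group in self.param_groups:
            beta1, _ = group["betas"]
            b_kd = beta1 + group["delta"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if "exp_avg_kd" not in state:
                    state["exp_avg_kd"] = torch.zeros_like(p)
                state["exp_avg_kd"].mul_(b_kd).add_(p.grad, alpha=1 - b_kd)

    @torch.no_grad()
    def step(self):
        # Call after loss_ce.backward(): fold the CE gradients into their own
        # momentum buffer with beta1 - delta, then apply an AdamW-style update
        # driven by the sum of the two momenta.
        for group in self.param_groups:
            beta1, beta2 = group["betas"]
            b_ce = beta1 - group["delta"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if "exp_avg_ce" not in state:
                    state["exp_avg_ce"] = torch.zeros_like(p)
                    state["exp_avg_sq"] = torch.zeros_like(p)
                    state["step"] = 0
                state["step"] += 1
                state["exp_avg_ce"].mul_(b_ce).add_(p.grad, alpha=1 - b_ce)
                # Second moment tracked on the CE gradient only; this is one
                # possible design choice among several.
                state["exp_avg_sq"].mul_(beta2).addcmul_(
                    p.grad, p.grad, value=1 - beta2)
                exp_avg = state["exp_avg_ce"] + state.get(
                    "exp_avg_kd", torch.zeros_like(p))
                # Bias correction with the base beta1 is an approximation.
                bias_c1 = 1 - beta1 ** state["step"]
                bias_c2 = 1 - beta2 ** state["step"]
                denom = (state["exp_avg_sq"] / bias_c2).sqrt().add_(group["eps"])
                # Decoupled weight decay, as in AdamW.
                p.mul_(1 - group["lr"] * group["weight_decay"])
                p.addcdiv_(exp_avg / bias_c1, denom, value=-group["lr"])
```

A training step would then look roughly like: `loss_kd.backward(retain_graph=True)`, `opt.step_kd()`, `opt.zero_grad()`, `loss_ce.backward()`, `opt.step()`, `opt.zero_grad()`, mirroring the two backward passes DOT uses to keep the KD and CE gradients separate.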

Vickeyhw commented 1 year ago

@Zzzzz1 Thank you! Another question: why is DDP not adopted? I noticed that you use DP, but DDP is more efficient. When I use DDP, the two backward passes in each iteration seem to hinder the loss decrease, especially when using multiple GPUs. Do you have any good ideas?

Zzzzz1 commented 12 months ago


DOT needs to maintain momentum_kd and momentum_ce at the same time, which requires two backward passes per iteration. That may conflict with how DDP synchronizes gradients and updates parameters.
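One possible workaround, sketched below under stated assumptions: run the forward and both backward passes inside DDP's `no_sync()` context so the automatic reducer never fires, then all-reduce each gradient set by hand before the corresponding DOT step. The `step_kd()`/`step()` calls refer to the two-stage DOT update discussed above and are assumed, not part of torch; the criterion arguments are placeholders.

```python
# Sketch only: one DOT iteration under DDP with manual gradient averaging,
# so the KD and CE gradients stay separate across workers.
import torch.distributed as dist


def all_reduce_grads(model):
    # Average gradients across workers by hand.
    world_size = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad.div_(world_size)


def dot_ddp_iteration(ddp_model, optimizer, criterion_kd, criterion_ce,
                      images, targets, teacher_logits):
    # no_sync() disables DDP's automatic all-reduce for this forward/backward,
    # which also avoids the "marked ready twice" issue from two backwards.
    with ddp_model.no_sync():
        logits = ddp_model(images)
        loss_kd = criterion_kd(logits, teacher_logits)
        loss_ce = criterion_ce(logits, targets)

        loss_kd.backward(retain_graph=True)
        all_reduce_grads(ddp_model)
        optimizer.step_kd()        # fold averaged KD grads into momentum_kd
        optimizer.zero_grad()

        loss_ce.backward()
        all_reduce_grads(ddp_model)
        optimizer.step()           # fold averaged CE grads into momentum_ce, update
        optimizer.zero_grad()
```

This trades DDP's bucketed, overlapped communication for two explicit all-reduces per iteration, so it is slower than vanilla DDP but keeps the two momentum streams intact.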