Vickeyhw opened 1 year ago

The implementation of DOT seems to be based on SGD with momentum. Since vision transformers usually use AdamW as the optimizer, how about adapting DOT to other optimizers such as AdamW or Lamb?
Technically, DOT can be adapted to any optimizer with momentum. You can implement DOT-AdamW based on the released code.
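For anyone attempting this, below is a minimal sketch of what a DOT-style AdamW could look like. It is not from the released code: the class name `DOTAdamW`, the `step_kd_ce()` method, the `grad_kd`/`grad_ce` parameter attributes, and the `delta` default are all assumptions. The idea mirrors DOT's treatment of SGD momentum: the first moment is split into a KD buffer with momentum `beta1 + delta` and a CE buffer with momentum `beta1 - delta`, so the distillation gradient carries more inertia, while the second moment is shared and computed on the combined gradient.

```python
import torch
from torch.optim import Optimizer


class DOTAdamW(Optimizer):
    """Sketch of a DOT-style AdamW: one first-moment buffer per loss.

    The KD gradient uses momentum beta1 + delta and the CE gradient
    beta1 - delta, biasing the update toward the distillation loss as
    DOT does for SGD momentum. The second moment is shared.
    """

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
                 weight_decay=1e-2, delta=0.075):  # delta default is a placeholder
        defaults = dict(lr=lr, betas=betas, eps=eps,
                        weight_decay=weight_decay, delta=delta)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step_kd_ce(self):
        """Update parameters from gradients stashed in p.grad_kd / p.grad_ce."""
        for group in self.param_groups:
            beta1, beta2 = group["betas"]
            delta, lr, eps = group["delta"], group["lr"], group["eps"]
            for p in group["params"]:
                g_kd = getattr(p, "grad_kd", None)
                g_ce = getattr(p, "grad_ce", None)
                if g_kd is None or g_ce is None:
                    continue
                state = self.state[p]
                if len(state) == 0:
                    state["step"] = 0
                    state["m_kd"] = torch.zeros_like(p)  # momentum_kd analogue
                    state["m_ce"] = torch.zeros_like(p)  # momentum_ce analogue
                    state["v"] = torch.zeros_like(p)     # shared second moment
                state["step"] += 1
                t = state["step"]

                # Decoupled weight decay, as in AdamW.
                p.mul_(1 - lr * group["weight_decay"])

                # Separate first moments: more inertia for KD, less for CE.
                b_kd, b_ce = beta1 + delta, beta1 - delta
                state["m_kd"].mul_(b_kd).add_(g_kd, alpha=1 - b_kd)
                state["m_ce"].mul_(b_ce).add_(g_ce, alpha=1 - b_ce)
                m = state["m_kd"] / (1 - b_kd ** t) + state["m_ce"] / (1 - b_ce ** t)

                # Shared second moment over the combined gradient.
                g = g_kd + g_ce
                state["v"].mul_(beta2).addcmul_(g, g, value=1 - beta2)
                v_hat = state["v"] / (1 - beta2 ** t)

                p.addcdiv_(m, v_hat.sqrt().add_(eps), value=-lr)
```

A training step would run `loss_kd.backward(retain_graph=True)`, copy each `p.grad` into `p.grad_kd` and clear it, run `loss_ce.backward()`, copy into `p.grad_ce`, then call `step_kd_ce()`. Whether biasing Adam's first moment this way preserves DOT's behavior is something to verify empirically.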
@Zzzzz1 Thank you! Another question: why is DDP not adopted? I noticed that you use DP, but DDP is more efficient. When I use DDP, the two backward passes in each iteration seem to hinder the loss decrease, especially when using multiple GPUs. Do you have any good ideas?
DOT needs to maintain momentum_kd and momentum_ce at the same time, so it may conflict with how DDP synchronizes gradients and updates parameters.
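One workaround to try under DDP, sketched below with assumptions stated: run the KD backward inside DDP's `no_sync()` context so it skips the automatic all-reduce, average those gradients manually, stash them, then let the CE backward go through DDP's normal synchronization. The helper name `ddp_dot_step` and the `grad_kd`/`grad_ce`/`step_kd_ce()` conventions are hypothetical (matching the DOT-AdamW sketch above); `model` is assumed to be wrapped in `DistributedDataParallel`, and the interaction with DDP's gradient bucketing should be verified on a small run.

```python
import torch.distributed as dist


def ddp_dot_step(model, loss_kd, loss_ce, optimizer):
    """One DOT iteration under DDP: keep the KD and CE gradient streams
    separate, paying only one automatic all-reduce per iteration."""
    with model.no_sync():                    # skip DDP all-reduce for the KD pass
        loss_kd.backward(retain_graph=True)  # graph is reused by the CE pass
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)  # sync KD grads by hand
            p.grad_kd = p.grad.detach().clone().div_(dist.get_world_size())
            p.grad = None
    loss_ce.backward()                       # DDP all-reduces CE grads as usual
    for p in model.parameters():
        if p.grad is not None:
            p.grad_ce = p.grad.detach().clone()
            p.grad = None
    optimizer.step_kd_ce()                   # hypothetical DOT-style update
```

If the loss still stalls on multiple GPUs, comparing `p.grad_kd`/`p.grad_ce` between a single-GPU and a multi-GPU run is a quick way to tell whether the problem is in the synchronization or in the optimizer state.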