princeton-nlp / CoFiPruning

[ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408
MIT License

Why use 3 optimizers during training? #17

Closed: Ther-nullptr closed this issue 2 years ago

Ther-nullptr commented 2 years ago

Hi! I want to ask why we use 3 optimizers during training. I think `self.optimizer.zero_grad()` alone would be enough.

```python
self.optimizer.zero_grad()
if self.l0_optimizer is not None:
    self.l0_optimizer.zero_grad()
if self.lagrangian_optimizer is not None:
    self.lagrangian_optimizer.zero_grad()
```
xiamengzhou commented 2 years ago

These three optimizers update different groups of parameters. `self.optimizer` updates the main model parameters; `self.l0_optimizer` updates the parameters of the L0 module (the learned pruning masks); and `self.lagrangian_optimizer` updates the Lagrangian multipliers ($\lambda_1$ and $\lambda_2$) that enforce the target sparsity constraint. They are kept separate because each group uses a different learning rate.
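
For intuition, here is a minimal, self-contained sketch of this three-optimizer pattern. Everything in it (the toy model, `log_alpha` as a stand-in for the L0 mask logits, the learning rates, and the sign flip that turns the multiplier update into gradient ascent) is illustrative, not the repository's exact code.

```python
import torch
import torch.nn as nn

# Toy stand-ins; the real repository wires these up inside its Trainer.
model = nn.Linear(16, 2)                     # "main" model parameters
log_alpha = nn.Parameter(torch.zeros(16))    # stand-in for the L0 module's mask logits
lambda_1 = nn.Parameter(torch.tensor(0.0))   # Lagrangian multipliers for the
lambda_2 = nn.Parameter(torch.tensor(0.0))   # sparsity constraint

# One optimizer per parameter group, each with its own learning rate
# (values here are illustrative, not the paper's settings).
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
l0_optimizer = torch.optim.AdamW([log_alpha], lr=1e-1)
lagrangian_optimizer = torch.optim.AdamW([lambda_1, lambda_2], lr=1e-1)

x = torch.randn(4, 16)
mask = torch.sigmoid(log_alpha)              # soft mask over input features
task_loss = model(x * mask).pow(2).mean()

# Lagrangian penalty pushing the masked fraction toward a target value.
sparsity, target = mask.mean(), 0.5
lagrangian_loss = lambda_1 * (sparsity - target) + lambda_2 * (sparsity - target) ** 2
loss = task_loss + lagrangian_loss

optimizer.zero_grad()
l0_optimizer.zero_grad()
lagrangian_optimizer.zero_grad()
loss.backward()
# The multipliers are maximized, not minimized: flip their gradients so a
# standard optimizer step performs gradient ascent on them.
for lam in (lambda_1, lambda_2):
    lam.grad.neg_()
optimizer.step()
l0_optimizer.step()
lagrangian_optimizer.step()
```

The key point is simply that keeping three optimizers lets each parameter group use its own learning rate, while the training loop still calls `zero_grad()` and `step()` on all of them together, exactly as in the snippet above.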

Ther-nullptr commented 2 years ago

Got it!