LeelaChessZero / lczero-training

Code relating to the network training process.

Use a variable for active_lr to fix bug with multi-gpu #189

Closed · Tilps closed this 2 years ago

Tilps commented 2 years ago

It appears that multi-GPU training caches the result of the optimizer's learning-rate callable across steps, so the first LR value retrieved is reused for every subsequent step. Changing active_lr to a variable causes the current value to be read correctly on each step.
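A minimal sketch of the failure mode and the fix, assuming a TF2-style setup where the optimizer is given a learning-rate callable; the names here are illustrative, not the repo's exact code:

```python
import tensorflow as tf

# Buggy pattern: the lambda captures a plain Python float. When the train
# step is traced as a tf.function (e.g. under tf.distribute.MirroredStrategy),
# the float can be baked into the traced graph as a constant, so later
# changes to the Python value are never seen by the optimizer.
#
#   active_lr = 0.02
#   opt = tf.keras.optimizers.SGD(learning_rate=lambda: active_lr)

# Fixed pattern: store the LR in a tf.Variable. Reading a variable is a
# graph op, so the optimizer picks up the current value on every step,
# even inside an already-traced function.
active_lr = tf.Variable(0.02, trainable=False, dtype=tf.float32)
opt = tf.keras.optimizers.SGD(learning_rate=lambda: active_lr)

def set_lr(new_lr):
    # assign() updates the variable in place; no retracing is required.
    active_lr.assign(new_lr)
```

The key design point is that a tf.Variable lives outside the traced graph's constants: assignments made between steps are visible to every replica on the next read, which is why switching active_lr to a variable fixes the stale-LR behavior.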