HarryVolek / PyTorch_Speaker_Verification

PyTorch implementation of "Generalized End-to-End Loss for Speaker Verification" by Wan, Li et al.
BSD 3-Clause "New" or "Revised" License

Why doesn't the learning rate change? #14

Closed wuqiangch closed 5 years ago

wuqiangch commented 5 years ago

In your code, the learning rate always stays at 0.01 and does not change during training.

celikmustafa89 commented 4 years ago

In the paper, the learning rate is adaptive: it is decreased by half every 30M steps. In the code, however, it is the same at every iteration. Should I add a learning rate scheduler, or switch to an optimizer that has a learning-rate-decay parameter? Any suggestions or comments? Thank you.
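One way to get the paper's "halve every 30M steps" behavior is PyTorch's built-in `torch.optim.lr_scheduler.StepLR`. The sketch below is illustrative, not this repo's training loop: the `Linear` model stands in for the speaker encoder, and a small `step_size` is used so the decay is visible in a few iterations (in practice `step_size` would be the 30M-step interval from the paper).

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in for the actual speaker encoder
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Halve the LR every `step_size` scheduler steps (paper: every 30M steps;
# 3 is used here only so the demo shows the decay quickly).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.5)

for step in range(9):
    optimizer.zero_grad()
    loss = model(torch.randn(4, 10)).sum()  # dummy loss for the demo
    loss.backward()
    optimizer.step()
    scheduler.step()  # advance the schedule once per optimizer step

# After 9 steps with step_size=3: lr = 0.01 * 0.5**3 = 0.00125
print(optimizer.param_groups[0]["lr"])
```

Calling `scheduler.step()` once per optimizer step (rather than once per epoch) matches the paper's per-step decay; an alternative with the same effect would be `LambdaLR` with `lambda step: 0.5 ** (step // 30_000_000)`.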