kaidic / LDAM-DRW

[NeurIPS 2019] Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss
https://arxiv.org/pdf/1906.07413.pdf
MIT License

the learning rate in log_train times 0.1 #16

Open yibuxulong opened 3 years ago

yibuxulong commented 3 years ago

I found that the lr logged in log_train.csv is multiplied by 0.1. The line marked TODO (https://github.com/kaidic/LDAM-DRW/blame/master/cifar_train.py#L291) is written like this:

```python
data_time=data_time, loss=losses, top1=top1, top5=top5,
lr=optimizer.param_groups[-1]['lr'] * 0.1))  # TODO
```

I wonder why the lr is multiplied by 0.1 here?
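To make the discrepancy concrete, here is a minimal sketch (no PyTorch dependency; the `param_groups` structure is mimicked with a plain list of dicts) showing that the value written to the log is ten times smaller than the learning rate the optimizer actually uses:

```python
# Mimic an optimizer's param_groups: a list of dicts, each with an 'lr' key.
# Assume the optimizer was created with lr=0.1 (the default in cifar_train.py).
optimizer_param_groups = [{'lr': 0.1}]

# The learning rate actually applied during the update step:
actual_lr = optimizer_param_groups[-1]['lr']

# The value cifar_train.py writes to log_train.csv (note the extra * 0.1):
logged_lr = optimizer_param_groups[-1]['lr'] * 0.1

print(actual_lr)   # 0.1
print(logged_lr)   # 0.01 -- ten times smaller than the rate actually used
```

So unless the multiplication is intentional (e.g. some display convention), the logged learning rate understates the one used for training by a factor of ten.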