My TensorFlow version is 2.1.0. I found that when step() of the learning rate scheduler is called inside the training step function, the lr is not updated (the scheduler works fine when tested in isolation). I suspect this is related to how the distributed strategy runs the step function. The problem is fixed by moving the learning rate update into the main loop instead of the training step function.
https://github.com/LongxingTan/Yolov5/blob/88acfd988decc4cc78335cfb6eb50f1975294c1f/yolo/train.py#L122
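For illustration, here is a minimal sketch of the fix described above. This is not the linked Yolov5 code; the schedule function `cosine_lr`, the toy model, and the loop bounds are all assumptions made up for the example. The key point: a step function wrapped in `tf.function` is traced once, so Python-side scheduler state updated inside it does not take effect on later steps, while an update performed in the outer eager loop does.

```python
import math
import tensorflow as tf

# Hypothetical schedule, just for demonstration (not the repo's scheduler).
def cosine_lr(step, total_steps=1000, base_lr=1e-3):
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * step / total_steps))

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)

@tf.function
def train_step(x, y):
    # NOTE: calling a Python-side scheduler.step() here would only run at
    # tracing time, which is why the lr appeared frozen inside the step.
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for step in range(10):
    # The fix: update the learning rate in the outer (eager) main loop,
    # before dispatching the traced step through the strategy.
    optimizer.learning_rate = cosine_lr(step)
    x = tf.random.normal([8, 4])
    y = tf.random.normal([8, 1])
    strategy.run(train_step, args=(x, y))
```

Assigning to `optimizer.learning_rate` in the eager loop updates the optimizer's hyperparameter each iteration, so every traced step reads the current value rather than the one captured at trace time.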