I notice that the batchnorm and scale layers are all set to lr = 0.0 in models/coco/ResNet-101/rfcn_end2end/train_agnostic_ohem.prototxt. Why are the batch norm and scale parameters not updated? Best!
Because it performs better that way. One possible reason is that the batch size we are using is too small for the batch statistics to give meaningful updates.
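For reference, this is roughly what the frozen layers look like in the prototxt. A minimal sketch, not copied verbatim from the repo; the layer and blob names (`bn_conv1`, `scale_conv1`, `conv1`) are illustrative. In Caffe, `lr_mult: 0` on every `param` block freezes that layer's parameters, and `use_global_stats: true` makes BatchNorm use the stored ImageNet-pretrained running mean/variance instead of per-batch statistics:

```
# BatchNorm has 3 param blobs (mean, variance, moving-average factor);
# all are frozen and the layer runs in inference mode.
layer {
  name: "bn_conv1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  batch_norm_param { use_global_stats: true }
  param { lr_mult: 0 decay_mult: 0 }
  param { lr_mult: 0 decay_mult: 0 }
  param { lr_mult: 0 decay_mult: 0 }
}
# Scale has 2 param blobs (scale, bias); both frozen as well.
layer {
  name: "scale_conv1"
  type: "Scale"
  bottom: "conv1"
  top: "conv1"
  scale_param { bias_term: true }
  param { lr_mult: 0 decay_mult: 0 }
  param { lr_mult: 0 decay_mult: 0 }
}
```

With the effective batch size in detection training being only 1–2 images, per-batch mean/variance estimates would be very noisy, so keeping the pretrained statistics fixed is the safer choice.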