Okay, I spoke with the author of https://github.com/warmspringwinds/tf-image-segmentation, and he mentioned that another key aspect of maximizing performance when training ResNet with pretrained weights for segmentation tasks is to lock the batch normalization layers so they never change during segmentation training. I'm curious if there is an easy way to do that with Keras.
Hmm, a tricky workaround could be setting momentum=1 and trainable=False, I think... Setting trainable=False freezes the learned gamma/beta parameters, and with momentum=1 the moving mean and variance are never updated either. But I don't know of a better way without changing the Keras source code.
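A minimal sketch of that workaround, assuming tf.keras and using ResNet50 purely for illustration (weights=None here to avoid the pretrained-weight download; in practice you would pass weights="imagenet"):

```python
# Freeze all BatchNormalization layers in a Keras model so their
# parameters and running statistics stay fixed during fine-tuning.
import tensorflow as tf

model = tf.keras.applications.ResNet50(weights=None, include_top=False)

for layer in model.layers:
    if isinstance(layer, tf.keras.layers.BatchNormalization):
        # trainable=False freezes gamma/beta; in tf.keras it also makes
        # the BN layer run in inference mode during fit(), so the moving
        # mean/variance are not updated.
        layer.trainable = False
        # Belt-and-suspenders: momentum=1.0 means the moving statistics
        # update formula (moving = moving * momentum + batch * (1 - momentum))
        # leaves them unchanged even if the layer ever runs in training mode.
        layer.momentum = 1.0

bn_layers = [l for l in model.layers
             if isinstance(l, tf.keras.layers.BatchNormalization)]
print(all(not l.trainable for l in bn_layers))
```

Note that in modern tf.keras, setting trainable=False on a BatchNormalization layer is enough on its own, since it also forces inference mode; the momentum trick is only needed on older Keras versions where trainable and the training-mode flag were decoupled.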