asmagen opened this issue 4 years ago
The readme lists all of the available architectures and encoders. None of them are ready for use on histopathology images right out of the box, because the pre-trained weights are for ImageNet. You'll need to either fine-tune a model starting from those weights, or train from scratch.
Changing the input/output to anything greater than 320x320 is shown in the docs:
model = sm.Unet(BACKBONE, encoder_weights='imagenet', input_shape=(512, 512, 3))
# I personally would choose (None, None, 3) for the input shape
If your computer can't run anything larger than 320x320 because it crashes, that's a memory issue: you either need a GPU that can allocate more memory, or you have to work with what your current one can handle. If it's not a memory issue, then you should post the error here; otherwise no one will know what you mean by it 'crashing'.
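For example, here is a minimal sketch of training on larger tiles while keeping memory in check. BACKBONE, n_classes, x_train and y_train are hypothetical placeholders, not names taken from the tutorial:

import segmentation_models as sm

# Hypothetical setup; adjust the backbone, class count and data to your pipeline.
BACKBONE = 'resnet34'
n_classes = 4

model = sm.Unet(
    BACKBONE,
    classes=n_classes,
    activation='softmax',
    encoder_weights='imagenet',
    input_shape=(None, None, 3),  # accepts any tile size divisible by 32
)
model.compile(
    'Adam',
    loss=sm.losses.DiceLoss() + sm.losses.CategoricalFocalLoss(),
    metrics=[sm.metrics.iou_score],
)

# If 512x512 tiles exhaust GPU memory, reduce the batch size first
# rather than the tile size.
model.fit(x_train, y_train, batch_size=2, epochs=40)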
So the model definition in the example (model = sm.Unet(BACKBONE, classes=n_classes, activation=activation)) is not using any pretrained network and basically means I'm training from scratch? And I just need to add encoder_weights='imagenet' to have it start from the ImageNet-based weights, which will then be further optimized for my problem during training? I consulted with fellow pathology image analysis groups and they confirmed they also use ImageNet, because apparently there isn't a pathology-specific pretrained network.
"So the model definition in the example is not using any pretrained network (model = sm.Unet(BACKBONE, classes=n_classes, activation=activation)) and basically means I'mm training from scratch? And I just need to add encoder_weights='imagenet' to have it train starting with the imagenet-based weights which will further be optimized to my problem during training?"
Yes, and if you use one of the efficientnet encoders, look at using the noisy-student weights, which come from the paper "Self-training with Noisy Student improves ImageNet classification".
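For illustration, a minimal sketch of the weight options discussed above. The backbone choice is arbitrary, and whether the 'noisy-student' keyword is accepted depends on the installed segmentation_models version:

import segmentation_models as sm

# encoder_weights=None        -> encoder trained from scratch
# encoder_weights='imagenet'  -> start from ImageNet weights and fine-tune
# 'noisy-student' is suggested above for the efficientnet encoders;
# check whether your installed version accepts that keyword.
model = sm.Unet(
    'efficientnetb3',            # arbitrary efficientnet backbone for illustration
    classes=n_classes,           # assumed defined as in the tutorial
    activation=activation,       # assumed defined as in the tutorial
    encoder_weights='imagenet',
)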
How do I extend the image input/output to anything greater than the 320x320 currently used in the multi-class segmentation tutorial in this package? When I simply changed the input size to 512 the run crashed, so I figured the network needs to be configured for a different size first. How do I do that, and how is it related to the pretrained model? If I'm using the pretrained model from the multi-class segmentation tutorial, do I have to use 320x320 images? What other pretrained models are available here, especially for pathology applications?
Thanks