Closed aswolinskiy closed 2 years ago
I have the same problem and am getting some strange results. The outputs looked somewhat reasonable after 0-1 normalization, but I assume zero-mean normalization was used during training. Also, was any stain normalization performed? It would be great to have this information in the README.
@aswolinskiy did you manage to find a setup that worked?
EDIT: I also find that preprocessing details are entirely missing for the published model. To get satisfactory performance, it is essential to know how preprocessing is expected to be performed: 1) how were the intensities handled (intensity-normalized, zero-centered, or kept as is)? 2) was stain normalization applied? 3) which magnification level was used? The paper seems to test multiple magnification levels, but was there a single model for all of them, or one model per level (in an ensemble)? If the latter, which magnification level is the published model suitable for?
Thanks!
Just added the zero-mean normalization, but I am still seeing some strange predictions. I'm testing the model on some local x40 WSIs, and it seems to predict only two of the classes, so I assume something is wrong in my preprocessing. Which magnification is suitable for this model? Should it work across all image planes (x40, x20, x10), or is it better at one specific magnification level? Also, can I assume that no stain normalization was performed? The paper mentions color augmentation, so I guess stain normalization was not used?
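For reference, the zero-mean normalization I added is per-patch zero centering, roughly like this (a NumPy sketch; `patch` is an RGB array already scaled to [0, 1], and whether per-patch statistics or fixed training statistics are expected is exactly what I'm unsure about):

```python
import numpy as np

def zero_mean(patch):
    """Zero-center an RGB patch (H, W, 3) per channel using its own statistics."""
    patch = patch.astype(np.float32)
    mean = patch.mean(axis=(0, 1), keepdims=True)
    std = patch.std(axis=(0, 1), keepdims=True) + 1e-8  # avoid divide-by-zero
    return (patch - mean) / std

# dummy [0, 1] RGB patch standing in for a tile cut from a WSI
patch = np.random.rand(224, 224, 3).astype(np.float32)
normed = zero_mean(patch)
```

If the model instead expects fixed dataset statistics (e.g. the ImageNet mean/std), this would of course give different inputs than it saw during training.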
make sure to apply the exact normalization from #2.

i'm not sure what you mean by predictions? this is a pretrained model: you cannot use it for classification directly without fine tuning. in that case, you should see reasonable outputs regardless of the preprocessing (fine tuning should quickly adjust the parameters to match your normalization).

if i remember correctly we pretrained on multiple resolutions, but then again, the actual results depend on your problem. fine tuning should still help.

no stain normalization.
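To illustrate the fine-tuning point: the usual pattern is to freeze the pretrained encoder and train only a small head on top, which lets the head absorb a normalization mismatch. A minimal NumPy sketch under stated assumptions (the `frozen_backbone` here is a stand-in random projection, not the released model, and the data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in for the pretrained encoder: fixed weights, never updated.
# in practice this would be the released model with its weights frozen.
W_frozen = rng.standard_normal((32, 16)).astype(np.float32) * 0.1

def frozen_backbone(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen ReLU features

# toy labelled downstream data (binary task)
X = rng.standard_normal((200, 32)).astype(np.float32)
y = (X[:, 0] > 0).astype(np.float32)

# trainable linear head: logistic regression on the frozen features
feats = frozen_backbone(X)
w = np.zeros(16, dtype=np.float32)
b = 0.0
lr = 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid
    grad = p - y                                # gradient of logistic loss
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5).astype(np.float32)
train_acc = (pred == y).mean()
```

Only `w` and `b` are updated, so whatever scale or offset the inputs carry is compensated by the head, which is why the outputs become reasonable regardless of the exact preprocessing.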
OK, thanks for the information.
I have tried using the model as a pure feature extractor and it seems to be performing well.
just RGB scaled to [0-1]? or the 'usual' ImageNet mean/std normalization?
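For concreteness, these are the two options I mean (the ImageNet statistics below are the standard torchvision values; whether either matches what this model saw during training is the open question):

```python
import numpy as np

# standard per-channel ImageNet statistics (as used by torchvision models)
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def rgb_unit(img_uint8):
    """Option 1: plain RGB scaled to [0, 1]."""
    return img_uint8.astype(np.float32) / 255.0

def rgb_imagenet(img_uint8):
    """Option 2: scale to [0, 1], then subtract/divide the ImageNet stats."""
    return (rgb_unit(img_uint8) - IMAGENET_MEAN) / IMAGENET_STD
```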