Closed — hermancollin closed this issue 2 years ago
Feel free to assign me to review
Actually, `get_model_native_resolution_and_patch` isn't the only problem. I need to completely remove the ONNX model from this repo because it simply doesn't work. I was kinda pissed that the model produced trash segmentations after so much time spent optimizing its training... then I remembered what is documented in this issue: https://github.com/ivadomed/ivadomed/issues/882

I deleted the ONNX model so that ADS picks up the PyTorch one instead, and the segmentation was immediately visually perfect.
Ok so I fixed it. There's no way to correct this on the ADS side because the `Resample` transformation is the only way we have to get the model's native resolution. Also, this field is mentioned in the microscopy tutorial @mariehbourget wrote, so any user who is comfortable enough to train a model should already have it in their config.

The solution was to add a resampling operation to the wakehealth model so that it resamples the image to the 0.226 um/px resolution (the training image pixel size). Now it is usable.

Oh, and I also entirely removed the highly frustrating ONNX model from the repo because it's useless and repeatedly caused me problems.
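For reference, the resampling step lives in the model's ivadomed config under the `transformation` section. A rough sketch of what was added (the `0.000226` values are 0.226 um/px expressed in mm; the exact keys should be double-checked against the ivadomed configuration docs):

```json
{
  "transformation": {
    "Resample": {
      "wspace": 0.000226,
      "hspace": 0.000226
    }
  }
}
```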
**Description**
While doing some refactoring for the latest features, we missed something for model native resolution.
**To Reproduce**
For a model without a `Resample` transformation: the `model_seg_human_axon-myelin_bf` model I trained (https://github.com/axondeepseg/model_seg_human_axon-myelin_bf) doesn't use resampling (not sure why, but I think I got better results that way), and since the introduction of `get_model_native_resolution_and_patch`, I hadn't tested it.

I will issue a quick fix for this as I need it to test something ASAP, either by fixing `get_model_native_resolution_and_patch` or by adding resampling to the model so that it resamples to the exact pixel size of the training data.
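The first option would amount to making the resolution lookup tolerate configs that have no `Resample` transformation instead of crashing. A minimal sketch of that idea (the function name and config layout are illustrative, not the actual ADS API):

```python
import json

def get_native_resolution(config_path):
    """Return (hspace, wspace) from the model config's Resample
    transformation, or None if the model was trained without resampling.

    Hypothetical helper: mirrors what a fixed
    get_model_native_resolution_and_patch could do instead of assuming
    the "Resample" key is always present.
    """
    with open(config_path) as f:
        config = json.load(f)
    resample = config.get("transformation", {}).get("Resample")
    if resample is None:
        # No resampling during training: fall back to the image's own
        # resolution rather than raising a KeyError.
        return None
    return resample["hspace"], resample["wspace"]
```

Returning `None` lets the caller decide to skip resampling entirely, which matches how a model trained on unresampled data should be applied.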