AICAN-Research / FAST-Pathology

⚡ Open-source software for deep learning-based digital pathology
BSD 2-Clause "Simplified" License

Scaling issue in ndpi format from an older Hamamatsu scanner #105

Closed deniesen closed 1 week ago

deniesen commented 2 weeks ago

I've been trying to implement the NoCodeSeg workflow and keep running into a scaling issue I can't understand. When I apply the "Epithelium segmentation in colonic mucosa" pipeline, or another model I've trained myself, the segmentation appears shifted, very similar to #38. However, it doesn't happen with the "Tissue segmentation" pipeline.

[image: Epithelium segmentation]
[image: Tissue segmentation for comparison]

I've narrowed it down a little by trial and error. First, it's specific to the images I have: they're in NDPI format, but the slide scanner is an older model (Hamamatsu C9600-12, acquiring with NDP.scan 3.3.3). I'd be more than happy to share an image to reproduce the issue.

Also, I "optimized" the downsampling factor when importing the segmentation into QuPath. For an image taken with a 40x objective, a factor of 4.4055 gives an exact-to-the-eye overlap; for 20x, 2.20275 does the same.
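Incidentally, 2.20275 is exactly half of 4.4055, so both objectives imply the same correction (~1.1014) relative to the nominal downsample to 10x. A quick sanity check of that arithmetic (the factors are taken from my observations above; this is not FAST or QuPath code):

```python
# Empirical import factors that make the segmentation line up in QuPath.
working_factor_40x = 4.4055   # 40x objective
working_factor_20x = 2.20275  # 20x objective

# Nominal downsamples to reach 10x from each objective.
nominal_40x = 4.0  # 40x -> 10x
nominal_20x = 2.0  # 20x -> 10x

# Both observations imply the same scale offset (~1.1014), which hints at
# a single scaling error in the scanner's metadata rather than two
# unrelated quirks.
correction_40x = working_factor_40x / nominal_40x
correction_20x = working_factor_20x / nominal_20x
assert abs(correction_40x - correction_20x) < 1e-9
```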

I found another lead with some help from our local image analysis specialist. Setting "Attribute patch-magnification" and "Attribute patch-overlap" to 0 (instead of their defaults, 10 and 0.05 respectively) in the Epithelium segmentation pipeline eliminates the shift. Moreover, using the default parameters produces the warning "Patch size must be a multiple of 16 (TIFF limitation). Adding some overlap (32, 32) to fix." Unfortunately, I need patch-magnification 10 to use the Epithelium segmentation model, since it wasn't predicting accurately at 0.

[image]
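For what it's worth, the "multiple of 16" part of that warning is a TIFF tile-size constraint, and the added overlap looks like a way of padding the effective patch size up to a valid value. A rough illustration of that rounding (the helper below is hypothetical, not FAST's actual API):

```python
def pad_to_multiple(size: int, multiple: int = 16) -> int:
    """Round a patch dimension up to the next multiple of `multiple`
    (the TIFF tile-size limitation the warning refers to)."""
    remainder = size % multiple
    return size if remainder == 0 else size + (multiple - remainder)

assert pad_to_multiple(512) == 512  # already valid, nothing to add
assert pad_to_multiple(500) == 512  # would need 12 px of padding
```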

I'd be okay with using my little workaround with the odd downsampling factor for images obtained on that scanner, but I really would like to find an explanation about where it's coming from.

andreped commented 2 weeks ago

@sahpet Perhaps you could give some suggestions :]

SahPet commented 2 weeks ago

I've run into the same issue myself for some scans. I think it's the scans that have wrong alignment values for some of the pyramid levels. Unfortunately, I haven't found any way around it other than rescanning those slides; I think it's inherent in the pyramidal structure and hard to fix.

Also, remember to predict at the exact same downsample level you trained your network on. The IBDColEpi model was trained on downsample 4 (10x magnification, i.e. downsample 4 from 40x). Hence, if you predict on new images, those must be exported at downsample 4 as well.
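To make that concrete, the relationship between the scan's base objective, the target magnification, and the downsample factor is just a ratio (a sketch, not FAST or QuPath code):

```python
def downsample_for(base_objective: float, target_magnification: float) -> float:
    """Downsample factor needed to go from the scan's base objective
    to the magnification the model expects."""
    return base_objective / target_magnification

# IBDColEpi was trained at 10x, i.e. downsample 4 on a 40x scan:
assert downsample_for(40, 10) == 4
# A 20x scan only needs downsample 2 to reach the same 10x:
assert downsample_for(20, 10) == 2
```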

deniesen commented 1 week ago

Thanks for the fast reply! Unfortunately some of the images I have are from already published projects and the slides are probably not around anymore. Would you say importing the prediction into QuPath with the odd downsampling factor is an acceptable workaround? I'm planning to use the segmentations to measure area.

[image: imported to QuPath with downsampling factor 4.4055]

SahPet commented 1 week ago

Yes, that's an ok hack I think:) Just check your imports, because in my experience this isn't necessarily the case for all WSIs in the same dataset.
Also, another problem: when I ran into this, it was one of the pyramid levels (e.g. level 2, "10x") that was not aligned with the other levels (e.g. level 0 (40x) and level 3 (5x)). You can see this by zooming in and out in QuPath and checking whether the annotations shift at different resolutions. If that's the problem, then exporting again at a different level (e.g. level 0, "40x") will again result in unaligned annotation masks. So there are quite a few pitfalls with this.
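One way to look for such a misaligned level without eyeballing it in QuPath: each level's dimensions multiplied by its stated downsample should reproduce the level-0 size, within rounding. A minimal sketch, assuming you read the pyramid metadata yourself (e.g. `level_dimensions` and `level_downsamples` from OpenSlide); the numbers below are made up for illustration, with level 2 roughly 10% off, similar to the factor reported above:

```python
def inconsistent_levels(level_dimensions, level_downsamples, tol=0.01):
    """Return indices of pyramid levels whose dimensions disagree with
    their stated downsample relative to level 0 (beyond `tol`)."""
    w0, h0 = level_dimensions[0]
    bad = []
    for i, ((w, h), ds) in enumerate(zip(level_dimensions, level_downsamples)):
        if abs(w * ds - w0) / w0 > tol or abs(h * ds - h0) / h0 > tol:
            bad.append(i)
    return bad

# Made-up example: level 2 claims downsample 4 but its dimensions imply ~4.4.
dims = [(80000, 60000), (40000, 30000), (18160, 13620), (10000, 7500)]
downs = [1.0, 2.0, 4.0, 8.0]
assert inconsistent_levels(dims, downs) == [2]
```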

deniesen commented 1 week ago

It's been consistent for me so far, but that's great to know before I stumble upon a different alignment and get confused. I haven't seen a shift at different levels yet either, but occasionally, even though I have a 10x magnification level in the file, I get a warning in the command window that says "Requested magnification level does not exist in image pyramid. Will now try to sample from a lower level and resize. This may increase runtime." I'll pay extra attention to those images, but I haven't seen any consequences yet.

andreped commented 1 week ago

> It's been consistent for me so far, but that's great to know before I stumble upon a different alignment and get confused. I haven't seen a shift at different levels yet either, but occasionally, even though I have a 10x magnification level in the file, I get a warning in the command window that says "Requested magnification level does not exist in image pyramid. Will now try to sample from a lower level and resize. This may increase runtime." I'll pay extra attention to those images, but I haven't seen any consequences yet.

In theory this should not impact quality, only inference runtime. Then again, the resizing technique used in FAST may differ from the one used to construct the original WSI pyramid, so in practice I would expect a minor quality degradation.

Let us know if you experience any issues you think we might be able to do something about :] Anyway, best of luck!