AICAN-Research / FAST-Pathology

⚡ Open-source software for deep learning-based digital pathology

Exported Pyramidal TIFF shifted in QuPath #76

Closed · cavenel closed this issue 1 year ago

cavenel commented 1 year ago

Hi,

When exporting a segmentation from FastPathology to QuPath using the groovy script from https://github.com/andreped/NoCodeSeg/blob/main/source/importPyramidalTIFF.groovy, I get a huge shift:

[screenshot: shifted segmentation]

I believe this is because my images have a bounding box around the tissue, as seen in openslide:

```
openslide.bounds-height: '39324'
openslide.bounds-width: '31246'
openslide.bounds-x: '32752'
openslide.bounds-y: '109084'
openslide.level-count: '11'
openslide.level[0].downsample: '1'
openslide.level[0].height: '206613'
openslide.level[0].width: '91287'
```

QuPath only opens this bounding box (of size 31246x39324), while the generated TIFF from FastPathology covers the whole image (91287x206613, downscaled 4 times) and doesn't preserve the bounding box info:

[screenshots: FastPathology | QuPath]

I am working on a workaround on the QuPath side (by shifting the annotations by (-bounds.x, -bounds.y)), but it would be nice to also include the bounds in the exported TIFF from FastPathology.
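To make the mapping explicit, here is a minimal sketch of how a pixel in the exported full-slide TIFF relates to QuPath's cropped view. The bounds values are copied from the openslide properties above; the downsample of 4 and the example coordinate are only for illustration:

```groovy
// Coordinate mapping between the exported (downsampled, full-slide) TIFF and the
// cropped region QuPath displays. Bounds values are the ones quoted above.
int boundsX = 32752      // openslide.bounds-x
int boundsY = 109084     // openslide.bounds-y
double downsample = 4    // level the predictions were exported at (see above)

// A pixel (tx, ty) in the exported TIFF corresponds to (tx * downsample, ty * downsample)
// in the full slide, and therefore to (tx * downsample - boundsX, ty * downsample - boundsY)
// in the image QuPath shows.
double tx = 10000, ty = 30000                 // hypothetical TIFF coordinate
double qx = tx * downsample - boundsX         // 7248.0
double qy = ty * downsample - boundsY         // 10916.0
println "TIFF (${tx}, ${ty}) -> QuPath (${qx}, ${qy})"
```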

andreped commented 1 year ago

Which magnification level did you run the model at? See the FPL file. From what you say above, the image is downsampled 4 times. Have you remembered to update this value from 2 to 4: https://github.com/andreped/NoCodeSeg/blob/main/source/importPyramidalTIFF.groovy#L23


EDIT: But even so, there still seems to be an offset. Could you first try running the import script again after updating the downsample value?
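For reference, a minimal sketch of the value in question; the exact variable name on line 23 of importPyramidalTIFF.groovy may differ, so treat this as an assumption:

```groovy
// The downsample used when importing must match the magnification level the
// model was run at in FastPathology; in this case it needs changing from 2 to 4.
double downsample = 4
```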

cavenel commented 1 year ago

Yes, the screenshot was indeed taken with the wrong downsample, sorry. Here it is with the correct downsample of 4:

[screenshot: import with downsample of 4]

For the shift, I have two workarounds working in QuPath now (thanks to https://forum.image.sc/t/level-dimensions-in-qupath-image-and-original-slide-image-differ/78842/3):

After the second fix, I get a correct segmentation (even without the shift I have in FastPathology from https://github.com/AICAN-Research/FAST-Pathology/issues/38#issuecomment-1671000561):

[screenshot: corrected segmentation]

andreped commented 1 year ago

Yes, the annotation image now looks to be at the correct scale, but there is definitely still an offset.

Glad you managed to find a solution! Predictions look great as well :]

```groovy
def imageServer = getCurrentServer();
def shiftX = -imageServer.boundsX;
def shiftY = -imageServer.boundsY;
[...]
def oldObjects = getAnnotationObjects().findAll{it.getPathClass() == getPathClass(currClassName)}
def transform = java.awt.geom.AffineTransform.getTranslateInstance(shiftX, shiftY)
transform.concatenate(java.awt.geom.AffineTransform.getScaleInstance(downsample, downsample))
def newObjects = oldObjects.collect {p -> PathObjectTools.transformObject(p, transform, false)}
```

Perhaps a good solution would be for you to make a pull request against this script in the NoCodeSeg repository with the second fix? Similar issues will likely occur with other image formats that have this bounding-box cropping, so applying the shift by default when importing is likely the best solution.
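For what it's worth, a minimal sketch of what applying the shift by default could look like in the import script, reusing the fields from the snippet above. It assumes the current image server exposes boundsX/boundsY (as in that snippet) and that downsample and currClassName are already defined earlier in the script; servers without a bounding box fall back to a zero shift. The final remove/add calls are the standard QuPath scripting helpers, not part of the current script:

```groovy
import java.awt.geom.AffineTransform

// Shift annotations by the slide's bounding-box origin, if the server has one.
def server = getCurrentServer()
def shiftX = server.hasProperty('boundsX') ? -server.boundsX : 0
def shiftY = server.hasProperty('boundsY') ? -server.boundsY : 0

// Same transform as in the fix above: scale by the downsample the predictions
// were exported at, then translate by the (negative) bounds.
def transform = AffineTransform.getTranslateInstance(shiftX, shiftY)
transform.concatenate(AffineTransform.getScaleInstance(downsample, downsample))

def oldObjects = getAnnotationObjects().findAll { it.getPathClass() == getPathClass(currClassName) }
def newObjects = oldObjects.collect { p -> PathObjectTools.transformObject(p, transform, false) }
removeObjects(oldObjects, true)
addObjects(newObjects)
```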

I don't see it as critical to store the bounding box information, as this can easily be fetched from the corresponding WSI when importing to QuPath. But we could look into that in the future, if deemed necessary.