Which magnification level did you run the model at? See the FPL file. From what you say above, the image is downsampled by a factor of 4. Have you remembered to update this value from 2 -> 4: https://github.com/andreped/NoCodeSeg/blob/main/source/importPyramidalTIFF.groovy#L23
EDIT: Even so, there still seems to be an offset. Could you first try running the import script again after updating the downsample value?
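(For reference, a minimal sketch of the change, assuming the script stores this value in a variable such as downsample near the linked line; the exact name may differ:)
// Downsample factor between the exported TIFF and the full-resolution WSI.
// This must match the level the model was run at (here 4, not the default 2).
double downsample = 4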
Yes, the screenshot was indeed taken with the wrong downsample, sorry. Here it is with the correct downsample of 4:
For the shift, I have two workarounds working in QuPath now (thanks to https://forum.image.sc/t/level-dimensions-in-qupath-image-and-original-slide-image-differ/78842/3):
1. Use --no-crop as an optional argument (but that will cause lots of empty padding to be added).
2. Shift the annotations by the image bounds in the import script:
def imageServer = getCurrentServer();
def shiftX = -imageServer.boundsX;
def shiftY = -imageServer.boundsY;
[...]
def oldObjects = getAnnotationObjects().findAll{it.getPathClass() == getPathClass(currClassName)}
def transform = java.awt.geom.AffineTransform.getTranslateInstance(shiftX, shiftY)
transform.concatenate(java.awt.geom.AffineTransform.getScaleInstance(downsample, downsample))
def newObjects = oldObjects.collect {p -> PathObjectTools.transformObject(p, transform, false)}
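(For completeness, a rough sketch of how the transformed objects could then replace the originals in the hierarchy; this part is not from the snippet above and assumes the standard QuPath scripting helpers:)
// Swap the unshifted annotations for the transformed ones and refresh the view.
removeObjects(oldObjects, true)
addObjects(newObjects)
fireHierarchyUpdate()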
After the second fix, I get a correct segmentation (even without the shift I have in FastPathology from https://github.com/AICAN-Research/FAST-Pathology/issues/38#issuecomment-1671000561):
Yes, now the annotation image looks to be at the correct scale, but there is definitely still an offset.
Glad you managed to find a solution! Predictions look great as well :]
Perhaps a good solution would be for you to make a pull request against this script in the NoCodeSeg repository with the second fix? There will likely be similar issues with other image formats that have this bounding box cropping, so applying the shift by default when importing is likely the best solution.
I don't see it as critical to store the bounding box information, as this can easily be fetched from the corresponding WSI when importing to QuPath. But we could look into that in the future, if deemed necessary.
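As a rough sketch, doing it by default in importPyramidalTIFF.groovy could look something like the following, reusing the boundsX/boundsY access from the workaround above and falling back to no shift for servers without a bounding box (downsample refers to the value already defined in the script; names and the catch-based fallback are just illustrative):
def server = getCurrentServer()
double shiftX = 0
double shiftY = 0
try {
    // Images cropped to an OpenSlide bounding box expose a non-zero origin.
    shiftX = -server.boundsX
    shiftY = -server.boundsY
} catch (MissingPropertyException ignored) {
    // Server type without bounding-box cropping; no shift needed.
}
def transform = java.awt.geom.AffineTransform.getTranslateInstance(shiftX, shiftY)
transform.concatenate(java.awt.geom.AffineTransform.getScaleInstance(downsample, downsample))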
Hi,
When exporting a segmentation from FastPathology to QuPath using the Groovy script from https://github.com/andreped/NoCodeSeg/blob/main/source/importPyramidalTIFF.groovy, I get a huge shift:
I believe this is because my images have a bounding box around the tissue, as seen in OpenSlide:
QuPath only opens this bounding box (of size 31246x39324), while the TIFF generated by FastPathology covers the whole image (91287x206613, downsampled by a factor of 4) and doesn't preserve the bounding box info:
I am working on a workaround on the QuPath side (by shifting the annotations by (-bounds.x, -bounds.y)), but it would be nice to also add the bounds in the exported TIFF from FastPathology.