MarkZaidi / QuASI

List of QuPath scripts for alignment and stain deconvolution of whole-slide histology images

Error in Apply-Transforms when using .ome.tiffs #5

Open · teacakedeadlift opened 1 year ago

teacakedeadlift commented 1 year ago

Hi

Great tool, thanks!

I've managed to get Calculate-Transforms to work on some .ome.tiff files, but when I try Apply-Transforms I get the following error: `ERROR: Cannot invoke method readImageData() on null object in Apply-Transforms.groovy at line number 132`

It seems this is due to the files being .ome.tiffs: `def (targetImageName, imageExt) = targetFileName.split('\\.')` cuts the .tiff off the end and makes `imageExt` "ome" rather than "ome.tiff", which in turn makes `list_of_reference_image_names` incorrect.

`print(list_of_reference_image_names)` outputs `INFO: [3206A5_ER.ome]`.

In Calculate-Transforms this isn't an issue, as you explicitly state the extension in `wsiExt =`.

The easiest work-around is renaming the slides in the project to `name_stain.tiff` and setting `wsiExt = '.tiff'`. I also got it working by adding `imageExt = ".ome.tiff"` in 'Variables to set', removing it from the destructuring so it reads `def (targetImageName) = targetFileName.split('\\.')`, and altering `refFileName = slideID + "_" + refStain + imageExt`. I couldn't work out a better way of using split() to account for multiple "."s.
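For reference, a minimal sketch of the behaviour described above, using the file name from this thread (illustrative Groovy, not the actual script code):

```groovy
def targetFileName = '3206A5_ER.ome.tiff'

// split() breaks on EVERY dot, so a double extension becomes two tokens
// and the destructuring silently drops the last one:
def (targetImageName, imageExt) = targetFileName.split('\\.')
assert targetImageName == '3206A5_ER' && imageExt == 'ome'   // 'tiff' is lost

// Passing a limit of 2 makes split() break on the first dot only,
// recovering the full extension (assumes the base name contains no dots):
def (name, ext) = targetFileName.split('\\.', 2)
assert name == '3206A5_ER' && ext == 'ome.tiff'
```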

Hope this is of use to anyone who has run into the same issue and, like me, is a novice coder. I'm now getting some fairly good faux multiplexes from my DAB sections.

Cheers Phil

teacakedeadlift commented 1 year ago

Just realised this was addressed in #2. Sorry.

Weirdly, if I alter the stain vectors in QuPath using the recommended Analyze > Preprocessing > Estimate Stain Vectors, it tanks the alignment with a "results don't converge" error for the H&E section, or, if the H&E is removed, just gives a very poor alignment (despite reporting 0.999) for the remaining DABs.

My .ome.tiffs are outputs from VALIS, so they're already well aligned, but the alignment gets worse even when AutoAlignPixelSize = 10. Not sure why.

MarkZaidi commented 1 year ago

> It seems this is due to the files being .ome.tiffs: `def (targetImageName, imageExt) = targetFileName.split('\\.')` cuts the .tiff off the end and makes `imageExt` "ome" rather than "ome.tiff", which in turn makes `list_of_reference_image_names` incorrect.

Yeah, I've been meaning to fix this for quite some time, especially with the rise in popularity of multiplexed and pseudo-multiplexed imaging. Until I can figure out a way to do the equivalent of an rsplit in Groovy, or play around with some regex, the quick fix is to just rename the project entries themselves in QuPath, which is also what I had to do to address issue #4: https://www.youtube.com/watch?v=30-Be-4wDn4
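For what it's worth, a minimal sketch of an rsplit equivalent in Groovy, using `lastIndexOf` (illustrative only, not tested against the script):

```groovy
// A rough equivalent of Python's rsplit('.', 1): split on the LAST dot only
def fileName = '3206A5_ER.ome.tiff'
int dot = fileName.lastIndexOf('.')
def stem = fileName.substring(0, dot)     // '3206A5_ER.ome'
def ext  = fileName.substring(dot + 1)    // 'tiff'
```

Though for double extensions like .ome.tiff, splitting on the first dot instead (`targetFileName.split('\\.', 2)`, as sketched earlier in this thread) may be closer to what the script needs, since it keeps the full "ome.tiff" together.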

> Weirdly, if I alter the stain vectors in QuPath using the recommended Analyze > Preprocessing > Estimate Stain Vectors, it tanks the alignment with a "results don't converge" error for the H&E section, or, if the H&E is removed, just gives a very poor alignment (despite reporting 0.999) for the remaining DABs.

Huh, that's quite odd; I wonder why changing the default color vectors would impact alignment so substantially that it fails. The experimental feature `use_single_channel` in calculate_transforms.groovy is a bit buggy, especially when using 32-bit floating-point deconvolved channels, so I recommend just using the original RGB. You should still be able to generate the deconvolved channels in apply_transforms.groovy, as shown here: https://youtu.be/EvvSsXExYOI

> My .ome.tiffs are outputs from VALIS, so they're already well aligned, but the alignment gets worse even when AutoAlignPixelSize = 10. Not sure why.

There are a bunch of things that can impair alignment. Tissue processing artifacts that span the image (folds, tears) can cause an alignment not to converge if the artifact considerably shifts a portion of the tissue; at that point, you might want to look into deformable alignment methods. AutoAlignPixelSize is a downsample factor applied to both images to give a more "global" alignment, at the cost of precision. So while you'll have a high alignment score, that may not necessarily translate to a visually acceptable alignment. I play around a bit with iterative alignment at different AutoAlignPixelSize values here: https://www.youtube.com/watch?v=30-Be-4wDn4

teacakedeadlift commented 1 year ago

Thanks for the speedy reply.

> Yeah, I've been meaning to fix this for quite some time, especially with the rise in popularity of multiplexed and pseudo-multiplexed imaging. Until I can figure out a way to do the equivalent of an rsplit in Groovy, or play around with some regex, the quick fix is to just rename the project entries themselves in QuPath, which is also what I had to do to address issue https://github.com/MarkZaidi/QuASI/issues/4: https://www.youtube.com/watch?v=30-Be-4wDn4

Would `def (targetImageName, imageExt1, imageExt2) = targetFileName.split('\\.')` followed by `imageExt = imageExt1 + "." + imageExt2` work, given it seems to just drop the .tiff bit? Might give it a go and see what happens.

> Huh, that's quite odd; I wonder why changing the default color vectors would impact alignment so substantially that it fails. The experimental feature `use_single_channel` in calculate_transforms.groovy is a bit buggy, especially when using 32-bit floating-point deconvolved channels, so I recommend just using the original RGB. You should still be able to generate the deconvolved channels in apply_transforms.groovy, as shown here: https://youtu.be/EvvSsXExYOI

I set vectors for each slide separately, so maybe this caused the issue? Perhaps I need to apply one across all images, as sketched below, and only use DAB slides (no H&E). Using single channels failed, so I just used RGB (may be related to the above). However, I now get the following warning:

```
INFO: Color deconvolution stains: Hematoxylin: 0.651 0.701 0.29, DAB: 0.269 0.568 0.778, Residual: 0.633 -0.713 0.302
WARN: Arbitrary transform cannot be decomposed! I will use the default pixel calibration.
```

Not sure what this means in real terms.
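On the point above about applying one stain estimate across all images: that can be scripted. A minimal sketch using QuPath's standard `setImageType`/`setColorDeconvolutionStains` scripting commands, with the vector values copied from the log above (the stain name is arbitrary):

```groovy
// Sketch: apply one stain estimate to the current image via script.
// Run with "Run for project" in QuPath's script editor to cover every image.
// Vector values are copied from the log output above; substitute your own.
setImageType('BRIGHTFIELD_H_DAB')
setColorDeconvolutionStains('{"Name" : "H-DAB estimated", ' +
        '"Stain 1" : "Hematoxylin", "Values 1" : "0.651 0.701 0.29", ' +
        '"Stain 2" : "DAB", "Values 2" : "0.269 0.568 0.778", ' +
        '"Background" : "255 255 255"}')
```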

> There are a bunch of things that can impair alignment. Tissue processing artifacts that span the image (folds, tears) can cause an alignment not to converge if the artifact considerably shifts a portion of the tissue; at that point, you might want to look into deformable alignment methods. AutoAlignPixelSize is a downsample factor applied to both images to give a more "global" alignment, at the cost of precision. So while you'll have a high alignment score, that may not necessarily translate to a visually acceptable alignment. I play around a bit with iterative alignment at different AutoAlignPixelSize values here: https://www.youtube.com/watch?v=30-Be-4wDn4

VALIS uses non-rigid alignment and then warps the slides, which is why I was using it, as some of my slides have missing tissue/folds/stretches (more than I'd like!). Plus, I can run it on a cluster, as my 2017 Mac only has 8 GB RAM. But this should mean QuASI has the warped slides to align; might it be getting confused by some of the missing bits of tissue?

What will the real-terms effect of downsampling on the output file be? I tried 1 just to see, and the estimated output file was 1.5 TB, so I altered it to 10 and it's now 15 GB, which is more manageable, but I don't want to lose too much detail. One option is to only sample a small area of the slide, but is there a way I could do this via an annotation, e.g. draw a rectangle and write out just that area to file, with no downsampling? Or would I have to somehow define a tile?
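For scale: a downsample factor of 10 shrinks the pixel count (and roughly the file size) by 10² = 100×, which matches the 1.5 TB to 15 GB drop above. As for the annotation idea, here's a rough sketch using QuPath's standard scripting API (the output path is a placeholder, and exact calls may differ between QuPath versions):

```groovy
// Sketch: export the selected annotation (e.g. a drawn rectangle) at full resolution
import qupath.lib.regions.RegionRequest

def server = getCurrentServer()
def roi = getSelectedObject().getROI()

// downsample = 1 keeps full resolution within the cropped region
def request = RegionRequest.createInstance(server.getPath(), 1, roi)

// Output path is a placeholder; the format is inferred from the extension
writeImageRegion(server, request, '/path/to/region.ome.tif')
```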

Thanks