jo-mueller / Slice2Volume

Script that registers Histology slices to volumetric image data
BSD 3-Clause "New" or "Revised" License

Histology Images - Inverse Transformation #27

Open saarah815 opened 1 year ago

saarah815 commented 1 year ago

Hi Johannes,

Hope you are well! I have encountered some issues with the inverse transformation unfortunately and was hoping you could please assist. I am currently aiming to analyse irradiation-injured mouse brain tissue with a view to mapping cell nuclei count to the different regions in the brain using the DSURQE mouse brain atlas.

I have been using the data located at https://rodare.hzdr.de/record/1849, which has been a valuable tool in our research.

I have been focusing on the P2A_C3H_M5 mouse identifier. I seem to be having some trouble with the transformix-readable transform parameters, located in /Histology/Warped/trafo. I would like to use the /XXXX_Scene_Y_trafo_inverse.txt files to project the DSURQE_40micron_labels_transformed_resliced.tif file onto each of the 21 untransformed high-resolution histology images located in /Histology/Slices/result/...*.tif - specifically the DAPI-stained images. This will allow me to map each brain region onto the high-res tissue slice images and count the nuclei in each region.

Unfortunately, when running this code with the SimpleElastix Python package (https://simpleelastix.readthedocs.io/), the resulting image comes out entirely black rather than showing the inverse-transformed data. I have ensured that the dimensions and spacing of both images are compatible and have also tried converting the resulting image to a standard colour space and bit depth.
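In case it helps, the core of what I am running looks roughly like this (a simplified sketch of my approach; the file names are placeholders for the files described above):

import SimpleITK as sitk

# atlas labels in the CT frame of reference (the moving image)
atlas = sitk.ReadImage("DSURQE_40micron_labels_transformed_resliced.tif")

# inverse transform parameters for one scene
pmap = sitk.ReadParameterFile("XXXX_Scene_Y_trafo_inverse.txt")

# apply the transform with transformix
transformix = sitk.TransformixImageFilter()
transformix.SetTransformParameterMap(pmap)
transformix.SetMovingImage(atlas)
transformix.Execute()

result = transformix.GetResultImage()
sitk.WriteImage(result, "atlas_on_slice.tif")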

It would be greatly appreciated if you could please help us overcome this issue. Please see the attached screenshot of my code for reference.

Thank you very much for your help, and I look forward to hearing from you soon!

Kind regards,

Saarah Hussain

[screenshot: hist_transformations_screenshot]
jo-mueller commented 1 year ago

Hi @saarah815 ,

sorry for the late reply! I'm very happy to see people are using the data :) As for the transformations, I agree that it can be a bit hard to undo/apply them. Am I understanding correctly that you want to project data from the CT frame of reference into some of the histological images?

To ease this job, I wrote a Jupyter notebook (see here), maybe that would help you? It definitely worked for the dose simulations.

One thing to look out for when transforming the atlas regions (I actually haven't done that myself) is to change the interpolation type in the inverse transformation files to nearest-neighbor interpolation (a.k.a. B-spline interpolation of order zero). Otherwise the atlas labels will be interpolated into floating-point numbers during the transformation and will not make any sense afterwards.
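If you are loading the parameter files with SimpleElastix anyway, something along these lines should do it (an untested sketch; the file name is just an example, and you could equally edit the text file by hand):

import SimpleITK as sitk

# load the inverse transform parameters (example file name)
pmap = sitk.ReadParameterFile("XXXX_Scene_Y_trafo_inverse.txt")

# switch to nearest-neighbor resampling so the atlas labels stay integer-valued
pmap["ResampleInterpolator"] = ["FinalNearestNeighborInterpolator"]
pmap["FinalBSplineInterpolationOrder"] = ["0"]

# pass pmap to the TransformixImageFilter directly, or write the modified
# parameters back to disk and use that file instead
sitk.WriteParameterFile(pmap, "XXXX_Scene_Y_trafo_inverse_nn.txt")

Let me know whether this works!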

saarah815 commented 1 year ago

Hi @jo-mueller,

Apologies for the late response - I was away all of last week! Thank you very much for such detailed help - it has been a lifesaver!

Yes, that is correct. I essentially want to project the DSURQE atlas onto the DAPI-stained histological images so that I can count the nuclei in each of the different brain regions within the atlas.

The Jupyter notebook is incredibly helpful - thank you! I ran into a small issue with the slice extraction, though. I made some changes to that section which seemed to work fine until I got to the end result.

This is the part from the original code that didn't work for me:

slice_to_transform = rotated_image[:, slice_query['Assigned slice'], :]
loc_world = slice_query['Assigned slice']*scale[0]

viewer.add_image(slice_to_transform, translate=[0, loc_world , 0], colormap='cyan', blending='additive', scale=scale)
napari.utils.nbscreenshot(viewer, canvas_only=True)

It returned this error message:

ValueError: could not broadcast input array from shape (3,) into shape (2,)

I then adapted this part of the code:

slice_query_assigned_slice = ((slice_query['Assigned slice']).values)[0]

slice_to_transform = rotated_image[:, slice_query_assigned_slice, :]
loc_world = (slice_query_assigned_slice)*scale[0]

viewer.add_image(slice_to_transform, scale=[scale[2], scale[0]], translate=[0, loc_world], colormap='cyan', blending='additive')
napari.utils.nbscreenshot(viewer, canvas_only=True)

The issue with this is that although the code ran with no errors after I made those changes, I'm not entirely sure how this affected the transformation itself.

The code above is the only part of the code that I changed. I also used the 0009_Scene_1 slice, instead of 0005_Scene_1. But aside from those two things, everything else is identical to your code. I then tested the code using LET.tif just to ensure we end up with the same result. But alas:

[screenshot: your result]

[screenshot: my result]

The first image is your result, whereas the second image is my result.

And when I tried the code (i.e. the code including my small change) on the DSURQE atlas (which is my goal) instead of LET.tif:

[screenshot: my result with the DSURQE atlas]

I get the above, which is even worse, haha.

I think the issue may lie in the error message that I mentioned above, although I'm not too sure. If you could shed some light on why this might be occurring, that would be of immense help!

Thank you so much for all of your assistance so far - we greatly appreciate it!

saarah815 commented 1 year ago
[screenshot]

Also - does it matter which .czi file is used in the code?

jo-mueller commented 1 year ago

Hi @saarah815 ,

sorry for the late reply!

Also - does it matter which .czi file is used in the code?

Potentially, yes. The first step of the S2V workflow is what we refer to as "bundle registration" in the paper: all adjacent sections (e.g., the images in your screenshots) are first aligned with one another; I think I typically used the NeuN/GFAP/OSP image as the target. The results of this bundle registration are in the results directory, along with the transformation parameters from native histological space into "bundle-registered space" (I hope this makes sense). The images are then co-registered with the CT/dose frame of reference.

Thus, I think it would make more sense to choose one of the images from the results directory as the target for the inverse transformation! Also, I think the raw images natively come at 20x magnification, but if I remember correctly I used a lower resolution for the S2V workflow to be able to handle the data in Fiji - which could also explain the scaling factor being off in your image data.
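If the resolution is indeed the problem, a quick way to check (and compensate) would be something like this - a sketch only, the file names are just examples, and I'm assuming the downsampling was a plain isotropic factor:

import tifffile
from skimage.transform import rescale

# raw high-resolution DAPI image vs. the bundle-registered image from the results directory
hires = tifffile.imread("raw_DAPI_20x.tif")              # example name
registered = tifffile.imread("result/XXXX_Scene_Y.tif")  # example name

# the ratio of the image shapes gives the downsampling factor used in the S2V workflow
factor = registered.shape[0] / hires.shape[0]
print(hires.shape, registered.shape, factor)

# bring the high-resolution image to the resolution the transforms were computed for
hires_small = rescale(hires, factor, order=1, preserve_range=True, anti_aliasing=True)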

As a last note: the atlas was registered with the histological data solely based on its outer contour. This approximation is probably good enough to get a proper transformation of dose/LET into the histology frame of reference, but it may be insufficient for overlaying the atlas with the histological data. You may also want to look into using ABBA for this!

I'll try and see whether I can reproduce the error above sometime this week.

saarah815 commented 1 year ago

Hi @jo-mueller,

Thanks so much for getting back to me. I see, yes that makes sense - thank you for clarifying.

ABBA sounds like a great shout and something I will definitely look into. Hopefully it'll work!

Thank you very much for your help and I'll keep you posted should anything arise :)

Saarah

saarah815 commented 1 year ago

Hi @jo-mueller,

Thank you very much for directing me towards ABBA - it has been a great help and is much appreciated!

Just wanted to ask a quick question about the dose output of the Monte-Carlo simulation. I want to be able to extract the dose values in Gy, and so I have been using the DoseGridScaling value from the dicom metadata for this. These are the lines of code I have been running:

from pydicom import dcmread

rtd = dcmread(dose_filepath)
dose = rtd.pixel_array.astype(float)  # raw stored pixel values
dose *= rtd.DoseGridScaling           # scale to dose units

The values I've been getting, however, are all very small (please see attached screenshot for reference). It looks like this is a result of the DoseGridScaling value itself, which is approx. 5e-7 (but is different for each .dcm file from the simulation).
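In case it helps to narrow this down, I can also print the dose-related metadata from the same file (using pydicom keyword access; the getattr defaults are only there in case a tag is missing):

from pydicom import dcmread

rtd = dcmread(dose_filepath)

# tags describing how the stored pixel values relate to physical dose
print(rtd.DoseGridScaling)                       # the scaling factor used above
print(getattr(rtd, "DoseUnits", "n/a"))          # should be 'GY' for absolute dose in Gray
print(getattr(rtd, "DoseType", "n/a"))           # e.g. 'PHYSICAL'
print(getattr(rtd, "DoseSummationType", "n/a"))  # e.g. 'PLAN' or 'FRACTION'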

Since the dose values should be around 40-85 Gy, I'm wondering where I went wrong in converting pixel values to the dose values. Any clarity would be greatly appreciated!

Thanks so much,

Saarah

[screenshot: scaled dose values]

This screenshot is an array of the dicom file pixel values that have been multiplied by DoseGridScaling. For reference - I have been looking at mouse P2A_C3H_M5 and the dicom file used in that screenshot is: /Simulation/Dose/TotalDoseP2_C3H_M5_Run_0019.dcm