MathOnco / valis

Virtual Alignment of pathoLogy Image Series
https://valis.readthedocs.io/en/latest/
MIT License

Registering Slide to Standard Space #59

Open abadgerw opened 1 year ago

abadgerw commented 1 year ago

Hope all is well! I was hoping to try and use VALIS to register my sections from different patients to a standard space so that I could perform downstream topographical analyses. An example of the standard space is represented by the attached hand-drawn paper map.

[attached image: hand-drawn paper map template]

On each of the histological sections, I have the areas outlined in black on the paper template already outlined. Can I use this information and VALIS to warp the histological sections onto this standard space?

cdgatenbee commented 1 year ago

Hi @abadgerw, I think it might be possible to do something like this, but I'd like to try out a few ideas before offering a solution. I should be able to test some approaches using that template and the other images you've shared previously. I'll let you know how it goes.

Best, -Chandler

abadgerw commented 1 year ago

Thanks, @cdgatenbee! I'm attaching a zip file with 3 .nii files (one for each of the paper templates from the cervical, thoracic, and lumbar regions). Do not hesitate to let me know if you need a couple of histological slides from each level to test with. Our ultimate goal is to place stains into a standard space so we can generate cross-patient heatmaps of DAB signal for various stains.

Paper Maps.zip

abadgerw commented 1 year ago

@cdgatenbee I just wanted to see if you needed any additional histological slide examples for this? I can create a Google Drive folder to share them, if necessary. Thanks again for your help!

cdgatenbee commented 1 year ago

Hi @abadgerw, Sorry, I've been playing "catch up" with some other projects, so unfortunately haven't made progress on this. I'm planning to get back to valis-related issues though, so will hopefully be able to put something together. I'm thinking the first step will be to create a custom ImageProcesser that draws the relevant registered annotation on a blank image, and then use that as the processed image to be aligned to the paper template. Those transformations can then be applied to the original images so that they all align with the paper template. I guess one issue will be determining a common image shape that all of the images will be aligned to, i.e. how large would the full-scale "paper map" be? Once that's sorted I think this should work. I'll keep you updated.
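To make that concrete, here's a very rough, untested sketch of what such a custom processor might look like. It assumes the annotations are GeoJSON polygons in full-resolution slide coordinates, that each .geojson sits next to its .svs with the same base name, and that the slide_io / preprocessing class and argument names are as I remember them from the docs; the reference image name below is just a placeholder.

```python
import json
import os

import cv2
import numpy as np

from valis import preprocessing, registration, slide_io


class AnnotationProcesser(preprocessing.ImageProcesser):
    """Draw the white/grey matter annotations on a blank image and return
    that drawing as the image valis will try to register."""

    def __init__(self, image, src_f, level, series, *args, **kwargs):
        super().__init__(image=image, src_f=src_f, level=level,
                         series=series, *args, **kwargs)

    def process_image(self, *args, **kwargs):
        # Assumes MS018_C.svs has its annotations in MS018_C.geojson, etc.
        geojson_f = os.path.splitext(self.src_f)[0] + ".geojson"
        if not os.path.exists(geojson_f):
            # e.g. the paper-map template itself, which has no .geojson:
            # invert so the drawn lines are bright on a black background
            # (assumes an 8-bit RGB or grayscale image).
            img = self.image
            if img.ndim == 3:
                img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
            return 255 - img

        # Scale factor from full-resolution (level 0) coordinates down to the
        # size of the image being processed at this level.
        reader_cls = slide_io.get_slide_reader(self.src_f, series=self.series)
        full_wh = reader_cls(self.src_f, series=self.series).metadata.slide_dimensions[0]
        sxy = np.array(self.image.shape[1::-1]) / np.asarray(full_wh)

        # Draw each annotation polygon, filled, on a blank canvas.
        canvas = np.zeros(self.image.shape[:2], dtype=np.uint8)
        with open(geojson_f) as f:
            features = json.load(f)["features"]
        for feature in features:
            ring = np.asarray(feature["geometry"]["coordinates"][0])  # outer ring only
            cv2.fillPoly(canvas, [np.round(ring * sxy).astype(np.int32)], 255)

        return canvas


# Hypothetical usage: include a rasterized version of the paper map in the slide
# directory and align everything to it as the reference.
registrar = registration.Valis("path/to/slides", "path/to/results",
                               reference_img_f="SpinalCord_Cervical_Edge.png",
                               align_to_reference=True)
registrar.register(brightfield_processing_cls=AnnotationProcesser)
```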

Best, -Chandler

abadgerw commented 1 year ago

Thanks, @cdgatenbee! Happy to provide any images, answer any questions, and/or do any troubleshooting. I really appreciate your help!

abadgerw commented 1 year ago

@cdgatenbee I was thinking more about the size question you posed. We had done some work in the past with these paper templates in FSL, and I believe the sizes used were 320x180 for cervical and 300x240 for both thoracic and lumbar.

abadgerw commented 11 months ago

@cdgatenbee, one extension question related to this:

Could spatial transcriptomics data also be read and overlaid onto this standard space?

cdgatenbee commented 11 months ago

Hi @abadgerw, Apologies for not making more progress on aligning to the standardized space, but I should have some time next week to dig into this after I've pushed the next update. To answer your question though, as long as the spatial transcriptomics data has associated xy coordinates, then valis should be able to overlay the data on the standard space. It would be similar to the warping points example, except you'd replace slide_obj.warp_xy with slide_obj.warp_xy_from_to, with the "to" slide being the one representing the standard space. If the data aren't coordinates like I'm assuming, let me know their format and I'll see if I can come up with something.
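In code, the substitution would look something like the sketch below (the slide file names are just placeholders, and it assumes the standard-space image is one of the images included in the registration):

```python
import numpy as np
from valis import registration

registrar = registration.Valis("path/to/slides", "path/to/results")
rigid_registrar, non_rigid_registrar, error_df = registrar.register()

st_slide = registrar.get_slide("MS018_C.svs")                         # slide carrying the spots
template_slide = registrar.get_slide("SpinalCord_Cervical_Edge.png")  # standard-space image

# (x, y) spot centers in the full-resolution ST slide (placeholder values)
xy = np.array([[1200, 3400], [1500, 3600]])
warped_xy = st_slide.warp_xy_from_to(xy, template_slide)              # now in template coordinates
```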

Best, -Chandler

abadgerw commented 11 months ago

No worries! Thanks! The spatial transcriptomics output includes a .csv file in which each row corresponds to a transcriptomics spot. The columns correspond to the following fields:

  1. The row pixel coordinate of the center of the spot in the full resolution image.

  2. The column pixel coordinate of the center of the spot in the full resolution image.

Can a .csv file like this, rather than a .geojson, be used in this case?

cdgatenbee commented 11 months ago

Hi @abadgerw, Finally getting a chance to work on this :) I think I've made some progress on using the transferred annotations as the images used for alignment. I'd thought I could use some of the data you had shared previously to develop this, but have realized that the annotations in those files ("dorsal column", "lcst", "acst", "inactive gml rim", etc.) are different from the paper maps (Cervical, Lumbar, Thoracic). Could you share an image set and corresponding annotations that match the paper maps?

Regarding warping the spatial transcriptomics, you should be able to use pandas to read in the csv file, get the xy columns, and then use Slide.warp_xy_from_to to warp the points. It would be pretty similar to the example here. That example is based on cell segmentation from HALO and loops over several files, but it sounds like in your case you can just use the pixel coordinates at the center of each spot instead of averaging coordinates. Valis does assume that the coordinates are xy (origin in the top left), so you may need to reorder your columns so that they are (col, row) before warping them.
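For the .csv you described, something like the function below should work; the column names are just guesses at what the spatial transcriptomics output calls them, so swap in the real ones.

```python
import pandas as pd


def warp_spots_to_template(csv_f, st_slide, template_slide,
                           x_col="pxl_col_in_fullres", y_col="pxl_row_in_fullres"):
    """Warp spot centers from the ST slide into the standard-space template.

    The csv is assumed to store the row/col pixel coordinates of each spot
    center in the full-resolution image; valis expects (x, y) = (col, row),
    so the columns are reordered here.
    """
    spots = pd.read_csv(csv_f)
    xy = spots[[x_col, y_col]].to_numpy()
    spots[["template_x", "template_y"]] = st_slide.warp_xy_from_to(xy, template_slide)
    return spots


# e.g. warp_spots_to_template("spots.csv", st_slide, template_slide), with the
# slide objects obtained from the registrar as in the earlier sketch.
```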

Finally, a while back you said the sizes of the full-scale paper maps were 320x180 for cervical and 300x240 for both thoracic and lumbar. What units would those be in?

Thanks again for your patience.

Best, -Chandler

abadgerw commented 11 months ago

Thanks, @cdgatenbee! I made a Google Drive folder with useful files: https://drive.google.com/drive/folders/1rNkq828bYAKgL3akWwR2XC6Btm057X2d?usp=sharing

  1. The .svs files are the actual scanned slide images. I have two cases (MS018 and MS115), each of which has 3 levels of the spinal cord (Cervical, Thoracic, and Lumbar), so we can see how two cases would stack up on each of the level-specific paper maps.

  2. The .geojson files now have annotations with just the white matter and grey matter edges masked out. The names of the .geojson files correspond to the .svs files. If the file name has a C, it is the cervical spinal cord and maps to the cervical map; if it has a T, it is the thoracic spinal cord and maps to the thoracic map; and if it has an L, it is the lumbar spinal cord and maps to the lumbar map.

  3. I provided two types of .nii/.nii.gz files. One set represents the fully scanned paper map for each level of the spinal cord (SpinalCord_Cervical.nii, SpinalCord_Thoracic.nii, and SpinalCord_Lumbar.nii). The others (SpinalCord_Cervical_Edge.nii.gz, SpinalCord_Thoracic_Edge.nii.gz, and SpinalCord_Lumbar_Edge.nii.gz) will likely be more useful, as they remove the white background from the scanned paper map and have just the white matter edge and grey matter edge masked out; these therefore correspond to what is already masked in the .geojson files. The sizes of the maps (320x180 for cervical and 300x240 for both thoracic and lumbar) are in pixels, to my understanding (a quick check is sketched below).
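For reference, a quick way to confirm those pixel dimensions from the .nii/.nii.gz files (assuming they load as single-slice volumes):

```python
import nibabel as nib
import numpy as np

# Load the edge template and drop any singleton dimensions before checking its shape.
template = nib.load("SpinalCord_Cervical_Edge.nii.gz")
print(np.squeeze(template.get_fdata()).shape)  # expecting roughly 320x180 for cervical
```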

Do not hesitate to let me know if you have any questions or whether I can provide any other types of files.

PS: Once I have some spatial transcriptomics data, I'll try your advice and see how that works.

As always, thank you so much for all your help!

abadgerw commented 10 months ago

One question related to this:

The images and annotations I shared above currently only have the white and grey matter outlined, to make life easier, as these are the annotations that will be used for registration to the standard space template. However, can additional annotations or cell detections be added to the sections being registered without them interfering with the registration to the standard space? The ultimate goal is to have heatmaps of cell densities and lesioned areas across many cases, given that they'd all be in the same coordinate space on the paper template.
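For what it's worth, once detections from each case have been warped into template coordinates (e.g. with warp_xy_from_to), the kind of cross-case density heatmap I have in mind could be built by simply binning them on the template grid; a plain-numpy sketch, with the function and variable names made up for illustration:

```python
import numpy as np


def density_heatmap(warped_xy_per_case, template_wh=(320, 180)):
    """Sum per-case 2D histograms of warped (x, y) detections on the
    template grid; template_wh is (width, height) in pixels."""
    heatmap = np.zeros(template_wh[::-1])          # rows = y, cols = x
    for warped_xy in warped_xy_per_case:           # each an (N, 2) array of (x, y)
        counts, _, _ = np.histogram2d(
            warped_xy[:, 1], warped_xy[:, 0],
            bins=heatmap.shape,
            range=[(0, template_wh[1]), (0, template_wh[0])],
        )
        heatmap += counts
    return heatmap
```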

abadgerw commented 10 months ago

@cdgatenbee hope all is well! Just wanted to see if you needed any additional images, etc.

As always, thanks for all your help!

cdgatenbee commented 10 months ago

Hi @abadgerw, Apologies for the delay and lack of updates. I was able to set up a pipeline that uses the annotations for registration, but unfortunately ran into some trouble getting the registration to produce good results. It may just be a matter of selecting different feature detectors/descriptors, or perhaps performing the rigid registration using a metric instead (e.g. maximizing mutual information), so I haven't lost hope. I think that once that is working, we can see if adding annotations interferes with the registration. If nothing else, it could be that the white and grey matter maps are used for registration, and then those transforms are applied to the full set of annotations to register everything to the standard space. However, we're having a big workshop this week, so unfortunately I won't be able to work on this until sometime next week. I'll keep you updated.
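For example, one variation I'm planning to try is sketched below, using options valis already exposes (a different feature detector plus a mutual-information-based affine optimizer); the input directory name is a placeholder and this hasn't been tested on your data yet.

```python
from valis import affine_optimizer, feature_detectors, registration

# Try a different feature detector/descriptor and refine the rigid transform by
# maximizing mutual information rather than relying on feature matches alone.
registrar = registration.Valis(
    "path/to/annotation_images",   # hypothetical directory of the drawn annotation images
    "path/to/results",
    feature_detector_cls=feature_detectors.KazeFD,
    affine_optimizer_cls=affine_optimizer.AffineOptimizerMattesMI,
)
rigid_registrar, non_rigid_registrar, error_df = registrar.register()
```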

Best, -Chandler

abadgerw commented 10 months ago

Thanks, @cdgatenbee! No worries! Enjoy the workshop and do not hesitate to let me know if you need any more data or annotations to play with.

abadgerw commented 9 months ago

@cdgatenbee Hope you had a great Thanksgiving! I just wanted to see if you had any further luck with this?

cdgatenbee commented 9 months ago

Hi @abadgerw, I hope you had a great Thanksgiving as well! Unfortunately, while this is something I'd like to work on, between my other projects, github issues, and workshops/conferences, I just haven't been able to put in the time needed. I haven't forgotten about it though, and will try to find some time in the next week or two to work on this. Thanks again for your patience.

Best, -Chandler

abadgerw commented 9 months ago

No worries @cdgatenbee! Totally understandable. Thank you for the update. As always, do not hesitate to let me know if you need anything on my end for troubleshooting.

abadgerw commented 7 months ago

@cdgatenbee Happy New Year! I just wanted to check in and see if you had come up with any additional thoughts on how best to approach this? Happy to help in any way I can. As always, thanks for all your help!

abadgerw commented 4 months ago

@cdgatenbee hope you are doing well! I just wanted to see if you had any spare time to look at this further? Happy to provide more test data and/or try out code in progress. As always, thanks for your help!

cdgatenbee commented 4 months ago

Hi @abadgerw, So sorry, but unfortunately I haven't been able to make much more progress on this. I thought it would be fairly straightforward, but it's turned out to be quite a bit trickier than I'd anticipated. I was able to get to where I can use the annotations as the images for the registration (aligning those to the reference annotation), but they seem different enough that the rigid registration usually fails. So, I may have to rethink how to do this. The issue is that there are several other projects that I need to work on (the ones I have funding for), and this is more of a project/new feature than an issue. Given all of that, it's hard to say exactly when I'll be able to get back to this, even though it's still something I'd like to work on. If you don't mind my asking, does this project have a particular timeline?

Best, -Chandler

abadgerw commented 3 months ago

@cdgatenbee No worries! Totally understandable. We have actually just incurred a new delay in getting some of the data we need for the full project. We should have that data by October, but we are flexible and can revisit implementation once we have the data in hand and your plate is less full. As always, thank you so much for all your help and for such a great resource!