ludvigla / semla


x-y co-ordinates from data types #24

Open cathalgking opened 5 months ago

cathalgking commented 5 months ago

Hi. I have standalone Visium data and Mass spec data (Bruker) from the same tissue block (mouse colon). Would it be possible to use the semla package to get common x-y co-ordinates from both datasets? I would like to overlay the Mass spec data on top of the Visium data and extract all of the measurements from mass spec that would be present in a Visium spot. Is something like this possible? Or would there be another way to do this? Thanks

lfranzen commented 5 months ago

Hey @cathalgking, that's a really interesting set of data you have :) Unfortunately, we have not implemented any means to easily perform common coordinate alignment in semla, mainly because it isn't very straightforward to do. If I understood you correctly, you have different tissue sections with Visium and mass spec imaging data (but from the same biopsy), which means you'll first need to align the two sections and account for any distortion between them (such as tissue tearing or stretching). A few approaches for doing this are available from our research lab if you want to check them out: eggplant and ELD. However, the downstream integration of the datasets is not really available with these methods and would need to be done manually afterwards.
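For illustration only: once matching landmark pairs have been picked in the two sections (which is what eggplant/ELD help with), the simplest possible alignment is a least-squares affine fit between the landmark sets. This is a generic sketch in Python/NumPy, not part of semla, and note that a plain affine transform cannot correct the non-linear distortion (tearing, stretching) that the linked methods are designed to handle:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src landmarks onto dst.

    src, dst: (n, 2) arrays of matched landmark coordinates from the
    two sections. Returns a (3, 2) matrix A so that dst ~= [src, 1] @ A.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    ones = np.ones((len(src), 1))
    A, *_ = np.linalg.lstsq(np.hstack([src, ones]), dst, rcond=None)
    return A

def apply_affine(A, pts):
    """Apply the fitted transform to any set of (n, 2) coordinates."""
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A
```

After fitting on the landmarks, `apply_affine` can be used to move all spot/pixel coordinates of one modality into the other's coordinate system as a rough first pass.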

Once you have the two sections roughly aligned, you can identify the nearest neighbours between the spatial data points of each modality in order to pair them. That step can likely be done in semla, but we don't have any prepared methods for doing so in a simple way at the moment. This is, however, something that we are currently looking into and will work on in the coming months, so hopefully we can have something available in the not-too-distant future!

cathalgking commented 4 months ago

Hi @lfranzen Thanks for your reply. I was hoping to just start with the alignment of the 2 datasets and get a set of common co-ordinates, then later maybe look at downstream integration. Can a set of common co-ordinates be generated with any of these approaches? The ELD link does not seem to work for me; could you maybe provide another one? Thanks again.

lfranzen commented 4 months ago

Hi again @cathalgking , Both eggplant and ELD are methods to perform landmark detection across multiple tissue sections so that a common coordinate framework can be created. Apologies if the ELD link didn't work, here's another link to its documentation page: https://eld.readthedocs.io/en/latest/index.html

These methods were developed to tackle the challenging task of aligning two tissue sections to each other when the morphology doesn't match perfectly. I have not worked with the methods myself yet so I cannot give much help when it comes to the downstream processing of the data once the coordinates have been transferred.

Hope you're able to find a solution that works well for your data!

ludvigla commented 4 months ago

Just wanted to add to this conversation.

@cathalgking, we did something similar to what you describe in this paper. You might be able to find something useful in the GitHub repository for that project.

The RunAlignment function can be used to align H&E images from two (or more) Visium data sets. When you align these images, the associated coordinates are also aligned. For your Mass spec data, I assume that you do not have an image to use for alignment. One way to make use of RunAlignment (although a bit hacky) would be to create an image from your Mass spec coordinates and use that for alignment.
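As a sketch of the "create an image from coordinates" idea, the point coordinates can simply be binned into a 2D grid and saved as a grayscale image. This is a generic Python/NumPy illustration (semla itself is an R package); bin count and scaling are arbitrary choices here:

```python
import numpy as np

def coords_to_image(x, y, shape=(512, 512)):
    """Rasterize point coordinates into a 2D intensity image.

    Points are binned into a fixed-size grid; each pixel counts how many
    observations fall inside it. The resulting array can be written out
    as an 8-bit grayscale image (e.g. with PIL) and used as the "tissue
    image" for an image-based alignment step.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # histogram2d's first axis becomes image rows, so pass y first
    img, _, _ = np.histogram2d(y, x, bins=shape)
    # Scale counts to 0-255 so the array can be saved as 8-bit grayscale
    if img.max() > 0:
        img = img * (255.0 / img.max())
    return img.astype(np.uint8)

# Example with random stand-in "mass spec" coordinates
rng = np.random.default_rng(0)
img = coords_to_image(rng.uniform(0, 100, 1000), rng.uniform(0, 100, 1000))
print(img.shape)  # (512, 512)
```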

Once you have the coordinates aligned, you then have to figure out a way to pair observations between Visium and Mass spec. I guess the two modalities have slightly different spatial resolution and "spot arrangement", so this can be quite tricky. But you can use kNN (e.g. from dbscan) to try to pair observations based on spatial proximity.
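The kNN pairing step might look like the following. The comment above suggests the R package dbscan; this sketch uses scipy's `cKDTree` instead as a Python stand-in, and the `max_dist` threshold is a hypothetical parameter you would tune (e.g. from the Visium spot diameter, 55 µm, expressed in the aligned coordinate units):

```python
import numpy as np
from scipy.spatial import cKDTree

def pair_observations(visium_xy, maldi_xy, max_dist):
    """Pair observations across modalities by spatial proximity.

    For each Visium spot, collect the indices of all MALDI pixels that
    fall within max_dist of the spot centre in the shared (aligned)
    coordinate system. Returns {visium_index: [maldi_indices]}, keeping
    only spots that captured at least one MALDI pixel.
    """
    tree = cKDTree(np.asarray(maldi_xy, dtype=float))
    hits = tree.query_ball_point(np.asarray(visium_xy, dtype=float), r=max_dist)
    return {i: idx for i, idx in enumerate(hits) if idx}
```

The MALDI measurements mapped to each spot could then be aggregated (e.g. summed or averaged) to match the Visium spot resolution.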

Cheers, Ludvig

cathalgking commented 4 months ago

Hi @ludvigla thanks for your input here.

I will take a read of your paper. Actually, the Mass spec data does have an image in the form of a TIFF file. Usually, all of the samples are run through MALDI on the same slide, so what I usually do is crop that TIFF file for the sample of interest and then match that up with its companion Visium H&E. Considering that, how would you suggest proceeding? Would the RunAlignment function work in this scenario?

ludvigla commented 4 months ago

If the MALDI coordinates are aligned to the MALDI image, you should be able to use RunAlignment. In other words, you need to have image coordinates for the MALDI data so that you can map the observations to the correct position on the image.

We have a tutorial on our website which goes through how to create a Staffli object. This is the object that holds image data and coordinates. If you figure out how to do this for your MALDI data, you could then create an object with mixed Visium and MALDI data and use RunAlignment to align the Visium data with the MALDI data. And by alignment, I mean put the "spots/pixels" in the same coordinate system. After that, you still have to figure out how to pair observations across the two modalities.
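The "image coordinates for the MALDI data" step boils down to converting the instrument's stage coordinates into pixel positions on the TIFF. A minimal sketch, assuming the image is axis-aligned with the stage so that an origin offset and a pixel size (both of which you would read from your MALDI metadata; the values below are hypothetical) are sufficient; a rotated or flipped image would need a full affine transform instead:

```python
import numpy as np

def map_to_pixels(coords_um, origin_um, um_per_px):
    """Convert stage coordinates (in µm) to image pixel coordinates.

    coords_um: (n, 2) array of MALDI pixel centres in stage units.
    origin_um: stage coordinate of the image's top-left corner.
    um_per_px: physical size of one image pixel.
    """
    coords_um = np.asarray(coords_um, dtype=float)
    return (coords_um - np.asarray(origin_um, dtype=float)) / float(um_per_px)

px = map_to_pixels([[1000.0, 2000.0]], origin_um=[500.0, 500.0], um_per_px=10.0)
print(px)  # [[ 50. 150.]]
```

With coordinates expressed in pixel space of the cropped TIFF, they can be stored alongside the image in whatever structure the alignment step expects (a Staffli object, in semla's case).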