MouseLand / suite2p

cell detection in calcium imaging recordings
http://www.suite2p.org
GNU General Public License v3.0

FEATURE: multi-session ROI matching #933

Open rozmar opened 1 year ago

rozmar commented 1 year ago

Feature you'd like to see:

Hi there!

I'd like to start a discussion on what's the best way to extract the activity of the same cells from multiple sessions. What are the public solutions out there? What would be the best way?

The most suite2p way (what I am doing now) is to register all sessions to the same reference image, then extract ROIs in one go. The problem with this approach is that there are slight Z-shifts from day to day (or within a day): vessels change their size, galvo motors sometimes heat up over time, and as a result the cells move around. Over a few weeks, cells can move around so much that a single ROI cannot really cover them. Non-rigid registration helps to some extent, but after several weeks it fails more and more as the images become increasingly different from session to session.
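Just to make the "register everything to one reference" step concrete, here is a minimal NumPy sketch of rigid alignment by phase correlation. This is an illustration, not suite2p's actual registration code, and the function names are made up for this example:

```python
import numpy as np

def phase_corr_shift(frame, reference):
    """Estimate the integer (dy, dx) shift to apply to `frame` so it
    aligns with `reference`, via phase correlation."""
    r = np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))
    r /= np.abs(r) + 1e-9                     # keep only the phase
    corr = np.real(np.fft.ifft2(r))
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    shape = np.array(corr.shape)
    # wrap peaks in the upper half of the array into negative shifts
    peak[peak > shape // 2] -= shape[peak > shape // 2]
    return int(peak[0]), int(peak[1])

def apply_shift(frame, dy, dx):
    """Apply an integer rigid shift by rolling the array."""
    return np.roll(frame, (dy, dx), axis=(0, 1))
```

In a multi-session pipeline one would estimate the shift of each session's mean image against the day-1 reference and apply it before extraction; suite2p itself does this (plus non-rigid block registration) far more robustly, with subpixel shifts.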

An alternative way would be to define the ROIs on the first session, then propagate them to subsequent sessions with a non-rigid warp, although this would still be sensitive to small Z-shifts, where dendrites go in and out of the plane, or where the shape of the cells changes more than the non-rigid transformation can capture.
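Propagating ROIs through a warp could look roughly like this: push each ROI's pixel coordinates through a dense displacement field estimated between the two sessions' mean images. A toy sketch, assuming the flow field already exists (`warp_roi` is a hypothetical helper, not a suite2p function):

```python
import numpy as np

def warp_roi(ypix, xpix, flow_y, flow_x):
    """Propagate ROI pixel coordinates through a dense displacement
    field. `flow_y`/`flow_x` give, per pixel, the shift from the
    reference session to the target session. Warped coordinates are
    rounded to the nearest pixel and clipped to the field's bounds."""
    ny, nx = flow_y.shape
    new_y = np.clip(np.round(ypix + flow_y[ypix, xpix]).astype(int), 0, ny - 1)
    new_x = np.clip(np.round(xpix + flow_x[ypix, xpix]).astype(int), 0, nx - 1)
    # rounding can create duplicate pixels; keep the unique ones
    coords = np.unique(np.stack([new_y, new_x], axis=1), axis=0)
    return coords[:, 0], coords[:, 1]
```

This captures in-plane deformation only; as noted above, nothing here helps when a dendrite leaves the imaging plane.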

The most precise way would be to detect the cells in each session separately, then pair them session to session based on their proximity and similarity. Is there well-tested code for this purpose out there? (I am inclined to try this; it is probably the easiest to implement.)
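A bare-bones version of such pairing, just to fix ideas: greedy one-to-one matching on centroid distance plus mask overlap (IoU). This is a toy sketch with placeholder thresholds, not a tested package like CellReg:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def pair_rois(masks_a, masks_b, max_dist=10.0, min_iou=0.5):
    """Greedy one-to-one pairing of ROIs from two sessions.
    `masks_a`/`masks_b` are lists of boolean 2-D masks, already
    registered to a common reference. Candidate pairs pass a
    centroid-distance gate, then are accepted in order of
    decreasing overlap."""
    def centroid(m):
        ys, xs = np.nonzero(m)
        return np.array([ys.mean(), xs.mean()])

    candidates = []
    for i, ma in enumerate(masks_a):
        for j, mb in enumerate(masks_b):
            if np.linalg.norm(centroid(ma) - centroid(mb)) > max_dist:
                continue
            iou = jaccard(ma, mb)
            if iou >= min_iou:
                candidates.append((iou, i, j))

    pairs, used_a, used_b = [], set(), set()
    for iou, i, j in sorted(candidates, reverse=True):
        if i not in used_a and j not in used_b:
            pairs.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return pairs
```

CellReg's contribution, beyond something like this, is that it models the distributions of these similarity scores for same-cell vs. different-cell pairs and derives a probabilistic decision rule instead of hard thresholds.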

And somewhere in between would be using the ROIs from one session as seeds for segmentation on the next session, or taking the whole ROIs and just refining them; this was already mentioned in #292 in a slightly different context. Is this something you have been thinking about? This could be useful even for within-session Z-shifts, if someone wants to salvage an important experiment.

Attempted alternative approaches:

I am currently finding the trials across multiple sessions that are certainly in the same plane (by correlating them to a Z-stack), creating a binned movie of these trials (spanning multiple sessions), concatenating it, and running sparsery on it. (https://github.com/rozmar/Suite2p_pipeline/blob/0e07203da4fa8f42cf145c042414ca248fc6ac43/qc_segment.py#L179) It works, but I am not completely satisfied with the results.
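The "find trials that are in the same plane" step boils down to correlating each trial's mean image against the planes of the Z-stack and keeping trials whose best-matching plane is the target one. A minimal sketch of that idea (a hypothetical helper, not the code in the linked pipeline):

```python
import numpy as np

def best_zplane(mean_img, zstack):
    """Return the index of the z-stack plane most correlated with a
    trial's mean image, plus the per-plane Pearson correlations
    (computed on flattened, z-scored pixels)."""
    v = mean_img.ravel().astype(float)
    v = (v - v.mean()) / (v.std() + 1e-9)
    scores = []
    for plane in zstack:
        p = plane.ravel().astype(float)
        p = (p - p.mean()) / (p.std() + 1e-9)
        scores.append(float(np.mean(v * p)))
    return int(np.argmax(scores)), scores
```

Trials would then be accepted when the argmax (or a subpixel interpolation around it) lands on the reference plane.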

Additional Context

No response

janekpeters commented 1 year ago

Dear Márton,

Thanks for raising this question and providing your outlook! Good to know that we are not alone in the search. We have not found a working method yet either, and you seem to be further along in thinking this through than we are, but I am happy to take part in the brainstorming nonetheless. We are working on combining ROIs of adjacent recordings from the same session, as well as combining ROIs from recordings across days.

If I understand your description right, we also tried the "most suite2p way", by simply concatenating the sessions. In addition to what you mentioned, this posed the issue that suite2p's activity-based cell extraction does not deliver the same ROI sets across sessions, since we have found that different paradigms activate different subsets of cells. Combining sessions (especially shorter ones) therefore creates a third subset of cells, which matches neither of the sets generated by extracting from the separate sessions. This is why we are currently attempting a workaround with seeding from Cellpose, similar to what is described in #292, although activity-based ROI extraction would be preferred to generate more accurate masks.

(Side note: in our hands, the Cellpose integrated into suite2p seems to perform much worse (i.e. recognizes nearly no cells) than the main distribution of Cellpose supplied with a simple mean image. We have not found a reason for this yet, but maybe it only occurs on our end?)

We attempted a simple version of your suggestion to pair cells based on their footprint: masks were merged (i.e. their pixels combined additively) if the distance between the two ROI centers was smaller than 0.2 of the smaller ROI's radius and the two masks overlapped by >80%. It works to an extent, but it is indeed sensitive to motion across sessions, i.e. it risks creating oval masks. Aligning the sessions first with a non-rigid warp seems to work better. Perez-Ortega et al. (2021) used a similar approach (https://doi.org/10.7554/eLife.64449).
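For reference, that merge rule can be written down in a few lines. This is a sketch of the heuristic, not our production code; defining the ROI radius as the equivalent-circle radius is an assumption of this example:

```python
import numpy as np

def should_merge(mask_a, mask_b, center_frac=0.2, min_overlap=0.8):
    """Merge rule: centers closer than `center_frac` of the smaller
    ROI's equivalent radius, and intersection over the smaller mask's
    area above `min_overlap`."""
    def props(m):
        ys, xs = np.nonzero(m)
        area = ys.size
        return np.array([ys.mean(), xs.mean()]), np.sqrt(area / np.pi), area

    ca, ra, na = props(mask_a)
    cb, rb, nb = props(mask_b)
    dist = np.linalg.norm(ca - cb)
    overlap = np.logical_and(mask_a, mask_b).sum() / min(na, nb)
    return dist < center_frac * min(ra, rb) and overlap > min_overlap

def merge(mask_a, mask_b):
    """Combine the pixels of the two masks additively (union)."""
    return np.logical_or(mask_a, mask_b)
```

The oval-mask failure mode mentioned above comes directly from the `merge` step: two well-matched but slightly offset round masks union into an elongated one.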

One can of course also quantify ROI similarity by shape rather than surface area, but fitting the corresponding functions may be computationally intensive and error-prone. If one can manage it, though, it may allow for more flexible, non-linear transformations that could capture morphology- and Z-shift-induced ROI changes. Ideally one would combine shape, area, and temporal information across sessions, but it may be tricky to correctly integrate and weigh these modalities, since the reliability of each varies with the recording conditions; a Bayesian approach comes to mind.
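The simplest placeholder for such a combination, short of a full Bayesian treatment, is a weighted sum of normalized cues. Purely illustrative; the cue definitions and weights below are made up and would need to be fitted to data:

```python
import numpy as np

def combined_similarity(spatial_iou, area_ratio, trace_corr,
                        weights=(0.5, 0.2, 0.3)):
    """Toy weighted combination of three match cues, each mapped to
    [0, 1]: spatial IoU, area ratio (smaller/larger), and activity
    trace correlation (rescaled from [-1, 1]). Weights are
    placeholders, not fitted values."""
    cues = np.array([spatial_iou, area_ratio, (trace_corr + 1.0) / 2.0])
    w = np.array(weights, dtype=float)
    return float(np.dot(w, cues) / w.sum())
```

A Bayesian version would instead estimate, for each cue, the likelihood of the observed value under "same cell" vs. "different cell" (which is essentially what CellReg does for centroid distance and spatial correlation) and multiply those likelihoods rather than hand-weighting scores.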

I am curious to hear your and others' thoughts.

Cheers, Janek

trose-neuro commented 1 year ago

check out cellreg (Sheintuch/Ziv): https://github.com/zivlab/CellReg/issues/12

rozmar commented 1 year ago

Hi @janekpeters, thank you for your thoughts and the reference; I think we are pretty much on the same page. I am looking for something that can be applied at scale, and CellReg is indeed a really good idea; transcribing it to Python, to open up development and maintenance to the larger community, would be a worthwhile effort. I will continue this search in the CellReg community, since maybe someone has already started on it. I will update this thread if there is news, but would be glad to hear more thoughts.

camille-lab commented 7 months ago

Hi all! Related to this: is there any gold standard for longitudinal imaging? Here is the procedure I was thinking of implementing:

How does that sound? I have seen a paper where they correlate a few landmark cells, and not the whole FOV. What do you guys think?

Since I am doing multiple planes, this will involve taking a stack in ScanImage with the same tilt as for activity imaging. Does anyone have a script for that?

Thank you! Camille

marius10p commented 7 months ago

There is a good drift-correction module in ScanImage, based on suite2p, which does something like this but automatically and with a nice GUI. We made it available in the free version of ScanImage; it's called MariusMotionCorrector and Estimator.

camille-lab commented 7 months ago

Thank you for your answer, Marius! So you would take a Z-stack on day 1 and use it as a reference for the following days? And manually move along the Z-axis to the peak of the cross-correlation?