MouseLand / suite2p

cell detection in calcium imaging recordings
http://www.suite2p.org
GNU General Public License v3.0

ROI masks generated by Cellpose powered anatomical_only mode in Suite2p are very different from those generated by native Cellpose #1031


dnsdnsdnsdns commented 1 year ago

This issue was briefly mentioned in #933 and #292, but I just wanted to bump it again.

Essentially, I am trying to detect neurons and their activity in GCaMP calcium imaging experiments. The registration in Suite2p was perfect and superior to every other motion-correction option I tried. However, functional segmentation did not work well in my case because I need to detect all the cells, including those that never fire throughout the entire time series. The imaging conditions were set up so that the GCaMP baseline fluorescence was on the lower end to improve the signal-to-noise ratio, and a typical recording consists of fewer than 20 time points. So for these silent cells there isn't much information for temporal correlation, and in turn for functional segmentation.

To overcome this, before becoming aware of any automated approaches to cell segmentation, I would usually take a high-resolution, overexposed snapshot of the field at baseline. That way I can more clearly see the cell boundaries and draw ROIs manually. Then I came across Cellpose and tried it on those overexposed baseline images. It already worked really well with pretrained models and required only minimal manual adjustments, so I was excited to see that Cellpose anatomical segmentation was already integrated into Suite2p.

But when I tried it in Suite2p, I could not get the same ROI masks as those obtained through native Cellpose. I first realized there was a 50-frame requirement no matter which ROI detection method I chose, but I was able to get around that: I usually have four separate evoked-stimulus experiments conducted on the same field of view, for a total of around 80 frames, and in addition to those frames I included an overexposed baseline. I played around with the parameters shown in the GUI and tried to match them with the native Cellpose settings I used, to no avail. The Suite2p anatomical-only mode always gives me fewer ROIs, and surprisingly, the fewest when I attempt to find masks on the "maximum projection image," which I would assume is closest to running the overexposed baseline through native Cellpose. In fact, I get the most ROIs with "max projection image divided by mean image," which is somewhat functional in a sense.
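For anyone comparing settings, these are the Cellpose-related `ops` entries I believe Suite2p exposes for anatomical detection. The key names and mode meanings below reflect my reading of Suite2p's settings and should be checked against the version you have installed; the default values shown here are illustrative, not authoritative.

```python
# Sketch of the Cellpose-related ops for Suite2p's anatomical detection.
# Key names and mode meanings are assumptions to verify against your
# installed Suite2p version; values shown are illustrative.
ops_cellpose = {
    "anatomical_only": 4,       # 1: max proj / mean img, 2: mean img,
                                # 3: enhanced mean img, 4: max proj
    "diameter": 0,              # 0 lets Cellpose estimate cell diameter
    "cellprob_threshold": 0.0,  # same meaning as in native Cellpose
    "flow_threshold": 1.5,      # may differ from native Cellpose's 0.4
    "spatial_hp_cp": 0,         # high-pass filtering before Cellpose
    "pretrained_model": "cyto", # which Cellpose model is used
}
```

Comparing these against the values you use in native Cellpose (especially `flow_threshold` and `cellprob_threshold`) is a good first step when the two tools disagree on mask counts.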

The next thing I tried was to duplicate the overexposed baseline image 50 times and run Suite2p on that stack, again selecting anatomical-only ROI detection. It performed registration successfully but could not find any ROIs. This makes me think that there is still a functional component in this "anatomical only" cell detection mode, and that it differs from native Cellpose. Obviously there are many more parameters not shown in the GUI, and I wonder whether their default values differ. Maybe the pretrained model it grabs is not up to date; I don't know. Or maybe I'm not understanding the underlying concept correctly.

For me, the manual workaround is to first register the experimental frames together with the overexposed baseline frame in Suite2p, then run native Cellpose on the registered baseline frame, and finally apply the ROI masks to the experimental frames to get the fluorescence traces in something like Fiji. There are obviously coding solutions to this, such as those mentioned in #292, but I don't know how to use them correctly. @neurochatter I just wanted to kindly ask whether the script you provided may be useful in my case and, if so, how I could actually run it (as a coding beginner). Thank you so much.
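The last step of that workaround, applying the masks to the registered frames, can also be done in Python instead of Fiji. A minimal numpy sketch, assuming `masks` is the integer label image Cellpose returns (0 = background) and `frames` is the registered movie as a (T, Ly, Lx) array:

```python
import numpy as np

def extract_traces(frames, masks):
    """Mean fluorescence per ROI per frame.

    frames: (T, Ly, Lx) registered movie
    masks:  (Ly, Lx) integer label image from Cellpose (0 = background)
    returns (n_rois, T) array; row i corresponds to label i + 1
    """
    n_rois = int(masks.max())
    flat = frames.reshape(frames.shape[0], -1)   # (T, Ly*Lx)
    traces = np.empty((n_rois, frames.shape[0]))
    for i in range(1, n_rois + 1):
        idx = np.flatnonzero(masks.ravel() == i)  # pixels of this ROI
        traces[i - 1] = flat[:, idx].mean(axis=1)
    return traces
```

This gives raw mean traces only; it does not do Suite2p-style neuropil subtraction, which is one reason feeding the masks back into Suite2p's own extraction (as in #292) is the nicer long-term solution.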

neurochatter commented 1 year ago

Yes, I believe my script should be very applicable to this problem! We have some differences in our input data, but hopefully it should be possible to modify my workflow to use your data. The main difference between our problems right now is that you want to use a separate overexposed reference image to extract footprints from Cellpose. In my original script, I use an output from an initial run of Suite2p as the reference image I feed to Cellpose.
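For context, the core of that feed-masks-back idea, as I understand the #292 approach (this is not the original script), is converting each Cellpose label into Suite2p's per-ROI stat format with `ypix`, `xpix`, and `lam` pixel weights. A rough sketch, with field names that should be verified against your Suite2p version:

```python
import numpy as np

def masks_to_stat(masks):
    """Convert a Cellpose label image into a list of Suite2p-style ROI
    dicts ('ypix', 'xpix', uniform 'lam' weights). Field names follow
    Suite2p's stat convention as I understand it; verify before use."""
    stat = []
    for label in range(1, int(masks.max()) + 1):
        ypix, xpix = np.nonzero(masks == label)
        if ypix.size == 0:
            continue  # labels can be non-contiguous after manual edits
        stat.append({
            "ypix": ypix,
            "xpix": xpix,
            "lam": np.ones(ypix.size, dtype=np.float32) / ypix.size,
        })
    return stat
```

Suite2p's extraction can then be run over these stat entries to get F and Fneu traces with its usual neuropil handling.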

This difference should be easy to work around, but one important thing to consider is registration and frame size. Since I am using an output from Suite2p's motion-corrected video of my session, I know this reference frame is the same size as, and registered correctly to, all other frames in my video. Thus the footprints I extract in Cellpose based on the reference will align well with the rest of the video. As I understand it, your baseline image is acquired separately from your calcium video, so we need to make sure the baseline image is aligned correctly to the video.

You mentioned that you already register the experimental frames together with the baseline frame using Suite2p. If this is the case, it should be easy to extract the overexposed baseline out of this registered Suite2p output, feed it to Cellpose, and then give the resulting segmented image back to Suite2p. Could you give me a bit of detail on how you get Suite2p to register the baseline frame with the experimental frames? Do you combine the raw videos in Fiji or something, and then feed this stack to Suite2p?
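If the baseline frame does end up inside Suite2p's registered binary (`data.bin`, which as far as I know is stored as raw int16 frames of shape Ly × Lx, with Ly/Lx recorded in `ops.npy`), pulling it out is straightforward with a memmap. A sketch under those assumptions, where the frame index depends on where the baseline sits in your stack:

```python
import numpy as np

def read_frame(bin_path, Ly, Lx, index):
    """Read one frame from a Suite2p registered binary, assuming the
    file is raw int16 with frames stored contiguously as Ly x Lx."""
    mov = np.memmap(bin_path, dtype=np.int16, mode="r").reshape(-1, Ly, Lx)
    return np.array(mov[index])  # copy the frame out of the memmap
```

Typical usage would be something like loading `ops = np.load(".../plane0/ops.npy", allow_pickle=True).item()` and calling `read_frame(".../plane0/data.bin", ops["Ly"], ops["Lx"], -1)` if the baseline was the last frame; the extracted frame can then go straight into Cellpose.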

If you're able to provide some example data, I can quickly take a look and see what the easiest way to adapt the script from #292 would be. If you have a registered output from Suite2p containing the experimental frames and the baseline frame, that would be a good start. Alternatively, raw data could work (a short experimental video plus the reference frame). Let me know if you're comfortable sharing this! If not, we can work out another solution.