MouseLand / suite2p

cell detection in calcium imaging recordings
http://www.suite2p.org
GNU General Public License v3.0

using suite2p for 1p data #110

Closed DenisPolygalov closed 5 years ago

DenisPolygalov commented 6 years ago

Hi,

I'm trying to use the suite2p Python version to process the 'demo_data.tif' file from this page: https://github.com/JinghaoLu/MIN1PIPE/tree/master/demo but I'm getting results that make no sense whatsoever. The file is 1-photon data. I have read the suite2p documentation and the comments in the code, and already tried changing parameters such as reducing ratio_neuropil to 2, reducing high_pass to 10 and increasing smooth_sigma to 6, with no luck... Is suite2p not suitable for 1P recordings, or is it possible to adjust the parameters in a way that produces a meaningful result for the tif file mentioned above?

Regards, Denis.

marius10p commented 6 years ago

Thanks. Are you using the github version? We recently pushed an update that improves performance on 1p data. We're still working out the parameters for 1p and we don't have a lot of testing data, so thanks for the link.

DenisPolygalov commented 6 years ago

> Are you using the github version?

Yes, I am at the edge of your master branch and tracking all the changes. I'm using the Python version of suite2p only.

> we don't have a lot of testing data

Here you can get more:

https://github.com/PeyracheLab/miniscoPy/blob/1e8bbf99c1ada5d742b04e50b60181e67e9d9574/main_cnmf_e.py#L22

https://github.com/PeyracheLab/miniscoPy/blob/1e8bbf99c1ada5d742b04e50b60181e67e9d9574/main_test_motion_correction.py#L30

neurodroid commented 5 years ago

We're running into similar issues with 1-photon widefield microendoscope data. To provide some more testing data, here's a particularly challenging example (including some corrupt frames):

https://www.dropbox.com/s/4zbhueu8vsqanaw/20180919_BG0039_sensor0_0003.tif?dl=0 pw is the repository name.

We have not been able to find any parameters that would result in a usable registration. Our impression is that there are a number of immobile components in these microendoscope data, such as vignetting or tissue growth on the GRIN lens, that produce large correlations of the unshifted frames with the reference frame even when there is a lot of tissue motion. The recording software that came with the microendoscope can perform decent offline motion correction, but it's closed source (Doric Neuroscience Studio). Here's the result:

https://www.dropbox.com/s/fwhm1k1tzcepx61/20180919_BG0039_sensor0_0003_ALIGNED.tif?dl=0 pw is the repository name.

We've tried to segment this aligned data set with suite2p, but again, have not found any parameter combination that would yield usable results.

Any help or advice would be appreciated.

DenisPolygalov commented 5 years ago

@neurodroid imho there is currently no good open code available for analyzing Miniscope (i.e. 1p) data. For 2p data the Python version of suite2p is the only solution that produces acceptable results. So in our case I'm ending up writing a custom solution almost from scratch using OpenCV, without any of the "cool" but in reality (self-censored) stuff you can see around.

neurodroid commented 5 years ago

Some news: following a suggestion from Olivier Dupont-Therrien (Doric Lenses Inc), we've been able to sort out the motion correction issue for 1p data by applying a sequence of Gaussian & Laplacian filters before computing the cross correlations. This strategy works perfectly fine on all 1p widefield data that we've tested so far. I've submitted a pull request for sima that includes this algorithm: https://github.com/losonczylab/sima/pull/250 The pull request also contains links to some example data & motion correction results. I've tried to integrate it into suite2p as well, but I'm unsure what a good place would be to integrate the filters.

carsen-stringer commented 5 years ago

awesome! Unfortunately I can't see the corrected tiffs in the pull request without a password :/

if you could write a filt1p.py function that does the operations, we can add a user flag for the filter, import your module, and apply it in the function correlation_map

neurodroid commented 5 years ago

Dropbox password is laplace

What is the function signature that I should use for the filter? In the simplest case where you have a list of 2D frames, the filter takes this shape:

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.ndimage import laplace

def filter1p(framelist, sigma=8.0):
    # Gaussian smooth, Laplacian, then absolute value, per frame
    return [np.abs(laplace(gaussian_filter(frame, sigma)))
            for frame in framelist]

We'd also have to apply the filter to the reference image I suppose?

carsen-stringer commented 5 years ago

thanks, sorry for missing that. The tiff looks good (certainly way better than what suite2p does with 1p data), was that without non-rigid registration? It looks like it could use some non-rigid correction.

We do actually filter with a gaussian in the Fourier domain - this line points to where we prepare the function: https://github.com/MouseLand/suite2p/blob/9a1c69d90a6d987d553bc6958a6dc2874a416835/suite2p/register.py#L57

We use 'smooth_sigma' of 1.15 for 2P data. You can change that option in the GUI. So all that's needed is the Laplacian filter; a function like that would be good, but is there any way to do it in the Fourier domain for speed reasons?

and we can link all these parameters to an ops default for 1P, if you also want to turn off phase-correlation for instance
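Since the Laplacian is linear, it can in principle be applied as a pointwise multiplication in frequency space, like the Gaussian filter already is. A minimal numpy sketch of that idea (not suite2p code; it assumes a single 2D frame and circular boundary conditions, unlike scipy's reflect mode):

```python
import numpy as np

def fft_laplace(frame):
    """Apply the 5-point Laplacian stencil via the FFT (circular boundary)."""
    h, w = frame.shape
    # Embed the discrete Laplacian kernel in a full-size array,
    # centered at (0, 0) so the convolution theorem applies directly.
    kern = np.zeros((h, w))
    kern[0, 0] = -4.0
    kern[0, 1] = kern[0, -1] = kern[1, 0] = kern[-1, 0] = 1.0
    # Pointwise multiply in frequency space, then invert.
    return np.real(np.fft.ifft2(np.fft.fft2(frame) * np.fft.fft2(kern)))
```

In practice the kernel's FFT would be precomputed once and fused with the existing Gaussian mask, so the per-frame cost is unchanged.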

carsen-stringer commented 5 years ago

also I can add the division by 2 to make it equivalent to the gaussian

neurodroid commented 5 years ago

> is there any way to do it in the fourier domain for speed reasons?

I suppose there is, but it's beyond me. See here for an example: https://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm

For the implementation in scipy.ndimage.laplace, if I understand correctly, they compute a correlation with the kernel [1, -2, 1] along each axis individually: https://github.com/scipy/scipy/blob/v1.2.1/scipy/ndimage/filters.py#L413

Which is apparently equivalent to using one of the 2D kernels proposed in the first link: https://stackoverflow.com/questions/32768407/what-is-the-laplacian-mask-kernel-used-in-the-scipy-ndimage-filter-laplace
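The equivalence is easy to check: summing the per-axis [1, -2, 1] second-difference kernel over both axes reproduces the classic 5-point 2D Laplacian stencil (a quick numpy illustration, not scipy's actual code path):

```python
import numpy as np

# The per-axis second-difference kernel scipy correlates with:
k1d = np.array([1.0, -2.0, 1.0])

# Summing it along both axes of a 3x3 array gives the 2D stencil:
k2d = np.zeros((3, 3))
k2d[1, :] += k1d   # horizontal pass
k2d[:, 1] += k1d   # vertical pass

print(k2d)
# [[ 0.  1.  0.]
#  [ 1. -4.  1.]
#  [ 0.  1.  0.]]
```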

DenisPolygalov commented 5 years ago

FYI, this combination of Gaussian and Laplacian looks like a poor man's version of so-called "edge-preserving smoothing":

https://en.wikipedia.org/wiki/Edge-preserving_smoothing https://pdfs.semanticscholar.org/d031/ea7b0b3c7a8cb4c58fd77f3fa332c5bd2b04.pdf https://arxiv.org/abs/1503.07297

Much more efficient and fast methods exist: https://arxiv.org/abs/1505.00996 http://kaiminghe.com/eccv10/

The output of such a filter can be (relatively) easily corrected by OpenCV's findTransformECC() and warpAffine() methods (rigid motion only). Not guaranteed to work with any possible data from any lab, however...

carsen-stringer commented 5 years ago

I see, the Laplace filter is a linear filter, so we could do that in the Fourier domain fine. @DenisPolygalov do you have a recommendation for an edge-preserving filter in Python that's fast and pip-installable? It seems one isn't included in opencv proper.

DenisPolygalov commented 5 years ago

@carsen-stringer here it is. If you prefer functional programming, just take the process_frame() method and use it as a function returning the na_out numpy array.


import numpy as np
import cv2 as cv

class CFastGuidedFilter(object):
    """
    Edge-preserving spatial smoothing filter.
    https://arxiv.org/abs/1505.00996
    http://kaiminghe.com/eccv10/
    """
    def __init__(self, i_filter_sz, f_eps=0.01):
        self.i_filter_sz = int(i_filter_sz)
        self.f_eps = f_eps
        self.t_filter_sz = ((2 * self.i_filter_sz) + 1, (2 * self.i_filter_sz) + 1)
        # main data exchange interface for this class
        self.na_out = None

    def process_frame(self, na_input):
        na_I2 = cv.pow(na_input, 2)
        mean_I  = cv.boxFilter(na_input, -1, self.t_filter_sz)
        mean_I2 = cv.boxFilter(na_I2,    -1, self.t_filter_sz)

        cov_I = mean_I2 - cv.pow(mean_I, 2)

        na_a = cv.divide(cov_I, cov_I + self.f_eps)
        na_b = mean_I - (na_a * mean_I)

        mean_a = cv.boxFilter(na_a, -1, self.t_filter_sz)
        mean_b = cv.boxFilter(na_b, -1, self.t_filter_sz)

        self.na_out = (mean_a * na_input) + mean_b

DenisPolygalov commented 5 years ago

[image: "f2f" plot of inter-frame distance]

Above is the inter-frame distance (in pixels) for the raw data file (20180919_BG0039_sensor0_0003.tif, first 1024 frames only). Super shaky... Maybe the base plate is just not attached well to the skull? I'm wondering what can be used as the "reference frame to align to" in this case...

DenisPolygalov commented 5 years ago

Above are frame number 759 from the input file ( 20180919_BG0039_sensor0_0003.tif ) and the same frame from the corrected file ( 20180919_BG0039_sensor0_0000_corr_sima_mc_lp8_final.tiff ), both loaded into ImageJ, so no extra code/processing was involved on my side. They don't look the same to me... @neurodroid Is there any nontrivial temporal relationship between input and output? I thought all 1024 frames of the output file correspond to the first 1024 frames of the input file, no?

ah, that was just the wrong input file. So many different files... My bad, sorry.

carsen-stringer commented 5 years ago

so @DenisPolygalov you think this is the best solution? Do you want to make a pull request in which you add this function in a separate file? I don't want to just copy your function; we can import it instead. I like the class structure.

DenisPolygalov commented 5 years ago

@carsen-stringer I'm not sure how to answer your question properly, but I'll try... The code for the CFastGuidedFilter class above is not a complete solution for 1p data registration. It is just a filter (as you asked above), i.e. part of a solution. The whole solution I found also includes findTransformECC(), warpAffine() and other OpenCV methods absent in numpy/scipy.

Most importantly, it relies heavily on OpenCV, and as far as I understand suite2p currently doesn't have OpenCV as a dependency; I am certainly not the one who can decide to add it :) Some suite2p users might not be happy to find an extra heavy dependency, and adding OpenCV might break other things, so it is the designer's decision. You or anyone else are always free to re-use my code, it is GPL anyway.

I opened this issue, but I was unable to find a registration method that could be easily integrated into suite2p, so I decided to write code from scratch. My code will be released (I hope soon) separately. So the question is how desirable it is (for other users!) to have 1p data processing in suite2p and/or to add OpenCV as a dependency. I can also write down a memo with the more detailed OpenCV method calling sequence I'm using for 1p frame registration, but we need someone to spend time integrating it into suite2p and doing the testing... :`(
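For what it's worth, the filtering step on its own doesn't strictly need OpenCV. A hypothetical numpy-only re-implementation of the same self-guided box-filter scheme (not the CFastGuidedFilter code above; edge handling here is simple replication) could look like:

```python
import numpy as np

def _box_mean(img, r):
    # Edge-replicated (2r+1) x (2r+1) box mean, numpy only.
    pad = np.pad(img, r, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(pad, (2 * r + 1, 2 * r + 1))
    return win.mean(axis=(2, 3))

def guided_smooth(img, r=4, eps=0.01):
    """Self-guided filter: the image acts as its own guide."""
    mean_i = _box_mean(img, r)
    var_i = _box_mean(img * img, r) - mean_i ** 2
    a = var_i / (var_i + eps)   # near 1 at strong edges, near 0 in flat regions
    b = mean_i - a * mean_i
    return _box_mean(a, r) * img + _box_mean(b, r)
```

The sliding-window box mean is O(window area) per pixel, whereas cv.boxFilter is O(1) per pixel, so this sketch trades speed for dependency-freedom.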

carsen-stringer commented 5 years ago

I've been playing with basic high-pass filtering and things seem to be working well. @neurodroid your tiff still isn't perfect - I'm sorry, I couldn't reproduce the shift you found at that huge movement (even with the laplacian) - but otherwise it looks decent. It's all on the sparse_mode branch. Here is an explanation of the parameters that I changed from default:

'spatial_taper' is how much to exclude from the edges (this is very experiment dependent)

'spatial_hp' is the window for high-pass filtering in pixels (I'd say 30-50 depending on FOV)

'pre_smooth' is how much to gaussian smooth before taking FFTs (not usually necessary)

'smooth_sigma' is how much to gaussian smooth in the FFT space (at least 2 for 1P)

'1Preg' = 1 turns on these options

'snr_thresh' is when to accept non-rigid shifts as a function of SNR, I'd recommend 1.5 for 1P

These are the parameters I found for various available online datasets:

# for cnmfe tiff
fs = ['/media/carsen/DATA1/TIFFS/demo_cnmfe/data_1p.tif']
ops = {'smooth_sigma': 3, 'block_size': [64, 64], 'snr_thresh': 1.5,
       'maxregshiftNR': 8, 'spatial_hp': 30, 'pre_smooth': 0, '1Preg': 1,
       'do_phasecorr': True, 'nonrigid': True, 'maxregshift': 0.15,
       'subpixel': 10, 'spatial_taper': 20}

# for minipipe tiff
fs = ['/media/carsen/DATA1/TIFFS/demo_minipipe/demo_data.tif']
ops = {'smooth_sigma': 3, 'block_size': [64, 64], 'snr_thresh': 1.5,
       'maxregshiftNR': 8, 'spatial_hp': 30, 'pre_smooth': 0, '1Preg': 1,
       'do_phasecorr': True, 'nonrigid': True, 'maxregshift': 0.15,
       'subpixel': 10, 'spatial_taper': 20}

# for neurodroid
fs = ['/media/carsen/DATA1/TIFFS/neurodroid/20180919_BG0039_sensor0_0000_corr.tif']
ops = {'smooth_sigma': 5, 'block_size': [300, 300], 'snr_thresh': 1.5,
       'maxregshiftNR': 5, 'spatial_hp': 50, 'pre_smooth': 2, '1Preg': 1,
       'do_phasecorr': True, 'nonrigid': True, 'maxregshift': 0.15,
       'subpixel': 10, 'spatial_taper': 60}

# for miniscopy tiff
fs = ['/media/carsen/DATA1/TIFFS/demo_miniscopy/msCam1.tif']
ops = {'smooth_sigma': 6, 'block_size': [200, 300], 'snr_thresh': 1.5,
       'maxregshiftNR': 8, 'spatial_hp': 50, 'pre_smooth': 0, '1Preg': 1,
       'do_phasecorr': True, 'nonrigid': True, 'maxregshift': 0.15,
       'subpixel': 10, 'spatial_taper': 80}

carsen-stringer commented 5 years ago

also, when running cell extraction, you'll want to play with 'threshold_scaling' and 'max_overlap' if you're getting too many ROIs (increase 'threshold_scaling' if there are just too many, and decrease 'max_overlap' if ROIs overlap too much).
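As a minimal sketch of tuning those two extraction knobs (the keys come from this thread; the values and the too-many-ROIs check are illustrative, not recommendations):

```python
# Starting point: whatever ops dict you already pass to suite2p.
ops = {'threshold_scaling': 1.0, 'max_overlap': 0.75}

too_many_rois = True        # e.g. judged by inspecting the GUI output
if too_many_rois:
    ops['threshold_scaling'] *= 2.0   # raise the detection threshold
    ops['max_overlap'] = 0.5          # discard heavily overlapping ROIs

print(ops)
```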

carsen-stringer commented 5 years ago

this is now implemented in the main branch / pip. Please try the latest version, and see the installation instructions in the readme.

DenisPolygalov commented 5 years ago

Thanks. I would be even happier if you could point me to the specific commit(s) that implement this feature: https://github.com/MouseLand/suite2p/commits/master

carsen-stringer commented 5 years ago

I did this in early March (see earlier messages), idk the exact commit on the sparse_mode branch, but it's just simple spatial high-pass filtering (see the function one_photon_preprocess in register.py)

I didn't find any advantage for using a Laplacian filter and found it was really sensitive to other parameters
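Spatial high-pass filtering of the kind described here can be sketched as subtracting a local mean (a numpy guess at the idea, not suite2p's actual one_photon_preprocess; window size is illustrative):

```python
import numpy as np

def spatial_highpass(img, win=30):
    """Remove slow spatial trends - the broad background typical of 1P
    data - by subtracting a local (2*(win//2)+1)-pixel box mean.
    Edge-replicated padding keeps the output the same size as the input."""
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')
    view = np.lib.stride_tricks.sliding_window_view(
        padded, (2 * pad + 1, 2 * pad + 1))
    return img - view.mean(axis=(2, 3))
```

A flat background maps to zero, and a slowly varying one is strongly attenuated, which is what makes the subsequent phase correlation latch onto cells rather than vignetting.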