First thing that comes to mind: did you try increasing `nb`, the number of background components, and running on the full movie?
On Mon, Jun 26, 2023, 19:23 Alex Loftus @.***> wrote:
I am trying to run the pipeline separately for each of five recording periods, and have been running into a lot of trouble trying to hack things together.
My goals are:
- motion detection and ROI detection should happen a single time for a particular TIF stack
- Then, the TIF stack should be split into five parts. Deconvolution should happen separately on each of the five parts.
My current setup is:
- Load a TIF stack, set up parameters with p=0, and run a modified version of CNMF.fit_file with motion correction enabled and include_eval on. This modified version also runs threshold_spatial_components. Save an HDF5 file.
- Start a loop over slice objects indexing the recording periods I need. Within the loop, load a new CNMF object from the hdf5; then, for each array in CNMF.estimates which has a length equal to the number of frames, index that array to only include the frames I need (so, index C, f, R, YrA, S, and F_dff)
- Still within the loop, run deconvolve with p=2 on each object. Save each mini-CNMF object into a list.
- Load a new CNMF object from hdf5, stack all new C, f, R, YrA, S, and F_dff together, and save as a single final object
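The per-epoch slicing step can be sketched like this, using a plain dict of numpy arrays to stand in for `CNMF.estimates` (the helper name and toy shapes are hypothetical, not part of CaImAn's API):

```python
import numpy as np

def slice_estimates(estimates: dict, frames: slice) -> dict:
    """Return a copy of the estimates dict in which every array whose last
    axis spans the full recording is restricted to `frames`."""
    n_frames = estimates["C"].shape[-1]
    out = {}
    for name, arr in estimates.items():
        if arr.shape[-1] == n_frames:
            out[name] = arr[..., frames]   # temporal array: slice it
        else:
            out[name] = arr                # e.g. spatial footprints: keep whole
    return out

# toy example: 3 components, 100 frames, 1 background component
est = {
    "C":   np.random.rand(3, 100),   # denoised temporal traces
    "f":   np.random.rand(1, 100),   # background temporal trace
    "YrA": np.random.rand(3, 100),   # residual traces
    "S":   np.zeros((3, 100)),       # deconvolved activity
    "A":   np.random.rand(50, 3),    # spatial footprints (not sliced)
}
epochs = [slice(0, 40), slice(40, 70), slice(70, 100)]
parts = [slice_estimates(est, ep) for ep in epochs]

# stacking the chunks back recovers the full-length arrays
C_full = np.concatenate([p["C"] for p in parts], axis=-1)
```

On the real object the same idea would apply to the `C`, `f`, `R`, `YrA`, `S`, and `F_dff` attributes of `cnm.estimates` before running deconvolution on each chunk.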
I am doing it this way because we have changes in background baseline and other things during each recording period, which was interfering with deconvolution when we tried running the whole thing as one.
I just wanted a sanity check here -- is this the way you guys would do it? If not, is there an easier way to do this that I'm missing?
Thanks!
- A
— https://github.com/flatironinstitute/CaImAn/issues/1121
@kushalkolar Yep -- doesn't look great
So basically you are running ΔF/Fo in chunks and deconvolution in chunks due to the shifting baseline? Is the baseline going down? I wonder if there's a way to improve the detrending, it sounds like that's the issue?
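A generic rolling-percentile detrend, sketched with plain numpy (this mirrors the running-percentile idea behind CaImAn's `detrend_df_f`, but it is not that function, and the window/percentile values here are made up):

```python
import numpy as np

def rolling_percentile(x, window, q=8):
    """Running q-th percentile baseline (window must be odd)."""
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    views = np.lib.stride_tricks.sliding_window_view(xp, window)
    return np.percentile(views, q, axis=-1)

def detrend(trace, window=301, q=8):
    """Rough dF/F: subtract and divide out a running low-percentile baseline."""
    b = rolling_percentile(trace, window, q)
    return (trace - b) / b

# toy trace: slow downward baseline drift plus one transient
t = np.arange(2000)
trace = 10.0 - 0.002 * t
trace[500] += 5.0
df = detrend(trace)
```

Because a low percentile is robust to brief transients, the drift is removed while the transient at frame 500 survives in `df`.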
@kushalkolar We turn the microscope off for times ranging from 5 minutes to 1 hr, then turn it back on, for separate recording sessions. All recording sessions are in the same tif stack. So we get, for instance, big jumps in fluorescence at exactly the moment the microscope turns on, due to shifting baseline. Caiman picks this up as spikes.
Ah, this is pretty common. What if you remove a short period in the beginning of each sub-session so that you don't include the segments with large fluorescence jumps?
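One way to implement that trimming is a boolean frame mask, sketched here with numpy (the session start indices and trim length are hypothetical):

```python
import numpy as np

def keep_frames(session_starts, n_frames, trim=100):
    """Boolean mask over all frames that drops the first `trim` frames
    after each microscope turn-on (session start)."""
    keep = np.ones(n_frames, dtype=bool)
    for s in session_starts:
        keep[s:s + trim] = False
    return keep

# e.g. 3 sessions concatenated in one 3000-frame stack,
# dropping 100 frames at each session start
mask = keep_frames([0, 1000, 2000], 3000, trim=100)
```

The mask could then be used to index the temporal arrays so the large fluorescence jumps at turn-on never reach deconvolution.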
So, current results. Really doesn't look great.
This is with:
```python
{'ITER': 2,
 'bas_nonneg': False,
 'block_size_temp': 5000,
 'fudge_factor': 0.96,
 'lags': 5,
 'optimize_g': False,
 'memory_efficient': False,
 'method_deconvolution': 'oasis',
 'nb': 1,
 'noise_method': 'mean',
 'noise_range': array([0.25, 0.5]),
 'num_blocks_per_run_temp': 20,
 'p': 2,
 's_min': None,
 'solvers': array([b'ECOS', b'SCS'], dtype='|S4'),
 'verbosity': False}
```
I'm open to suggestions. I'm still wondering if there's something obvious I should change.
Just some basic questions: are you still running cnmfe, or have you switched to cnmf?
@EricThomson I switched to cnmf.
I thought so, I just wanted to make sure.
I am not a huge fan of running initially with p=0, as simultaneous deconvolution is one of the strengths of CNMF and can help the solutions (even for C) behave better. I would suggest not deferring deconvolution to a separate step.
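The point about p can be illustrated with the AR(2) model that CNMF assumes for calcium dynamics: with p=0 there is no temporal model constraining C, while with p=2 each trace must follow these dynamics. In the noiseless case, deconvolution is just the inverse AR filter (a toy illustration with made-up coefficients, not CaImAn's OASIS solver):

```python
import numpy as np

# AR(2) calcium dynamics: c[t] = g1*c[t-1] + g2*c[t-2] + s[t]
g1, g2 = 1.7, -0.712          # example coefficients (fast rise, slow decay)
n = 60
s = np.zeros(n)
s[[10, 30]] = 1.0             # two "spikes"

# forward model: spikes drive the calcium trace
c = np.zeros(n)
for t in range(n):
    c[t] = (g1 * c[t - 1] if t >= 1 else 0.0) \
         + (g2 * c[t - 2] if t >= 2 else 0.0) + s[t]

# noiseless deconvolution: apply the inverse AR filter to recover s
s_rec = c - g1 * np.r_[0.0, c[:-1]] - g2 * np.r_[0.0, 0.0, c[:-2]]
```

With noise, a constrained solver (OASIS in the settings above) replaces this naive inversion, but the underlying model is the same.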
Also, deconvolution outputs are really hard to judge, as opposed to C, which is easy to judge with the eyeball test. The first thing I always look at is C, and whether it basically seems reasonable: does the extracted signal basically match the calcium signal in the movie? Do the traces match the signal going up and down in the extracted spatial footprint? Just pick one of the epochs and do this to make sure things seem reasonable. If they do, that suggests one route; if they don't, there is another route we should take.
Edit: for this eyeball test I would use either opencv while inspecting the component viewer, or better yet mesmerize-vis/fastplotlib. The component viewer alone isn't enough to get a sense of the dynamic match.
Closing due to lack of activity.