flatironinstitute / CaImAn

Computational toolbox for large scale Calcium Imaging Analysis, including movie handling, motion correction, source extraction, spike deconvolution and result visualization.
https://caiman.readthedocs.io
GNU General Public License v2.0

Strategies for reducing size/storage - uint8? #1128

Closed · melkalliny closed this issue 1 year ago

melkalliny commented 1 year ago

Your setup:

  1. Operating System (Linux, MacOS, Windows): Windows
  2. Hardware type (x86, ARM..) and RAM: x86, 48gb RAM
  3. Python Version (e.g. 3.9): 3.10.8
  4. Caiman version (e.g. 1.9.12): 1.9.15
  5. Which demo exhibits the problem (if applicable): N/A
  6. How you installed Caiman (pure conda, conda + compile, colab, ..): mamba, miniconda
  7. Details:

I've been working to process 1P data collected by the UCLA Miniscope system. My labmate Will previously opened an issue about reading the .avi files directly (#1095), but for now our workaround is to caiman.load each clip (the data is saved in individual 1000-frame clips), caiman.concatenate them, and save the concatenated file as .tif. To save space, I've been converting the data from float32 to uint8 (which fully represents the data) and saving with tifffile.imwrite. It's not a great solution, and I'm unable to concatenate all the clips from a 60-min session without running out of memory. So question number 1 is whether you have any tips or suggestions for this, short of using a server. One thought is to spatially/temporally downsample each clip (although I believe that's only available as an option when concatenating, not when loading).

For now, to keep testing the CaImAn pipeline, I've been using a 16-min (10GB) concatenation and passing outtype='uint8' each time I caiman.load a file. But as soon as files start getting mem-mapped, they bump back up to ~40GB. Looking into mmapping.py, it looks like this is because everything is done as np.float32. Question number 2 is whether you think it would be a good idea for us to switch everything in mmapping.py to uint8, and whether there are downstream computations we would then need to change or be concerned about.
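For reference, the current workaround looks roughly like this (a sketch; clip_files is a placeholder for the per-clip paths):

import caiman as cm
import numpy as np
import tifffile

# load each 1000-frame clip as uint8 (which fully represents this data)
clips = [cm.load(f, outtype='uint8') for f in clip_files]

# concatenate along the time axis and write a single tif
movie = cm.concatenate(clips, axis=0)
tifffile.imwrite('session_concat.tif', np.asarray(movie, dtype=np.uint8))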

Appreciate y'all's help!

pgunn commented 1 year ago

Hello! Before we start digging into this, it'd help to clarify a point of terminology: is the main pain point storage (disk), RAM usage, or both? If it's RAM usage, memmap should mostly be an adequate solution, and we should be able to help with anything that's not working right (particularly with 48GB of RAM, which is within our targeted system requirements).

melkalliny commented 1 year ago

RAM usage. Concatenating a 60-min session maxes out RAM and eventually fails due to inability to allocate the array.

And then for the workaround of concatenating a shorter video and putting it through mcorr: loading the corrected .mmap files to do any visualizations just takes very long, which I figure might be improved with smaller file sizes/uint8. At one point this was also throwing an error, I believe due to RAM/allocation, but I think I circumvented that by using outtype='uint8', so now it's just a time problem.

kushalkolar commented 1 year ago

Some thoughts:

  1. I think the reason everything is done in float32 memmaps is because a lot of calculations are optimized for 32-bit floats; there might also be some opencv functions in use that require float32 arrays.
  2. Whenever reducing bit depth, make sure that you aren't clipping values. The usual way to do this is to scale the range from the higher bit depth so that all values fit within the range of the lower bit depth (see the sketch after this list). I don't think ndarray.astype(np.uint8) does this (at least the last time I tried, years ago, you had to scale manually).
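A minimal NumPy sketch of rescale-then-cast (the min/max normalization here is illustrative; other scalings are possible):

import numpy as np

def to_uint8(arr: np.ndarray) -> np.ndarray:
    # a plain arr.astype(np.uint8) truncates/wraps out-of-range values
    # instead of rescaling, so map the full range into [0, 255] first
    lo, hi = float(arr.min()), float(arr.max())
    scaled = (arr - lo) / (hi - lo) * 255.0
    return scaled.astype(np.uint8)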

Have you tried reducing n_processes? CNMF will then run on fewer patches in parallel and thus use less RAM; something like the snippet below.
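A minimal sketch (the backend name follows the standard demos; the n_processes value is just an example):

import caiman as cm

# restart the cluster with fewer workers: fewer patches run in
# parallel, trading speed for lower peak RAM
c, dview, n_processes = cm.cluster.setup_cluster(
    backend='local', n_processes=4, single_thread=False)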

melkalliny commented 1 year ago

Thanks Kushal. I'll double check the bit depth point, and will try to stick with the float32 memmaps, although the size issue is beginning to seem like a barrier to even getting to the CNMF.

We were able to get our full concatenated file into caiman by concatenating the .avi clips with ffmpeg, loading the result with caiman.load, saving it as .tif with tifffile, and then loading that .tif with caiman.load.
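For reference, a sketch of that ffmpeg concatenation step driven from Python (paths and file names are placeholders; -c copy assumes all clips share the same codec and parameters):

import glob
import subprocess

clips = sorted(glob.glob('session1/*.avi'))
with open('clips.txt', 'w') as f:
    for c in clips:
        f.write(f"file '{c}'\n")

# concat demuxer with stream copy: no re-encoding
subprocess.run(['ffmpeg', '-f', 'concat', '-safe', '0',
                '-i', 'clips.txt', '-c', 'copy', 'session1_concat.avi'],
               check=True)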

So I loaded that ~37GB tif file in and was able to motion correct, but it took >1 hr and generated a ~140GB memmap file. When trying to save the memmap file in order 'C', I run into: "MemoryError: Unable to allocate 144. GiB for an array with shape (107227, 600, 600) and data type float32"

Maybe there's a way to spatially and temporally downsample the .tif file as it's being loaded?

kushalkolar commented 1 year ago

Ah these seem like very long duration recordings?

Anyways, when RAM is seriously an issue, such as 140GB memmaps, you should try the online algorithm OnACID-E instead: https://github.com/flatironinstitute/CaImAn/blob/main/demos/notebooks/demo_online_cnmfE.ipynb
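A rough sketch of the setup, based on that notebook (parameter values here are placeholders to tune, not recommendations):

from caiman.source_extraction import cnmf

params_dict = {
    'fnames': ['session_concat.tif'],
    'method_init': 'corr_pnr',  # CNMF-E style initialization
    'center_psf': True,
    'motion_correct': True,
    'init_batch': 200,          # frames used for initialization
    'epochs': 1,
}
opts = cnmf.params.CNMFParams(params_dict=params_dict)
cnm_online = cnmf.online_cnmf.OnACID(params=opts)
cnm_online.fit_online()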

melkalliny commented 1 year ago

They're only around 1 hour, which doesn't seem unusually long for 1P data?

We'll try the online algorithm, but just want to be sure there's nothing we can be doing to make it happen locally.

kushalkolar commented 1 year ago

What does your FOV look like? You might be able to spatially downsample and/or crop.

kushalkolar commented 1 year ago

BTW the online algorithm can be run locally; it's "online" in the sense that it is fast enough to run in real time because it processes frame-by-frame instead of in patches. It's not online as in remote.

melkalliny commented 1 year ago

Sounds promising, will definitely give the online algorithm a shot. And my FOVs could definitely be downsampled, I'll give that a shot in parallel.

Also apologies as I just noticed this question about memmaps data types has been raised and answered a couple of times before.

EricThomson commented 1 year ago

At some point we should add a workflow diagram to the readme.md page directing people to the online algorithms if their data is beyond a certain size.

pgunn commented 1 year ago

I ordinarily wouldn't recommend this, but it may be worth enabling swap, which will give your OS more options for intelligently managing which pages of memory belong on disk versus in RAM. The OS can be quite smart about these things when memmap is involved, and swap usually doesn't lead to thrashing in such cases.

kushalkolar commented 1 year ago

From experience, this allows CNMF to complete (as long as the requested RAM isn't too high), but it becomes very slow, even when swapping on an NVMe SSD.


melkalliny commented 1 year ago

Thanks all.

The demo_online_cnmfe notebook is giving me errors even with the demo data -- but this code was last updated 3 years ago, so that's not too surprising.

I'll try the offline approach again with downsampling. I'm guessing the best way to do that is to pass resize_fact when running the first save_memmap, but please let me know if you think otherwise.
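Something like this, if I'm reading the save_memmap signature right (the 0.5 factors are just an example; resize_fact is (fx, fy, fz)):

import caiman as cm

# downsample 2x in space (fx, fy) and time (fz) while memmapping
fname_new = cm.save_memmap(
    ['session_concat.tif'], base_name='memmap_', order='C',
    resize_fact=(0.5, 0.5, 0.5), dview=dview)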

And then if that doesn't work, may have to look into some memory swapping - thanks for the tip.

kushalkolar commented 1 year ago

@melkalliny

What's the traceback? I just tried it and I get this:

cnm_online = cnmf.online_cnmf.OnACID(params=online_opts, dview=dview)
cnm_online.fit_online()
      240716 [movies.py:extract_shifts():341] [2019027] Movie average is negative. Removing 1st percentile.
      240826 [movies.py:extract_shifts():341] [2019027] Movie average is negative. Removing 1st percentile.
      240921 [movies.py:extract_shifts():341] [2019027] Movie average is negative. Removing 1st percentile.
---------------------------------------------------------------------------
PicklingError                             Traceback (most recent call last)
Cell In[18], line 2
      1 cnm_online = cnmf.online_cnmf.OnACID(params=online_opts, dview=dview)
----> 2 cnm_online.fit_online()

File ~/repos/CaImAn/caiman/source_extraction/cnmf/online_cnmf.py:1186, in OnACID.fit_online(self, **kwargs)
   1184     model_LN = None
   1185 epochs = self.params.get('online', 'epochs')
-> 1186 self.initialize_online(model_LN=model_LN)
   1187 self.t_init += time()
   1188 extra_files = len(fls) - 1

File ~/repos/CaImAn/caiman/source_extraction/cnmf/online_cnmf.py:940, in OnACID.initialize_online(self, model_LN, T)
    938 if self.params.get('online', 'motion_correct'):
    939     mc = caiman.motion_correction.MotionCorrect(Y, dview=self.dview, **self.params.get_group('motion'))
--> 940     mc.motion_correct(save_movie=True)
    941     fname_new = caiman.save_memmap(mc.mmap_file, base_name='memmap_', order='C', dview=self.dview)
    942     Y = caiman.load(fname_new, is3D=self.params.get('motion', 'is3D'))

File ~/repos/CaImAn/caiman/motion_correction.py:262, in MotionCorrect.motion_correct(self, template, save_movie)
    259         b0 = np.ceil(np.maximum(np.max(np.abs(self.x_shifts_els)),
    260                             np.max(np.abs(self.y_shifts_els))))
    261 else:
--> 262     self.motion_correct_rigid(template=template, save_movie=save_movie)
    263     b0 = np.ceil(np.max(np.abs(self.shifts_rig)))
    264 self.border_to_0 = b0.astype(int)

File ~/repos/CaImAn/caiman/motion_correction.py:296, in MotionCorrect.motion_correct_rigid(self, template, save_movie)
    293 self.shifts_rig:List = []
    295 for fname_cur in self.fname:
--> 296     _fname_tot_rig, _total_template_rig, _templates_rig, _shifts_rig = motion_correct_batch_rigid(
    297         fname_cur,
    298         self.max_shifts,
    299         dview=self.dview,
    300         splits=self.splits_rig,
    301         num_splits_to_process=self.num_splits_to_process_rig,
    302         num_iter=self.niter_rig,
    303         template=self.total_template_rig,
    304         shifts_opencv=self.shifts_opencv,
    305         save_movie_rigid=save_movie,
    306         add_to_movie=-self.min_mov,
    307         nonneg_movie=self.nonneg_movie,
    308         gSig_filt=self.gSig_filt,
    309         use_cuda=self.use_cuda,
    310         border_nan=self.border_nan,
    311         var_name_hdf5=self.var_name_hdf5,
    312         is3D=self.is3D,
    313         indices=self.indices)
    314     if template is None:
    315         self.total_template_rig = _total_template_rig

File ~/repos/CaImAn/caiman/motion_correction.py:2879, in motion_correct_batch_rigid(fname, max_shifts, dview, splits, num_splits_to_process, num_iter, template, shifts_opencv, save_movie_rigid, add_to_movie, nonneg_movie, gSig_filt, subidx, use_cuda, border_nan, var_name_hdf5, is3D, indices)
   2876 else:
   2877     base_name=os.path.splitext(os.path.split(fname)[-1])[0] + '_rig_'
-> 2879 fname_tot_rig, res_rig = motion_correction_piecewise(fname, splits, strides=None, overlaps=None,
   2880                                                      add_to_movie=add_to_movie, template=old_templ, max_shifts=max_shifts, max_deviation_rigid=0,
   2881                                                      dview=dview, save_movie=save_movie, base_name=base_name, subidx = subidx,
   2882                                                      num_splits=num_splits_to_process, shifts_opencv=shifts_opencv, nonneg_movie=nonneg_movie, gSig_filt=gSig_filt,
   2883                                                      use_cuda=use_cuda, border_nan=border_nan, var_name_hdf5=var_name_hdf5, is3D=is3D,
   2884                                                      indices=indices)
   2885 if is3D:
   2886     new_templ = np.nanmedian(np.stack([r[-1] for r in res_rig]), 0)           

File ~/repos/CaImAn/caiman/motion_correction.py:3194, in motion_correction_piecewise(fname, splits, strides, overlaps, add_to_movie, template, max_shifts, max_deviation_rigid, newoverlaps, newstrides, upsample_factor_grid, order, dview, save_movie, base_name, subidx, num_splits, shifts_opencv, nonneg_movie, gSig_filt, use_cuda, border_nan, var_name_hdf5, is3D, indices)
   3192     dview.map(close_cuda_process, range(len(pars)))
   3193 elif 'multiprocessing' in str(type(dview)):
-> 3194     res = dview.map_async(tile_and_correct_wrapper, pars).get(4294967)
   3195 else:
   3196     res = dview.map_sync(tile_and_correct_wrapper, pars)

File /usr/local/lib/python3.11/multiprocessing/pool.py:774, in ApplyResult.get(self, timeout)
    772     return self._value
    773 else:
--> 774     raise self._value

File /usr/local/lib/python3.11/multiprocessing/pool.py:540, in Pool._handle_tasks(taskqueue, put, outqueue, pool, cache)
    538     break
    539 try:
--> 540     put(task)
    541 except Exception as e:
    542     job, idx = task[:2]

File /usr/local/lib/python3.11/multiprocessing/connection.py:205, in _ConnectionBase.send(self, obj)
    203 self._check_closed()
    204 self._check_writable()
--> 205 self._send_bytes(_ForkingPickler.dumps(obj))

File /usr/local/lib/python3.11/multiprocessing/reduction.py:51, in ForkingPickler.dumps(cls, obj, protocol)
     48 @classmethod
     49 def dumps(cls, obj, protocol=None):
     50     buf = io.BytesIO()
---> 51     cls(buf, protocol).dump(obj)
     52     return buf.getbuffer()

PicklingError: Can't pickle <class 'caiman.base.movies.movie'>: it's not the same object as caiman.base.movies.movie

kushalkolar commented 1 year ago

The notebook demo_realtime_cnmfE.ipynb also shows how to use OnACID-E, and it seems to work. It's got 3 methods for doing it; I think the one with the Ring CNN is supposed to perform the best (Johannes and Eric know better).

melkalliny commented 1 year ago

@kushalkolar - I got the error below, but that was actually after trying to copy over some of what was happening in demo_realtime_cnmfE.ipynb. I don't remember the original error.

I did manage to have some success with just using the offline method after spatially and temporally downsampling, dealing with an 18GB memmap file instead (+ using some of the tips from #1079). The results don't look too bad, so we'll try to add some cropping and stick with the offline pipeline for now.

Coincidentally, I was also able to edit and debug your cnmfe_eval notebook from Mesmerize a bit (which I know is still in the works) to get the output from this pipeline visualized there. So the last step we're going to think about is where/how best to add an accept/reject/merge interface. May ping you about that soon, but for now I think we've hopefully found a way past the memory issue.

[screenshot: Screen Shot 2023-07-08 at 1.44.19 PM]

---------------------------------------------------------------------------
InvalidParameterError                     Traceback (most recent call last)
Cell In[16], line 2
      1 cnm_online = cnmf.online_cnmf.OnACID(params=online_opts, dview=dview)
----> 2 cnm_online.fit_online()

File ~\miniconda3\envs\caiman\lib\site-packages\caiman\source_extraction\cnmf\online_cnmf.py:1186, in OnACID.fit_online(self, **kwargs)
   1184     model_LN = None
   1185 epochs = self.params.get('online', 'epochs')
-> 1186 self.initialize_online(model_LN=model_LN)
   1187 self.t_init += time()
   1188 extra_files = len(fls) - 1

File ~\miniconda3\envs\caiman\lib\site-packages\caiman\source_extraction\cnmf\online_cnmf.py:1003, in OnACID.initialize_online(self, model_LN, T)
   1001 if self.params.get('patch', 'rf') is None:
   1002     cnm.dview = None
-> 1003     cnm.fit(np.array(Y))
   1004     self.estimates = cnm.estimates
   1006 else:

File ~\miniconda3\envs\caiman\lib\site-packages\caiman\source_extraction\cnmf\cnmf.py:500, in CNMF.fit(self, images, indices)
    498 if self.estimates.A is None:
    499     logging.info('initializing ...')
--> 500     self.initialize(Y)
    502 if self.params.get('patch', 'only_init'):  # only return values after initialization
    503     if not (self.params.get('init', 'method_init') == 'corr_pnr' and
    504             self.params.get('init', 'ring_size_factor') is not None):

File ~\miniconda3\envs\caiman\lib\site-packages\caiman\source_extraction\cnmf\cnmf.py:970, in CNMF.initialize(self, Y, **kwargs)
    966     estim.S, estim.bl, estim.c1, estim.neurons_sn, \
    967         estim.g, estim.YrA, estim.lam, estim.W, estim.b0 = extra_1p
    968 else:
    969     estim.A, estim.C, estim.b, estim.f, estim.center =\
--> 970         initialize_components(Y, sn=estim.sn, options_total=self.params.to_dict(),
    971                               **self.params.get_group('init'))
    973 self.estimates = estim
    975 return self

File ~\miniconda3\envs\caiman\lib\site-packages\caiman\source_extraction\cnmf\initialization.py:344, in initialize_components(Y, K, gSig, gSiz, ssub, tsub, nIter, maxIter, nb, kernel, use_hals, normalize_init, img, method_init, max_iter_snmf, alpha_snmf, sigma_smooth_snmf, perc_baseline_snmf, options_local_NMF, rolling_sum, rolling_length, sn, options_total, min_corr, min_pnr, seed_method, ring_size_factor, center_psf, ssub_B, init_iter, remove_baseline, SC_kernel, SC_sigma, SC_thr, SC_normalize, SC_use_NN, SC_nnn, lambda_gnmf)
    342 logging.info('Roi Initialization...')
    343 if method == 'greedyroi':
--> 344     Ain, Cin, _, b_in, f_in = greedyROI(
    345         Y_ds, nr=K, gSig=gSig, gSiz=gSiz, nIter=nIter, kernel=kernel, nb=nb,
    346         rolling_sum=rolling_sum, rolling_length=rolling_length, seed_method=seed_method)
    348 if use_hals:
    349     logging.info('Refining Components using HALS NMF iterations')

File ~\miniconda3\envs\caiman\lib\site-packages\caiman\source_extraction\cnmf\initialization.py:968, in greedyROI(Y, nr, gSig, gSiz, nIter, kernel, nb, rolling_sum, rolling_length, seed_method)
    966 # model = NMF(n_components=nb, init='random', random_state=0)
    967 model = NMF(n_components=nb, init='nndsvdar')
--> 968 b_in = model.fit_transform(np.maximum(res, 0)).astype(np.float32)
    969 f_in = model.components_.astype(np.float32)
    971 return A, C, np.array(center, dtype='uint16'), b_in, f_in

File ~\miniconda3\envs\caiman\lib\site-packages\sklearn\utils\_set_output.py:140, in _wrap_method_output.<locals>.wrapped(self, X, *args, **kwargs)
    138 @wraps(f)
    139 def wrapped(self, X, *args, **kwargs):
--> 140     data_to_wrap = f(self, X, *args, **kwargs)
    141     if isinstance(data_to_wrap, tuple):
    142         # only wrap the first output for cross decomposition
    143         return_tuple = (
    144             _wrap_data_with_container(method, data_to_wrap[0], X, self),
    145             *data_to_wrap[1:],
    146         )

File ~\miniconda3\envs\caiman\lib\site-packages\sklearn\base.py:1144, in _fit_context.<locals>.decorator.<locals>.wrapper(estimator, *args, **kwargs)
   1139 partial_fit_and_fitted = (
   1140     fit_method.__name__ == "partial_fit" and _is_fitted(estimator)
   1141 )
   1143 if not global_skip_validation and not partial_fit_and_fitted:
-> 1144     estimator._validate_params()
   1146 with config_context(
   1147     skip_parameter_validation=(
   1148         prefer_skip_nested_validation or global_skip_validation
   1149     )
   1150 ):
   1151     return fit_method(estimator, *args, **kwargs)

File ~\miniconda3\envs\caiman\lib\site-packages\sklearn\base.py:637, in BaseEstimator._validate_params(self)
    629 def _validate_params(self):
    630     """Validate types and values of constructor parameters
    631
    632     The expected type and values must be defined in the _parameter_constraints
    (...)
    635     accepted constraints.
    636     """
--> 637     validate_parameter_constraints(
    638         self._parameter_constraints,
    639         self.get_params(deep=False),
    640         caller_name=self.__class__.__name__,
    641     )

File ~\miniconda3\envs\caiman\lib\site-packages\sklearn\utils\_param_validation.py:95, in validate_parameter_constraints(parameter_constraints, params, caller_name)
     89 else:
     90     constraints_str = (
     91         f"{', '.join([str(c) for c in constraints[:-1]])} or"
     92         f" {constraints[-1]}"
     93     )
---> 95 raise InvalidParameterError(
     96     f"The {param_name!r} parameter of {caller_name} must be"
     97     f" {constraints_str}. Got {param_val!r} instead."
     98 )

InvalidParameterError: The 'n_components' parameter of NMF must be an int in the range [1, inf) or None. Got 0 instead.

kushalkolar commented 1 year ago

@melkalliny

Cool that you figured out a bit of the eval nb! My plan for the accept/reject curation UI is to bind keyboard events. I also want the ability to color the contours based on a quality metric; we recently added this in fastplotlib, which makes it possible! https://github.com/kushalkolar/fastplotlib/pull/241#issuecomment-1595633165

I'm busy with SciPy next week, but I'll get to it soon after that. If you're motivated to try making it, I'm happy to review a PR!