flatironinstitute / CaImAn

Computational toolbox for large scale Calcium Imaging Analysis, including movie handling, motion correction, source extraction, spike deconvolution and result visualization.
https://caiman.readthedocs.io
GNU General Public License v2.0

Problem loading my h5 file created in ImageJ #974

Closed sciencecaro closed 2 years ago

sciencecaro commented 2 years ago


I get an error when loading my own data as an '.h5' file. The '.h5' file was generated by exporting an '.oir' file from ImageJ.

KeyError                                  Traceback (most recent call last)
Input In [7], in <cell line: 28>()
     13 border_nan = 'copy'      # replicate values along the boundaries
     15 mc_dict = {
     16     'fnames': fnames,
     17     'fr': frate,
   (...)
     25     'border_nan': border_nan
     26 }
---> 28 opts = params.CNMFParams(params_dict=mc_dict)

File ~\Miniconda3\envs\caiman\lib\site-packages\caiman\source_extraction\cnmf\params.py:883, in CNMFParams.__init__(self, fnames, dims, dxy, border_pix, del_duplicates, low_rank_background, memory_fact, n_processes, nb_patch, p_ssub, p_tsub, remove_very_bad_comps, rf, stride, check_nan, n_pixels_per_process, k, alpha_snmf, center_psf, gSig, gSiz, init_iter, method_init, min_corr, min_pnr, gnb, normalize_init, options_local_NMF, ring_size_factor, rolling_length, rolling_sum, ssub, ssub_B, tsub, block_size_spat, num_blocks_per_run_spat, block_size_temp, num_blocks_per_run_temp, update_background_components, method_deconvolution, p, s_min, do_merge, merge_thresh, decay_time, fr, min_SNR, rval_thr, N_samples_exceptionality, batch_update_suff_stat, expected_comps, iters_shape, max_comp_update_shape, max_num_added, min_num_trial, minibatch_shape, minibatch_suff_stat, n_refit, num_times_comp_updated, simultaneously, sniper_mode, test_both, thresh_CNN_noisy, thresh_fitness_delta, thresh_fitness_raw, thresh_overlap, update_freq, update_num_comps, use_dense, use_peak_max, only_init_patch, var_name_hdf5, max_merge_area, use_corr_img, params_dict)
    844 self.motion = {
    845     'border_nan': 'copy',               # flag for allowing NaN in the boundaries
    846     'gSig_filt': None,                  # size of kernel for high pass spatial filtering in 1p data
   (...)
    864     'indices': (slice(None), slice(None))  # part of FOV to be corrected
    865 }
    867 self.ring_CNN = {
    868     'n_channels' : 2,                   # number of "ring" kernels   
    869     'use_bias' : False,                 # use bias in the convolutions
   (...)
    880     'reuse_model': False                # reuse an already trained model
    881 }
--> 883 self.change_params(params_dict)

File ~\Miniconda3\envs\caiman\lib\site-packages\caiman\source_extraction\cnmf\params.py:1065, in CNMFParams.change_params(self, params_dict, verbose)
   1063     if flag:
   1064         logging.warning('No parameter {0} found!'.format(k))
-> 1065 self.check_consistency()
   1066 return self

File ~\Miniconda3\envs\caiman\lib\site-packages\caiman\source_extraction\cnmf\params.py:892, in CNMFParams.check_consistency(self)
    890 self.data['last_commit'] = '-'.join(caiman.utils.utils.get_caiman_version())
    891 if self.data['dims'] is None and self.data['fnames'] is not None:
--> 892     self.data['dims'] = get_file_size(self.data['fnames'], var_name_hdf5=self.data['var_name_hdf5'])[0]
    893 if self.data['fnames'] is not None:
    894     if isinstance(self.data['fnames'], str):

File ~\Miniconda3\envs\caiman\lib\site-packages\caiman\source_extraction\cnmf\utilities.py:1085, in get_file_size(file_name, var_name_hdf5)
   1083 elif isinstance(file_name, list):
   1084     if len(file_name) == 1:
-> 1085         dims, T = get_file_size(file_name[0], var_name_hdf5=var_name_hdf5)
   1086     else:
   1087         dims, T = zip(*[get_file_size(fn, var_name_hdf5=var_name_hdf5)
   1088             for fn in file_name])

File ~\Miniconda3\envs\caiman\lib\site-packages\caiman\source_extraction\cnmf\utilities.py:1025, in get_file_size(file_name, var_name_hdf5)
   1023     else:
   1024         siz = f[var_name_hdf5].shape
-> 1025 elif var_name_hdf5 in f['acquisition']:
   1026     siz = f['acquisition'][var_name_hdf5]['data'].shape
   1027 else:

File h5py\_objects.pyx:54, in h5py._objects.with_phil.wrapper()

File h5py\_objects.pyx:55, in h5py._objects.with_phil.wrapper()

File ~\Miniconda3\envs\caiman\lib\site-packages\h5py\_hl\group.py:305, in Group.__getitem__(self, name)
    303         raise ValueError("Invalid HDF5 object reference")
    304 elif isinstance(name, (bytes, str)):
--> 305     oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
    306 else:
    307     raise TypeError("Accessing a group is done with bytes or str, "
    308                     " not {}".format(type(name)))

File h5py\_objects.pyx:54, in h5py._objects.with_phil.wrapper()

File h5py\_objects.pyx:55, in h5py._objects.with_phil.wrapper()

File h5py\h5o.pyx:190, in h5py.h5o.open()

KeyError: "Unable to open object (object 'acquisition' doesn't exist)"
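(For reference, the final KeyError can be reproduced with h5py alone, outside of CaImAn. The sketch below builds a tiny HDF5 file shaped like the ImageJ export described in this issue; the file name and data are placeholders.)

```python
import os
import tempfile

import h5py

# Build a tiny HDF5 file shaped like the ImageJ export: the movie
# lives under t0/channel0 and there is no top-level 'acquisition' group.
path = os.path.join(tempfile.mkdtemp(), "demo.h5")
with h5py.File(path, "w") as f:
    f.create_dataset("t0/channel0", data=[[0, 1], [2, 3]])

# get_file_size falls back to probing f['acquisition'] (an NWB-style
# layout); on a plain ImageJ export that group is missing, so h5py
# raises the KeyError seen in the traceback above.
with h5py.File(path, "r") as f:
    try:
        f["acquisition"]
    except KeyError as err:
        print(err)  # "Unable to open object (object 'acquisition' doesn't exist)"
```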
pgunn commented 2 years ago

Is there any chance you might be able to share one of these datafiles with me?

sciencecaro commented 2 years ago

Thank you for helping me. I sent you an email with a link where you can download my file.

pgunn commented 2 years ago

This helped a lot, thanks.

Right now the best way to solve this is to set var_name_hdf5 to 't0/channel0' in the parameters.

If you only had one key inside the hdf5 file, and it were in the root of it, you would not need to set this, but the hdf5 file you have has the following structure:

I will consider adjusting the hdf5 auto-inference logic to try a bit harder to infer structure in a future release (right now the auto-infer code neither filters out obvious metadata nor will it traverse nested groups). In general, though, the answer for Caiman users will be: if it can't figure out where the data is inside your file, you'll need to tell it.
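To find out which paths exist inside an hdf5 file yourself (and hence what to pass as var_name_hdf5), a quick sketch using only h5py, independent of Caiman:

```python
import h5py

def list_datasets(path):
    """Return the full path of every dataset in an HDF5 file,
    i.e. the candidate values for var_name_hdf5."""
    found = []
    with h5py.File(path, "r") as f:
        # visititems walks every group and dataset; keep datasets only.
        f.visititems(
            lambda name, obj: found.append(name)
            if isinstance(obj, h5py.Dataset) else None
        )
    return found
```

For a file laid out like the one in this issue, this would return something like ['t0/channel0'].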

Please let me know if setting var_name_hdf5 solves the issue, but I fully expect it to.
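A minimal sketch of what that looks like in the parameter dict from the traceback above (file name and frame rate are placeholders):

```python
# Placeholders: substitute your own file name and frame rate.
fnames = ['my_imagej_export.h5']
frate = 30

mc_dict = {
    'fnames': fnames,
    'fr': frate,
    # Tell CaImAn where the movie lives inside the HDF5 file:
    'var_name_hdf5': 't0/channel0',
}

# Then, as before:
#   opts = params.CNMFParams(params_dict=mc_dict)
```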

sciencecaro commented 2 years ago

I added var_name_hdf5 = '/t0/channel0' to the mc_dict, and I also had to pass it to the motion correction function: mc = MotionCorrect(fnames, dview=dview, var_name_hdf5=var_name_hdf5, **opts.get_group('motion')). Now it's working. Thanks a lot!