nanophotonics / nplab

Core functions and instrument scripts for the Nanophotonics lab.
GNU General Public License v3.0

HDF5 append_dataset error #127

Closed: dk515 closed this issue 3 years ago

dk515 commented 3 years ago

My first ever issue report, please bear with me and let me know if I'm not doing this right. In my experiment code I am using nplab.datafile.append_dataset to save some data, but it is now throwing an exception. Code to reproduce:

```python
import nplab
import numpy as np

datafile = nplab.current_datafile()
datafile.create_dataset('myData1', data=np.array([1, 2, 3]))  # this works ok
datafile.append_dataset('myData2', np.array([1, 2, 3]))       # throws exception
datafile.append_dataset('myData1', np.array(4))               # throws exception
```
Traceback:

```
Traceback (most recent call last):
  File "C:\Users\...\datafile_test.py", line 13, in <module>
    datafile.append_dataset('myData2', np.array([1,2,3]))  # throws exception
  File "C:\Users\...\GitHub\nplab\nplab\datafile.py", line 290, in append_dataset
    maxshape=maxshape, chunks=True)
  File "C:\Users\...\GitHub\nplab\nplab\datafile.py", line 252, in require_dataset
    *args, **kwargs)
  File "C:\Users\...\GitHub\nplab\nplab\datafile.py", line 234, in create_dataset
    dset = super(Group, self).create_dataset(name, shape, dtype, data, *args, **kwargs)
  File "C:\Users\...\Anaconda3\lib\site-packages\h5py\_hl\group.py", line 116, in create_dataset
    dsid = dataset.make_new_dset(self, shape, dtype, data, **kwds)
  File "C:\Users\...\Anaconda3\lib\site-packages\h5py\_hl\dataset.py", line 120, in make_new_dset
    shuffle, fletcher32, maxshape, scaleoffset)
  File "C:\Users\...\Anaconda3\lib\site-packages\h5py\_hl\filters.py", line 83, in generate_dcpl
    raise TypeError("Scalar datasets don't support chunk/filter options")
TypeError: Scalar datasets don't support chunk/filter options
```

Any ideas? This was working fine a couple of months ago. h5py version is 2.8, and the nplab repo is up to date.
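A side note on the error message itself: in HDF5 terms a "scalar dataset" is one with empty shape, and h5py rejects chunk/filter options (such as the `chunks=True` that the traceback shows `append_dataset` passing) for datasets of empty shape. A minimal NumPy-only sketch of the shape distinction, just as an illustration rather than the actual nplab code path:

```python
import numpy as np

# np.array(4) is a 0-d ("scalar") array: its shape is the empty tuple.
scalar = np.array(4)
vector = np.array([1, 2, 3])

print(scalar.shape)  # ()
print(vector.shape)  # (3,)

# HDF5 chunking and filters only apply to non-scalar datasets, so h5py
# raises TypeError when a shape-() dataset is created with chunks=True,
# which, per the traceback above, append_dataset passes unconditionally.
```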

dk515 commented 3 years ago

I branched off and started rolling back. The problem first appears at commit fc65cb1 and is caused by a change in the positional argument order of `Group.create_dataset` in datafile.py, which is called by the `append_dataset` function I was using. Restoring the original order resolves the problem.
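To make the failure mode concrete, here is a hypothetical sketch (the function names below are illustrative, not the actual nplab code) of how reordering positional parameters silently rebinds arguments for any caller that passes them positionally. For reference, h5py's own signature is `create_dataset(name, shape=None, dtype=None, data=None, ...)`:

```python
# A wrapper matching h5py's parameter order binds positional args correctly.
def create_dataset_h5py_order(name, shape=None, dtype=None, data=None):
    return {'name': name, 'shape': shape, 'dtype': dtype, 'data': data}

# A wrapper with a reordered signature binds the same positional args to
# the wrong parameters, with no error raised at the call site.
def create_dataset_reordered(name, data=None, shape=None, dtype=None):
    return {'name': name, 'shape': shape, 'dtype': dtype, 'data': data}

args = ('myData2', (3,), 'i8', [1, 2, 3])   # name, shape, dtype, data

ok = create_dataset_h5py_order(*args)
bad = create_dataset_reordered(*args)

print(ok['data'])   # [1, 2, 3] -- bound as intended
print(bad['data'])  # (3,) -- the shape tuple landed in the data slot
```

This is why matching the h5py order matters: `append_dataset` (and any user code calling through it) supplies arguments positionally, so a reorder changes behaviour without any visible change at the call sites.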

I think we should roll back to the argument order used before fc65cb1: the current order breaks my experiment, and the original order is consistent with the h5py library we wrap, which is probably why it was chosen in the first place. Do you agree @eoinell @mjh250? I'm happy to make the commit for that.