NeurodataWithoutBorders / pynwb

A Python API for working with Neurodata stored in the NWB Format
https://pynwb.readthedocs.io

[Feature]: Make `get_data_in_units` not load entire array into memory #1881

Open rly opened 3 months ago

rly commented 3 months ago

What would you like to see added to PyNWB?

As mentioned in #1880, `get_data_in_units()` loads the entire dataset into memory. For large datasets that is impractical and can silently exhaust a user's RAM.
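For context, the current implementation is roughly of the following shape (a paraphrased sketch of a `TimeSeries` method, not the exact source); the `np.asarray` call is what materializes the whole dataset, whatever the backend:

```python
import numpy as np

# Paraphrased sketch of the current behavior: np.asarray() on an
# h5py.Dataset or zarr.Array reads the entire dataset into memory
# before the unit scaling is applied.
def get_data_in_units(self):
    return np.asarray(self.data) * self.conversion + self.offset
```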

Is your feature request related to a problem?

No response

What solution would you like?

What do you think about supporting the syntax `timeseries.data_in_units[1000:2000, 5:10]`? That is, adding a simple wrapper class `WrappedArray` that defines `__getitem__` and delegates the slice argument to the underlying list / numpy array / `h5py.Dataset` / `zarr.Array` object, as sketched below.
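A minimal sketch of such a wrapper, assuming `conversion` and `offset` attributes as on `TimeSeries` (names and details are illustrative, not a settled API):

```python
import numpy as np

class WrappedArray:
    """Lazy wrapper that delegates slicing to the underlying
    list / numpy array / h5py.Dataset / zarr.Array and applies
    the unit conversion only to the requested selection."""

    def __init__(self, data, conversion=1.0, offset=0.0):
        self.data = data  # any object that supports __getitem__
        self.conversion = conversion
        self.offset = offset

    def __getitem__(self, key):
        # Only the selected region is read from the backend;
        # the unit conversion is applied to that region alone.
        return np.asarray(self.data[key]) * self.conversion + self.offset


# Usage with an in-memory array; the same works for h5py/zarr datasets:
data_in_units = WrappedArray(np.arange(20.0).reshape(4, 5), conversion=1e-6)
print(data_in_units[1:3, 2:4])  # reads and scales only this 2x2 region
```

On `TimeSeries`, `data_in_units` could then be a property that returns `WrappedArray(self.data, self.conversion, self.offset)`.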

We can reuse this wrapper class elsewhere to help address slicing differences between array backends (https://github.com/NeurodataWithoutBorders/pynwb/issues/1702) and to improve the performance of h5py slicing (https://github.com/h5py/h5py/issues/293). As mentioned in https://github.com/NeurodataWithoutBorders/pynwb/issues/1702, fully unifying these libraries is outside the scope of this project, but I think providing this wrapper class, with a few targeted enhancements, would only help.
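As one concrete example of a difference the wrapper could absorb: `h5py.Dataset` rejects negative-step slices that plain numpy arrays accept. A hypothetical helper inside `__getitem__` could emulate them by reading the equivalent forward slice and reversing in memory:

```python
def _normalize_negative_step(dataset, key):
    """Hypothetical helper: emulate numpy's negative-step slicing on
    backends (e.g. h5py.Dataset) that require increasing indices.
    Handles a single slice along the first axis only, for brevity."""
    if isinstance(key, slice) and key.step is not None and key.step < 0:
        start, stop, step = key.indices(len(dataset))
        n = len(range(start, stop, step))  # number of selected elements
        if n == 0:
            return dataset[0:0]
        first = start + step * (n - 1)     # smallest selected index
        # Read forward with a positive step, then reverse in memory.
        return dataset[first:start + 1:-step][::-1]
    return dataset[key]
```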

If we do this, the wrapper class would probably live in HDMF.

Do you have any interest in helping implement the feature?

Yes.


h-mayorquin commented 3 months ago

Interesting idea. I am personally curious about what the implementation of `WrappedArray` would look like.

Another alternative is to pass a slice as an argument to `get_data_in_units`, but that way the expressiveness of the `__getitem__` syntax that most people know from numpy is lost.
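For comparison, the two options would read roughly like this (hypothetical signatures):

```python
# Passing the selection explicitly as arguments:
chunk = timeseries.get_data_in_units(slice(1000, 2000), slice(5, 10))

# Versus numpy-style __getitem__ on a lazy wrapper (rly's proposal):
chunk = timeseries.data_in_units[1000:2000, 5:10]
```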