Open bjhardcastle opened 2 months ago
I'd like to be able to add data from an existing zarr store, with the same compression it used originally, in an efficient way.
It would be useful if you could provide a code example of how you are copying the data right now, or pseudo-code to show more specifically which kind of data you need to copy. E.g., is this for individual datasets you want to copy, or is this for more complex structures, e.g., an entire ElectricalSeries in NWB?
Brainstorming: Some ideas for implementing the requested feature

For individual datasets, e.g. ElectricalSeries.data, I think this could be done in either of two ways:

1. Via ZarrDataIO. We would need to enhance ZarrDataIO to allow us to specify that the source Zarr dataset should be copied using the copy_store approach. Similar to how we have a ZarrDataIO.from_h5py_dataset method, we could then also have a ZarrDataIO.from_zarr_array method. I think this approach should not be too complicated to implement.
2. Via ZarrIO.write_dataset, by automatically detecting that an input Zarr dataset needs to be copied and using copy_store for the write instead, without requiring the user to wrap the data in ZarrDataIO.

For whole groups, e.g., an entire ElectricalSeries, I believe we would need to define logic in ZarrIO.write_group to detect that a whole Builder needs to be copied. However, I'm not sure right now what the logic needs to look like to detect this case from the Builder.
Right, sorry. I was talking about individual datasets.
Doing something like this works, and the zarr data are not read eagerly, which is great:
import datetime
import pynwb
import zarr
nwb = pynwb.NWBFile(
session_id='test',
session_description='test',
identifier='12345',
session_start_time=(
datetime.datetime.now()
),
processing=[
pynwb.TimeSeries(
name='running_speed',
data=zarr.open('s3://test/running_speed.zarr'),
unit='m/s',
rate=60.0,
starting_time=0.0,
)
],
)
If the intermediate zarr data were written with some optimized chunking/compression config, I'd just like to include the data as-is, rather than re-do the compression on write to NWB-zarr.
Thanks for the clarification and helpful example. In that case I would propose the following approach:

1. Add a bool flag copy_zarr_store to ZarrDataIO
2. Update ZarrIO.write_dataset to use zarr.convenience.copy_store to copy the data if the data is wrapped in ZarrDataIO and copy_zarr_store is set to True
3. Optionally, add a ZarrDataIO.from_zarr_store(...) method to wrap an existing zarr dataset with copy_zarr_store=True set. But I don't think this is needed here because it really doesn't save much.

With this, your example would change to:
pynwb.TimeSeries(
    name='running_speed',
    data=ZarrDataIO(data=zarr.open('s3://test/running_speed.zarr'), copy_zarr_store=True),
    ...
)
@bjhardcastle is this something you are interested in contributing a PR for?
@oruebel Yes potentially, just not sure when I'll get time to work on it.
What would you like to see added to HDMF-ZARR?
I'd like to be able to add data from an existing zarr store, with the same compression it used originally, in an efficient way.
As I understand it at the moment, with use_links=False the data is downloaded, uncompressed, and then written to the new NWB's zarr store with a new compression configuration.

Is your feature request related to a problem?
No response
What solution would you like?
The zarr library has a function doing a more efficient version of this, which copies the original compressed data directly: https://zarr.readthedocs.io/en/stable/api/convenience.html#zarr.convenience.copy_store Could it be suitable for use when such zarr->zarr copy operations are detected?
Do you have any interest in helping implement the feature?
Yes.