pydata / xarray

N-D labeled arrays and datasets in Python
https://xarray.dev
Apache License 2.0

Parallel + multi-threaded reading of NetCDF4 + HDF5: Hidefix! #7446

Open gauteh opened 1 year ago

gauteh commented 1 year ago

What is your issue?

Greetings,

I have developed a parallel, multi-threaded (and even async) reader for HDF5 and NetCDF4 files. It is still at a somewhat experimental stage (and does not support all compression filters yet), but it has been tested a fair bit by now. The reader is written in Rust with Python bindings:

https://github.com/gauteh/hidefix (pending conda package: https://github.com/conda-forge/staged-recipes/pull/21742)

Regular NetCDF4 and HDF5 are not thread-safe, and there is a global process-wide lock for reading files. With hidefix this lock is removed, which allows datasets to be read in parallel within the same process, as opposed to splitting the work across processes. Additionally, the reader can read directly into the target buffer, avoiding a cache for decoded chunks (effectively reducing memory usage and chunk re-decoding).
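
As a rough illustration (the file and variable names here are hypothetical), same-process parallel reading could look like this with plain Python threads:

from concurrent.futures import ThreadPoolExecutor

import xarray as xr

ds = xr.open_dataset('data.nc', engine='hidefix')

def load(name):
    # each read decodes its own chunks; there is no process-wide HDF5 lock to serialize on
    return ds[name].values

with ThreadPoolExecutor(max_workers=4) as pool:
    arrays = list(pool.map(load, ['u', 'v', 'temp']))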

The reader works by indexing the chunks of a dataset so that chunks can be accessed independently.
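
Conceptually, the index records each chunk's position, byte offset and stored size up front, so any chunk can be fetched with an independent positioned read. A simplified sketch (not hidefix's actual index format):

from dataclasses import dataclass

@dataclass
class ChunkEntry:
    coords: tuple  # position of the chunk in the dataset's chunk grid
    offset: int    # absolute byte offset of the chunk in the file
    size: int      # stored (compressed) size in bytes

def read_chunk(f, entry):
    # an independent read: no shared file cursor, no global lock
    f.seek(entry.offset)
    return f.read(entry.size)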

I have created a basic xarray backend, combined with the NetCDF4 backend for reading attributes etc: https://github.com/gauteh/hidefix/blob/main/python/hidefix/xarray.py and it works pretty well for reading:

[Figure: py_hidefix_bench — read benchmark of the hidefix xarray backend]

On my laptop with 8 CPUs we get a 6x speed-up over the xarray NetCDF4 backend (reading a 380 MB variable)! On larger machines the speed-up is even greater (to control the number of CPUs, set the RAYON_NUM_THREADS environment variable).
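
For example (RAYON_NUM_THREADS is Rayon's standard thread-pool control; it must be set before the thread pool is first used, so set it before any read):

import os
os.environ['RAYON_NUM_THREADS'] = '4'  # limit hidefix/Rayon to 4 threads

import xarray as xr
i = xr.open_dataset('tests/data/barents_zdepth_m00_FC.nc', engine='hidefix')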

Running benchmarks along the lines of:

import xarray as xr

# open the file with the hidefix engine and read a full variable into memory
i = xr.open_dataset('tests/data/barents_zdepth_m00_FC.nc', engine='hidefix')
d = i['v']
v = d[...].values
print(v.shape, type(v))

for the different backends (with or without xarray):

[Screenshot: benchmark timings for the different backends, 2023-01-17]

At this point it turned out that a significant amount of time was spent setting the _FillValue for the returned array (less important for NetCDF4, since the reader itself took much longer anyway). This can also be done in Rust in parallel, https://github.com/gauteh/hidefix/blob/main/src/python/mod.rs#L128, reducing it to a negligible amount of time. The same approach could be used in the existing xarray NetCDF4 backend.
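
In numpy terms, applying the fill value amounts to something like the sketch below; hidefix performs the equivalent masking in parallel on the Rust side:

import numpy as np

fill_value = 9.96921e+36  # netCDF's default float fill value
raw = np.array([1.0, fill_value, 2.0])

# replace fill values with NaN before handing the array to the user
masked = np.where(raw == fill_value, np.nan, raw)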

I hope this can be of general interest, and if it would be of interest to move the hidefix xarray backend into xarray that would be very cool.

Best regards, Gaute

rabernat commented 1 year ago

Hi @gauteh! This is very cool! Thanks for sharing. I'm really excited about the way Rust can be used to optimize different parts of our stack.

A couple of questions:

I hope this can be of general interest, and if it would be of interest to move the hidefix xarray backend into xarray that would be very cool.

This is definitely of general interest! However, it is not necessary to add a new backend directly into xarray. We support entry points which allow packages to implement their own readers, as you have apparently already discovered: https://docs.xarray.dev/en/stable/internals/how-to-add-new-backend.html

Installing your package should be enough to enable the new engine.
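
For reference, a backend plugin is a small class registered under the "xarray.backends" entry-point group in the package metadata. A minimal sketch (names and bodies here are illustrative, not hidefix's actual implementation):

from xarray.backends import BackendEntrypoint

class MyBackendEntrypoint(BackendEntrypoint):
    def open_dataset(self, filename_or_obj, *, drop_variables=None):
        # build and return an xarray.Dataset from filename_or_obj
        ...

    def guess_can_open(self, filename_or_obj):
        return str(filename_or_obj).endswith('.nc')

Once such a package is installed, xarray discovers the entry point and the corresponding engine name can be passed to open_dataset.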

We would, however, welcome a documentation PR that describes how to use this package on the I/O page.

gauteh commented 1 year ago

Can your reader read over HTTP / S3 protocol? Or is it just local files?

It is built to do this, but I haven't implemented it yet. I initially wrote it for an OpenDAP server (dars: https://github.com/gauteh/dars), where the plan is to also support files stored in the cloud. The hidefix reader can read from any interface that supports ReadAt or Read + Seek. It would probably be beneficial to index the files beforehand. I submitted a patch to HDF5 that allows iterating over the chunks quickly, so indexing a 5-6 GB file takes only a couple hundred milliseconds; I therefore no longer store the index for local files. Even including the indexing step, it is still faster than native HDF5.

Do you know about kerchunk https://fsspec.github.io/kerchunk/? The approach you described:

The reader works by indexing the chunks of a dataset so that chunks can be accessed independently.

...is identical to the approach taken by Kerchunk (although the implementation is different). I'm curious what specification you use to store your indexes. Could we make your implementation interoperable with kerchunk, such that a kerchunk reference specification could be read by your reader? It would be great to reach for some degree of alignment here.

The index is serializable using the Rust serde system, so it can be stored in any format supported by it. A fair amount of effort went into making the deserialization zero-copy: reading e.g. a 10 MB index for a 5-6 GB file is very fast, since the read buffers are memory-mapped directly onto the index structures and require very little deserialization. I don't have a specific format at the moment, but I have used bincode a lot in e.g. dars.
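
For comparison, a kerchunk reference file is plain JSON in which each chunk key maps to a (url, offset, length) triple, roughly along these lines (an abbreviated sketch; see the kerchunk documentation for the actual specification):

refs = {
    'version': 1,
    'refs': {
        'v/0.0.0': ['file.nc', 4096, 131072],    # chunk -> (url, offset, length)
        'v/0.0.1': ['file.nc', 135168, 131072],
    },
}

Exporting hidefix's index to this layout (or importing from it) would be one possible route to the interoperability you describe.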

Do you know about hdf5-coro (http://icesat2sliderule.org/h5coro/)? They have similar goals, but focused on cloud-based access.

Installing your package should be enough to enable the new engine.

Great, the package should already register itself with xarray.

dcherian commented 1 year ago

@gauteh is there anything we can help with?

cc @scottyhq @betolink

gauteh commented 1 year ago

Yeah, lots to do :D In what way were you thinking?

dcherian commented 1 year ago

The conda package is stalling, which I think limits stuff

cc @ocefpaf

Docs on xarray

I think we could advertise hidefix and h5netcdf more here. We just have a "tip" at the moment.

ocefpaf commented 1 year ago

The conda package is stalling, which I think limits stuff

cc @ocefpaf

There is a pending review. Please address the reviewer comments in that PR and/or explain why you cannot.

gauteh commented 1 year ago

There is a pending review. Please address the reviewer comments in that PR and/or explain why you cannot.

When I click the review link it doesn't show anything anymore; I think it refers to some outdated code.

gauteh commented 1 year ago

By the way: applying fill_value and similar operations takes a significant share of the time spent loading netCDF files. Doing this in Rust in parallel makes things much faster. This could be used for the regular NetCDF reader as well, and would probably save tens of seconds for large datasets.

dcherian commented 1 year ago

@gauteh see https://github.com/nsidc/earthaccess/discussions/251