rabernat opened this issue 2 years ago
Thanks @rabernat for opening up this issue. I think now that the refactor for read support is completed, it is a great time to discuss the opportunities for adding write support to the plugin interface.
Pinging @aurghs and @alexamici, since I know they have developed some thoughts here.
Is `ds.to(...)` the most discoverable method for users? What about making it so that backends can add methods to `ds.to`, so that `ds.to.netcdf()` or `ds.to.tile_db()` become available based on which backends are installed? That way users wouldn't have to guess as much about which engines and file types can be written.
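A rough sketch of what such an accessor could look like, purely as an illustration: the `WriteAccessor` class, the `_writers` registry, and the `register_writer` helper below are all hypothetical names, not part of xarray's API.

```python
import functools

# Hypothetical registry, populated from installed backend entry points.
_writers = {}

def register_writer(name, func):
    """Register a backend write function to appear as ds.to.<name>()."""
    _writers[name] = func

class WriteAccessor:
    """Accessor exposing one method per installed writeable backend."""

    def __init__(self, dataset):
        self._dataset = dataset

    def __getattr__(self, name):
        try:
            func = _writers[name]
        except KeyError:
            raise AttributeError(
                f"no writeable backend registered under {name!r}"
            ) from None
        # Bind the dataset so users can call ds.to.netcdf("out.nc").
        return functools.partial(func, self._dataset)

    def __dir__(self):
        # Make the installed writers tab-completable.
        return sorted(_writers)
```

With something like this, `dir(ds.to)` would advertise exactly the formats that can be written with the currently installed backends.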
Another option is using a store function named similarly to the read functions: `xr.open_dataset(...)` and `xr.store_dataset(...)`.
If we do that, I'd call it `save_dataset` to be consistent with `{open,save}_mfdataset`.
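For context, here is a usage sketch of how that name would line up with the existing functions. The file paths and the `time` dimension are placeholders, and `xr.save_dataset` does not exist yet; it is shown only as the proposal under discussion.

```python
import xarray as xr

# Existing read/write API (paths and the "time" dimension are placeholders):
ds = xr.open_dataset("input.nc")                       # engine is guessed
datasets = [ds.isel(time=[0]), ds.isel(time=[1])]
xr.save_mfdataset(datasets, ["out_0.nc", "out_1.nc"])  # multi-file writer

# Proposed single-dataset counterpart (hypothetical, name under discussion):
# xr.save_dataset(ds, "out.nc", engine="netcdf4")
```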
@rabernat and all, at the time of the read-only backend refactor @aurghs and I spent quite some time analysing write support and thinking of a unifying strategy. This is my interpretation of our findings:
1. One of the big advantages of the unified `xr.open_dataset` API is that you don't need to specify the `engine` of the input data and you can rely on xarray guessing it. This is in general not true when you write your data, as you care about what format you are storing it in.
2. Another advantage of `xr.open_dataset` is that xarray manages all the functionality related to dask and to in-memory caching, so backends only need to know how to lazily read from the storage. The current (rather complex) implementation has support for writing from dask and distributed workers, but most backends serialise writes anyway, so the advantage is limited. This is not to say that it is not worth doing, but the cost/benefit ratio of supporting potentially distributed writes is much lower than for read support.
3. That said, I'd really welcome a unified write API like `ds.save(engine=...)` or even `xr.save_dataset(ds, engine=...)` with an `engine` keyword argument and possibly other common options (see the sketch below). Adding support for a single `save_dataset` entry point to the backend API is trivial, but adding full support for possibly distributed writes looks like it is much more work.
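A minimal sketch of how such a unified entry point might dispatch to a backend. Neither `save_dataset` nor a `write_dataset` backend method exist in xarray today; the `_registered_backends` mapping is likewise a hypothetical stand-in for the plugin registry.

```python
# Hypothetical dispatch logic for a unified save function.
_registered_backends = {}  # engine name -> backend entrypoint instance

def save_dataset(dataset, filename_or_obj, *, engine, **kwargs):
    """Write ``dataset`` using the backend registered under ``engine``."""
    try:
        backend = _registered_backends[engine]
    except KeyError:
        raise ValueError(
            f"unrecognized engine {engine!r}; installed engines: "
            f"{sorted(_registered_backends)}"
        ) from None
    # The backend decides how to serialise variables, attrs and encoding.
    return backend.write_dataset(dataset, filename_or_obj, **kwargs)
```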
Also note that ATM @aurghs and I are overloaded at work, so we would have very little time to spend on this :/
Thanks for the info @alexamici!
> 2. but most backends serialise writes anyway, so the advantage is limited.
I'm not sure I understand this comment, specifically what is meant by "serialise writes". I often use Xarray to do distributed writes to Zarr stores using 100+ distributed dask workers. It works great. We would need the same thing from a TileDB backend.
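For reference, this is the kind of workflow being described; the scheduler address, file paths, and chunk sizes below are placeholders.

```python
import xarray as xr
from dask.distributed import Client

# Placeholder scheduler address; in practice this points at a real cluster.
client = Client("tcp://scheduler:8786")

# A chunked (dask-backed) dataset; the "time" chunking is illustrative.
ds = xr.open_dataset("input.nc", chunks={"time": 100})

# Each dask worker writes its own chunks to the Zarr store concurrently;
# no global write lock is required.
ds.to_zarr("output.zarr", mode="w")
```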
We are focusing on the user-facing API, but in the end, whether we call it `.to`, `.to_dataset`, or `.store_dataset` is not really a difficult or important question. It's clear we need some generic writing method. The much harder question is the backend API. As Alessandro says:
> Adding support for a single `save_dataset` entry point to the backend API is trivial, but adding full support for possibly distributed writes looks like it is much more work.
> but most backends serialise writes anyway, so the advantage is limited.
>
> I'm not sure I understand this comment, specifically what is meant by "serialise writes". I often use Xarray to do distributed writes to Zarr stores using 100+ distributed dask workers. It works great. We would need the same thing from a TileDB backend.
I should have added "except Zarr" 😅.
All netCDF writers use `xr.backends.locks.get_write_lock` to get a scheduler-appropriate writing lock. The code is intricate and I can't find exactly where to point you, but as I recall the lock was used so only one worker/process/thread could write to disk at a time.
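A simplified illustration of that pattern, not the actual backend code: `get_write_lock` is internal API, the key string is a placeholder, and the write step is stubbed out.

```python
from xarray.backends.locks import get_write_lock

# get_write_lock returns a lock appropriate for the active dask scheduler
# (threaded, multiprocessing, or distributed).
lock = get_write_lock("output.nc")  # the key string is a placeholder

def write_block(block):
    # Only one worker/process/thread holds the lock at a time,
    # so writes to the file are serialised.
    with lock:
        pass  # placeholder for the actual low-level write of this block
```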
Concurrent writes a la Zarr are awesome and xarray supports them now, so my point was: we can add non-concurrent write support to the plugin architecture quite easily and that will serve a lot of users. But supporting Zarr and other advanced backends via the plugin architecture is a lot more work.
The backend refactor has gone a long way towards making it easier to implement custom backend readers via entry points. However, it is still not clear how to implement a writeable backend from a third party package as an entry point. Some of the reasons for this are:
- While our reading function (`open_dataset`) has a generic name, our writing functions (`Dataset.to_netcdf` / `Dataset.to_zarr`) are still format specific (related to https://github.com/pydata/xarray/issues/3638). I propose we introduce a generic `Dataset.to` method and deprecate the others.
- The `BackendEntrypoint` base class does not have a writing method, just `open_dataset`: https://github.com/pydata/xarray/blob/e0deb9cf0a5cd5c9e3db033fd13f075added9c1e/xarray/backends/common.py#L356-L370 (related to https://github.com/pydata/xarray/issues/1970)
- As a result, writing is implemented ad hoc for each backend. This makes it impossible for a third-party package to implement writing.
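To make the gap concrete, here is a hedged sketch of what a write-capable entry point might look like. Only `BackendEntrypoint` and its `open_dataset` method exist today; the class name, the `save_dataset` method, and its signature are hypothetical.

```python
from xarray.backends import BackendEntrypoint

class TileDBBackendEntrypoint(BackendEntrypoint):
    """Hypothetical third-party backend with read and write support."""

    def open_dataset(self, filename_or_obj, *, drop_variables=None, **kwargs):
        # Reading is already covered by the existing plugin interface.
        ...

    # Hypothetical extension: a write method the base class could formalize.
    def save_dataset(self, dataset, filename_or_obj, *, mode="w", **kwargs):
        # A backend would translate xarray variables, attributes and
        # encoding into its own on-disk format here.
        ...
```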
We should fix this situation! Here are the steps I would take.
- [ ] Decide on the desired API for writeable backends.
- [ ] Formalize this in the `BackendEntrypoint` base class.
- [ ] Refactor the existing writeable backends (netcdf4-python, h5netcdf, scipy, Zarr) to use this API.
- [ ] Maybe deprecate `to_zarr` and `to_netcdf` (or at least refactor them to make a shallow call to a generic method).
- [ ] Encourage third-party implementors to try it (e.g. TileDB); see the registration sketch below.
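For illustration, this is roughly how a third-party package (TileDB used as the example) already registers a read backend through the `xarray.backends` entry point group; the package and module names are placeholders, and the write method such a backend would expose is the hypothetical part.

```python
# setup.py of a hypothetical third-party package.
from setuptools import setup

setup(
    name="xarray-tiledb",
    entry_points={
        # xarray discovers backends through this entry point group.
        "xarray.backends": [
            "tiledb = xarray_tiledb.backend:TileDBBackendEntrypoint",
        ],
    },
)
```

Once the base class grows a write method, the same entry point would make something like `xr.save_dataset(ds, ..., engine="tiledb")` (or `ds.to.tiledb(...)`) discoverable with no extra work on the user side.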