pydata / xarray

N-D labeled arrays and datasets in Python
https://xarray.dev
Apache License 2.0

Writeable backends via entrypoints #5954

Open rabernat opened 2 years ago

rabernat commented 2 years ago

The backend refactor has gone a long way towards making it easier to implement custom backend readers via entry points. However, it is still not clear how to implement a writeable backend from a third party package as an entry point. Some of the reasons for this are:

jhamman commented 2 years ago

Thanks @rabernat for opening up this issue. I think now that the refactor for read support is completed, it is a great time to discuss the opportunities for adding write support to the plugin interface.

pinging @aurghs and @alexamici since I know they have some thoughts developed here.

abkfenris commented 2 years ago

Is ds.to(...) the most discoverable method for users?

What about making it so that backends can add methods to ds.to, so that ds.to.netcdf() or ds.to.tile_db() exist based on which backends are installed? That way users might not have to guess as much about which engines and file formats can be written.
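As a rough illustration of that idea, a ds.to namespace could be populated from writer entry points. In the sketch below, the "to" accessor and the "xarray.writers" entry-point group are made up; only register_dataset_accessor is existing xarray API, and each entry point is assumed to load a callable writer(dataset, target, **kwargs).

```python
import functools
from importlib.metadata import entry_points  # Python 3.10+ selection API assumed

import xarray as xr


@xr.register_dataset_accessor("to")
class ToAccessor:
    """Hypothetical accessor exposing one method per installed writer backend."""

    def __init__(self, dataset):
        self._dataset = dataset
        # "xarray.writers" is an invented entry-point group name for this sketch.
        for ep in entry_points(group="xarray.writers"):
            # Each loaded writer becomes a method, e.g. ds.to.netcdf(path).
            setattr(self, ep.name, functools.partial(ep.load(), dataset))
```

With a package registering a "tile_db" entry point in that group, ds.to.tile_db("out.tdb") would dispatch to that package's writer without the user having to name an engine.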

Illviljan commented 2 years ago

Another option is a store function named similarly to the read functions:

xr.open_dataset(...)
xr.store_dataset(...)
keewis commented 2 years ago

If we do that, I'd call it save_dataset to be consistent with {open,save}_mfdataset

alexamici commented 2 years ago

@rabernat and all, at the time of the read-only backend refactor @aurghs and I spent quite some time analysing write support and thinking of a unifying strategy. This is my interpretation of our findings:

  1. one of the big advantages of the unified xr.open_dataset API is that you don't need to specify the engine of the input data; you can rely on xarray guessing it. This is in general not true when you write data, since you care about the format you are storing it in.

  2. another advantage of xr.open_dataset is that xarray manages all the functionality related to dask and to in-memory caching, so backends only need to know how to lazily read from storage. The current (rather complex) implementation has support for writing from dask and distributed workers, but most backends serialise writes anyway, so the advantage is limited. This is not to say that it is not worth doing, but the cost / benefit ratio of supporting potentially distributed writes is much lower than for read support.

  3. that said, I'd really welcome a unified write API like ds.save(engine=...) or even xr.save_dataset(ds, engine=...), with an engine keyword argument and possibly other common options (a rough sketch follows below). Adding support for a single save_dataset entry point to the backend API is trivial, but adding full support for possibly distributed writes looks like much more work.
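A minimal sketch of what point 3 could look like, assuming a hypothetical save_dataset hook on BackendEntrypoint plus a top-level dispatcher keyed by engine name (neither exists in xarray today, and list_engines is plugin machinery that may change):

```python
import xarray as xr
from xarray.backends import BackendEntrypoint


class MyFormatBackendEntrypoint(BackendEntrypoint):
    def open_dataset(self, filename_or_obj, *, drop_variables=None, **kwargs):
        ...  # the existing, read-only hook

    def save_dataset(self, dataset, target, **kwargs):
        """Proposed write hook: serialise ``dataset`` to ``target`` eagerly, no dask support."""
        ...


def save_dataset(dataset, target, *, engine, **kwargs):
    # Possible top-level dispatcher: look the backend up by name and delegate.
    backend = xr.backends.list_engines()[engine]
    return backend.save_dataset(dataset, target, **kwargs)
```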

Also note that ATM @aurghs and I are overloaded at work and have very little time to spend on this :/

rabernat commented 2 years ago

Thanks for the info @alexamici!

2. but most backends serialise writes anyway, so the advantage is limited.

I'm not sure I understand this comment, specifically what is meant by "serialise writes". I often use Xarray to do distributed writes to Zarr stores using 100+ distributed dask workers. It works great. We would need the same thing from a TileDB backend.
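For reference, the workflow described here is roughly the following; the cluster address, chunking, and paths are illustrative:

```python
import xarray as xr
from dask.distributed import Client

client = Client("tcp://scheduler:8786")  # connect to an existing dask cluster

# Open lazily as dask arrays, then write: each worker writes the Zarr chunks it
# holds, with no global write lock.
ds = xr.open_dataset("input.nc", chunks={"time": 100})
ds.to_zarr("output.zarr", mode="w")
```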

We are focusing on the user-facing API, but in the end, whether we call it .to, .to_dataset, or .store_dataset is not really a difficult or important question. It's clear we need some generic writing method. The much harder question is the back-end API. As Alessandro says:

Adding support for a single save_dataset entry point to the backend API is trivial, but adding full support for possibly distributed writes looks like it is much more work.

alexamici commented 2 years ago

2. but most backends serialise writes anyway, so the advantage is limited.

I'm not sure I understand this comment, specifically what is meant by "serialise writes". I often use Xarray to do distributed writes to Zarr stores using 100+ distributed dask workers. It works great. We would need the same thing from a TileDB backend.

I should have added "except Zarr" 😅 .

All netCDF writers use xr.backends.locks.get_write_lock to get a scheduler-appropriate write lock. The code is intricate and I can't find a single place to point you to, but as I recall the lock is used so that only one worker/process/thread can write to disk at a time.
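Roughly, the non-concurrent pattern looks like the sketch below; this is not the actual xarray code path (which goes through CachingFileManager and ArrayWriter), and it assumes get_write_lock still lives in xarray.backends.locks:

```python
from xarray.backends.locks import get_write_lock


def write_block(block, path):
    """Placeholder for a backend's actual low-level write call."""


# get_write_lock returns a lock suited to the active dask scheduler
# (threading, multiprocessing, or distributed), keyed by the target resource.
lock = get_write_lock("my-file.nc")

with lock:
    # Only one worker/process/thread holds the lock, so writes are serialised.
    write_block(block=None, path="my-file.nc")
```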

Concurrent writes a la Zarr are awesome and xarray supports them now, so my point was: we can add non-concurrent write support to the plugin architecture quite easily and that will serve a lot of users. But supporting Zarr and other advanced backends via the plugin architecture is a lot more work.