JackKelly / light-speed-io

Read & decompress many chunks of files at high speed

Multi-dataset abstraction layer #142

Open · JackKelly opened this issue 1 week ago

JackKelly commented 1 week ago

Maybe have a layer which sits above multiple datasets. Those datasets could be in any format (zarr, grib, etc.) and live anywhere (maybe some datasets are on local disks, some are in cloud object storage). Possibly some data is duplicated to optimise for different read patterns (see #141).

Users would query the "multi-dataset layer". When reading, the "multi-dataset layer" would select which underlying dataset to use for a given query, and could merge multiple datasets (e.g. NWP and satellite).
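A rough sketch of what I have in mind (all names here are made up for illustration; none of this is LSIO's actual API). Each underlying representation reports a rough cost for a given query, and the layer routes the read to the cheapest one:

```rust
/// A rectangular selection over an n-dimensional array:
/// one (start, stop) range per dimension.
struct Query {
    ranges: Vec<(u64, u64)>,
}

/// Anything that can serve chunks: a local Zarr, GRIB files in object
/// storage, a re-chunked copy optimised for time-series reads, etc.
trait Dataset {
    /// Rough cost of answering `query`, e.g. number of chunks touched
    /// or bytes that would have to be fetched and decompressed.
    fn estimated_cost(&self, query: &Query) -> u64;
    fn read(&self, query: &Query) -> Vec<u8>;
}

/// The layer users actually query. It owns several representations of
/// the same logical data and routes each read to the cheapest one.
struct MultiDataset {
    datasets: Vec<Box<dyn Dataset>>,
}

impl MultiDataset {
    fn read(&self, query: &Query) -> Option<Vec<u8>> {
        self.datasets
            .iter()
            .min_by_key(|ds| ds.estimated_cost(query))
            .map(|ds| ds.read(query))
    }
}
```

The interesting design question is what `estimated_cost` should consider: chunk alignment, storage latency (local disk vs object storage), decompression cost, etc. Merging multiple datasets (e.g. NWP + satellite) would sit on top of this routing step.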

Perhaps this layer could also be responsible for keeping multiple on-disk datasets up-to-date when new data comes along (e.g. duplicating new data to two different datasets, which are optimised for different read patterns). But maybe that's best kept disaggregated as something the user can schedule in a data orchestration tool like Dagster.

Also, maybe the layer could automatically figure out when it'd be worth creating a new "optimised" dataset, e.g. by keeping track of the read patterns it's asked to serve.
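A very rough sketch of the kind of bookkeeping I mean (again, hypothetical names, not real LSIO code): count how often queries take only a thin slice along each dimension, and flag the dominant dimension as a candidate for a differently chunked representation.

```rust
/// Tracks, per dimension, how many queries selected only a narrow range.
struct ReadPatternTracker {
    narrow_reads_per_dim: Vec<u64>,
}

impl ReadPatternTracker {
    /// Record one query, given its (start, stop) range per dimension.
    fn record(&mut self, ranges: &[(u64, u64)], narrow_threshold: u64) {
        for (dim, &(start, stop)) in ranges.iter().enumerate() {
            if stop - start <= narrow_threshold {
                self.narrow_reads_per_dim[dim] += 1;
            }
        }
    }

    /// The dimension most often read narrowly, if it has been hit at
    /// least `min_reads` times: a hint that a representation chunked
    /// finely along it might pay for itself.
    fn suggest_rechunk_dim(&self, min_reads: u64) -> Option<usize> {
        let (dim, &count) = self
            .narrow_reads_per_dim
            .iter()
            .enumerate()
            .max_by_key(|&(_, &n)| n)?;
        if count >= min_reads { Some(dim) } else { None }
    }
}
```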

Maybe this fits into "layer 5: applications"?


JackKelly commented 6 days ago

It might be best to store multiple representations of a given dataset at creation time, rather than first creating a dense Zarr and then deriving a differently chunked dataset from it. So you could imagine wanting to pipe data into multiple drains in parallel.
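Rough sketch of the "multiple drains" idea, using only std threads and channels (hypothetical; the real thing would presumably use LSIO's own I/O machinery and write actual Zarr/GRIB stores rather than printing):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // One channel per drain; each drain might be a Zarr store with a
    // different chunk shape, a local copy, a cloud copy, etc.
    let drain_names = ["dense_zarr", "time_series_zarr"];
    let mut senders = Vec::new();
    let mut handles = Vec::new();

    for name in drain_names {
        let (tx, rx) = mpsc::channel::<Vec<u8>>();
        senders.push(tx);
        handles.push(thread::spawn(move || {
            // Stand-in for "re-chunk and write to this representation".
            for chunk in rx {
                println!("{name}: wrote {} bytes", chunk.len());
            }
        }));
    }

    // The producer (e.g. the ingest pipeline) fans every chunk out to
    // every drain as it arrives, so all representations are built once,
    // in parallel, at creation time.
    for i in 0..3u8 {
        let chunk = vec![i; 1024];
        for tx in &senders {
            tx.send(chunk.clone()).unwrap();
        }
    }

    drop(senders); // Close the channels so the drain threads finish.
    for handle in handles {
        handle.join().unwrap();
    }
}
```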

JackKelly commented 22 hours ago

Better analogy: YouTube stores each video multiple times, each with a different compression setting and resolution. Let's do the same for ndim arrays!