dlt-hub / dlt

data load tool (dlt) is an open source Python library that makes data loading easy 🛠️
https://dlthub.com/docs
Apache License 2.0

Allow easier partitioning and compaction in Delta tables written to filesystem #2062

Open arjun-panchmatia-mechademy opened 3 days ago

arjun-panchmatia-mechademy commented 3 days ago

Feature description

It is currently possible to set a partitioning strategy when writing Delta tables to cloud or local storage, by passing the relevant parameters to the resource decorator or by applying hints. However, the filesystem destination does not appear to natively support more complex partitioning strategies. For example, given a resource that emits timestamps, it is not possible to partition granularly by year, month, and day.

Considering how common this use case is, native support would be very useful. The current workaround is to create year, month, and day columns in the resource itself and partition on those. However, for smaller tables (as is typical with time-series data), the extra columns incur needless storage and compute costs.
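For reference, a minimal sketch of the workaround described above, assuming records carry an ISO-8601 timestamp under a hypothetical `event_ts` key (the field name and the `add_map` wiring are illustrative, not a confirmed dlt recipe):

```python
from datetime import datetime


def add_partition_columns(record: dict) -> dict:
    """Derive year/month/day partition columns from a timestamp field.

    Assumes each record carries an ISO-8601 string under the
    hypothetical key "event_ts"; adjust the field name as needed.
    """
    ts = datetime.fromisoformat(record["event_ts"])
    record["year"] = ts.year
    record["month"] = ts.month
    record["day"] = ts.day
    return record


# In a dlt pipeline this would typically be applied per item, e.g. via
# my_resource.add_map(add_partition_columns), with the three derived
# columns then marked as partition columns through column hints.
```

The three derived columns are exactly the "needless storage" the issue mentions: they duplicate information already present in the timestamp.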

Compaction would also be nice to have: near real-time tables tend to receive very frequent writes, each producing a small file. This quickly makes the data hard to query with tools like Polars or DuckDB that read directly from the Delta table.
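To make the small-files concern concrete, here is a sketch of a trigger heuristic one might run out-of-band today; the thresholds are illustrative, not dlt defaults, and the compaction call in the comment uses the separate `deltalake` package rather than anything dlt exposes:

```python
def needs_compaction(
    file_sizes_bytes: list[int],
    max_small_files: int = 32,
    small_file_bytes: int = 32 * 1024**2,
) -> bool:
    """Heuristic: compact once too many files fall below a size threshold.

    Both thresholds are arbitrary examples; tune them to the workload.
    """
    small = [s for s in file_sizes_bytes if s < small_file_bytes]
    return len(small) > max_small_files


# If the heuristic fires, compaction could be run manually with the
# deltalake package, e.g.:
#     from deltalake import DeltaTable
#     DeltaTable("path/to/table").optimize.compact()
# (per the deltalake Python docs; dlt does not trigger this itself today)
```

Having dlt run something like this automatically after loads is essentially what this part of the request asks for.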

Are you a dlt user?

None

Use case

As described above, we are trying to find a way to partition our data without appending additional columns.

Proposed solution

I am unsure what the syntax for this would look like, but considering how common this use case is, datetime-specific partitioning options (e.g. deriving year, month, and day partitions from a timestamp column) could perhaps be integrated.

Related issues

No response