An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive, as well as APIs for multiple languages
Support for a categorical / enum / dictionary data type in Delta tables and Spark DataFrames. This appears to be generally possible with Parquet. Example data types: Arrow (Dictionary), pandas (Categorical), DuckDB (ENUM).
Motivation
This would be useful for saving space in large tables with low-cardinality columns, while also keeping the schema simple: it avoids having to create a separate dimension table and a PK-FK relationship just for a small set of value mappings.
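A rough illustration of the space argument (numbers are approximate and depend on string lengths and row count): a low-cardinality column stored as `category` keeps one small integer code per row plus a single copy of each distinct value, instead of one full string per row.

```python
import pandas as pd

n = 1_000_000
statuses = ["active", "inactive", "pending"]  # hypothetical low-cardinality values

as_strings = pd.Series([statuses[i % 3] for i in range(n)])
as_category = as_strings.astype("category")

# Object dtype pays for a full Python string per row; the categorical
# version stores int8 codes plus the three distinct values once.
print(as_strings.memory_usage(deep=True))
print(as_category.memory_usage(deep=True))
```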
Further details
Willingness to contribute
The Delta Lake Community encourages new feature contributions. Would you or another member of your organization be willing to contribute an implementation of this feature?
[ ] Yes. I can contribute this feature independently.
[ ] Yes. I would be willing to contribute this feature with guidance from the Delta Lake community.
[ X ] No. I cannot contribute this feature at this time.