delta-io / delta

An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive, as well as language APIs
https://delta.io
Apache License 2.0

[kernel][Feature Request] Row based Parquet Writer with flush capability #3256

Open Sandy3094 opened 4 months ago

Sandy3094 commented 4 months ago

Feature request

Which Delta project/connector is this regarding?

Delta Kernel

Overview

Delta Kernel Default Engine's Parquet writer accepts a FilteredColumnarBatch. Is there any plan to add a row-based Parquet writer? For example, org.apache.parquet.hadoop.ParquetWriter writes row by row and flushes the buffered data to the Parquet file once a threshold (rowGroupSize) is reached. That setting ensures we don't run out of JVM heap space. Is there a similar mechanism when using Delta Kernel's default Parquet writer?
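For reference, the row-by-row pattern described above looks roughly like the following sketch using parquet-avro. The AvroParquetWriter builder, the tiny Avro schema, and the output path are illustrative assumptions; Delta Kernel's default writer does not expose this API.

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetWriter;

public class RowBasedWriterSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative Avro record schema with a single long column.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Rec\","
            + "\"fields\":[{\"name\":\"id\",\"type\":\"long\"}]}");

        try (ParquetWriter<GenericRecord> writer = AvroParquetWriter
                .<GenericRecord>builder(new Path("/tmp/rows.parquet"))
                .withSchema(schema)
                // Rows are buffered in memory and flushed to a new row group
                // once roughly this many bytes accumulate, bounding heap usage.
                .withRowGroupSize(64L * 1024 * 1024)
                .build()) {
            for (long i = 0; i < 1_000_000L; i++) {
                GenericRecord rec = new GenericData.Record(schema);
                rec.put("id", i);
                writer.write(rec); // one row at a time
            }
        } // close() flushes the final row group and writes the footer
    }
}
```

The key point is that memory pressure is bounded by rowGroupSize rather than by the total row count, which is what a columnar-batch API cannot give you when a single batch is large.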

Further details

As a workaround I tried using org.apache.parquet.hadoop.ParquetWriter to write the Parquet files and Delta Kernel only to commit them. But to use Apache's ParquetWriter we have to convert the Kernel StructType to a Parquet MessageType, which requires ParquetSchemaUtils.toParquetSchema to be exposed publicly. Delta Standalone has a ParquetSchemaConverter, but Kernel has no publicly accessible converter.

Willingness to contribute

The Delta Lake Community encourages new feature contributions. Would you or another member of your organization be willing to contribute an implementation of this feature?

vkorukanti commented 4 months ago

@Sandy3094 We have a config that you can pass in the Configuration object used in creating DefaultEngine: delta.kernel.default.parquet.writer.targetMaxFileSize. Currently this is kind of private. If this is what you need, we can document it on the DefaultParquetHandler and DefaultEngine docs. Let me know.
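If that is the route taken, the setting would presumably be passed through the Hadoop Configuration handed to DefaultEngine. The snippet below is a sketch of that wiring; the 128 MB value is an arbitrary example, and since the property is currently private its name and semantics could change.

```java
import org.apache.hadoop.conf.Configuration;
import io.delta.kernel.defaults.engine.DefaultEngine;
import io.delta.kernel.engine.Engine;

Configuration hadoopConf = new Configuration();
// Currently-private knob mentioned above; value is a target file size in bytes.
hadoopConf.set("delta.kernel.default.parquet.writer.targetMaxFileSize",
        String.valueOf(128L * 1024 * 1024));
Engine engine = DefaultEngine.create(hadoopConf);
```

Note this caps the size of each output file, which is related to but not the same as the per-row-group flush threshold the original request asks about.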