mavam opened 6 months ago
DuckDB does partitioned writes with Hive partitioning, and also supports reading such partitions as follows:
```sql
SELECT * FROM read_parquet('orders/*/*/*.parquet', hive_partitioning = 1);
```
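For context, the write side looks like this in DuckDB (a minimal sketch; the `orders` table and the `year`/`month` partition columns are assumed names for illustration):

```sql
-- Write the orders table as Parquet, partitioned by year and month.
-- This produces a Hive-style hierarchy, e.g. orders/year=2024/month=1/data_0.parquet
COPY orders TO 'orders' (FORMAT PARQUET, PARTITION_BY (year, month));
```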
Users most likely don't even have to worry about this:

> By default the system tries to infer if the provided files are in a hive partitioned hierarchy. And if so, the `hive_partitioning` flag is enabled automatically. The autodetection will look at the names of the folders and search for a `'key' = 'value'` pattern. This behaviour can be overridden by setting the `hive_partitioning` flag manually.
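Concretely, given a layout like `orders/year=2024/month=1/...`, the partition columns come back without setting any flag (a minimal sketch; the path and column names follow the assumed example above):

```sql
-- Autodetection kicks in because the folder names match the key=value pattern,
-- so year and month surface as regular columns in the result.
SELECT year, month, count(*)
FROM read_parquet('orders/*/*/*.parquet')
GROUP BY year, month;
```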
Given that Hive partitioning is a quasi-standard across the data community, and that many data tools support it OOTB, we should start with this approach, as it maximizes interoperability and simplicity. For example, Arrow also supports reading partitioned datasets.
When processing large streams of files, we need the ability to cut them at a specific point. Today, users have to rely on third-party tools for this, such as `logrotate`. There are two obvious ways to do this: