malhotrashivam opened 1 month ago
We should also check whether there is any specific guidance on the metadata we should be writing. When writing a pyarrow table via pyiceberg, we noticed that the metadata key `ARROW:schema` contains the Arrow schema; pyspark, by contrast, wrote a metadata key `iceberg.schema` containing the Iceberg schema.
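For reference, here is a minimal sketch (assuming parquet-hadoop is on the classpath) that dumps a Parquet footer's key/value metadata, which is where these keys show up:

```java
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.util.HadoopInputFile;

public class DumpFooterMetadata {
    public static void main(String[] args) throws Exception {
        // Open the footer of the Parquet file passed as the first argument
        try (ParquetFileReader reader = ParquetFileReader.open(
                HadoopInputFile.fromPath(new Path(args[0]), new Configuration()))) {
            // The key/value metadata is where writers stash entries such as
            // "ARROW:schema" (pyarrow/pyiceberg) or "iceberg.schema" (pyspark)
            final Map<String, String> kv =
                    reader.getFooter().getFileMetaData().getKeyValueMetaData();
            kv.forEach((key, value) -> System.out.println(key + " -> " + value));
        }
    }
}
```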
The spec is being developed alongside the implementation work, but at a high level the decided APIs look like this (see the sketch after this list):

- `void append(Tables...)` (or `appendDataFiles`/`appendTables`): writes tables to data files 1:1, then runs a transaction to add the new data files
- `void overwrite(Tables...)`: writes tables to data files 1:1, then runs a transaction to remove all existing data files and add the new ones
- `List<URI> write(Tables...)`: writes tables to data files 1:1, without putting anything in a transaction
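A hypothetical Java sketch of those shapes, not the final interface; the names and the `Table` type below are placeholders:

```java
import java.net.URI;
import java.util.List;

// Hypothetical sketch only; not the final Deephaven API.
public interface IcebergTableWriter {

    interface Table {} // stand-in for the engine's table type

    // Writes each table to exactly one data file, then commits a
    // transaction adding the new data files to the Iceberg table.
    void append(Table... tables);

    // Writes each table to exactly one data file, then commits a
    // transaction removing all existing data files and adding the new ones.
    void overwrite(Table... tables);

    // Writes each table to exactly one data file and returns the file
    // URIs; nothing is committed to the Iceberg table.
    List<URI> write(Table... tables);
}
```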
An important requirement is that we persist the Iceberg schema's field-ids into the Parquet schema's `Type` `field_id` field, so that Iceberg columns can be mapped to Parquet columns.
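One way to do this with parquet-java's `Types` builder, which lets an id be attached to each field; the column names and ids below are illustrative only:

```java
import org.apache.parquet.schema.LogicalTypeAnnotation;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName;
import org.apache.parquet.schema.Types;

public class FieldIdExample {
    public static void main(String[] args) {
        // Illustrative schema: in practice the ids (1, 2) would come from
        // the Iceberg schema's field-ids, not be assigned by hand.
        final MessageType schema = Types.buildMessage()
                .required(PrimitiveTypeName.INT64).id(1).named("order_id")
                .optional(PrimitiveTypeName.BINARY)
                        .as(LogicalTypeAnnotation.stringType()).id(2).named("customer")
                .named("root");

        // Prints each field with its id, e.g. "required int64 order_id = 1"
        System.out.println(schema);
    }
}
```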