I am using Flink to write data into an Iceberg table. Here's the table definition and test data:
CREATE TABLE test (
id INT,
name STRING,
PRIMARY KEY(`id`) NOT ENFORCED
) WITH (
'connector' = 'iceberg',
'catalog-name'='hadoop_catalog',
'catalog-type'='hadoop',
'warehouse'='s3a://umbrella/hadoop/warehouse',
'table-name' = 'test',
'format-version'='2',
'write.upsert.enabled'='true'
);
INSERT INTO test VALUES (1, 'A'), (2, 'B'), (3, 'C');
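As a sanity check, reading the table back from the Flink SQL client confirms the three rows were committed before switching to DuckDB (included only for completeness):
SELECT * FROM test;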
On the DuckDB side, it can successfully read the Iceberg data from S3:
D select * FROM iceberg_scan('s3://umbrella/hadoop/warehouse/default_database/test');
┌───────┬─────────┐
│  id   │  name   │
│ int32 │ varchar │
├───────┼─────────┤
│     1 │ A       │
│     2 │ B       │
│     3 │ C       │
└───────┴─────────┘
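For context, the DuckDB session is set up roughly as follows before running the scan (a sketch; the secret name, credentials and region below are placeholders, not values from my actual setup):
INSTALL iceberg;
LOAD iceberg;
INSTALL httpfs;
LOAD httpfs;
-- placeholder S3 credentials for the bucket holding the warehouse
CREATE SECRET umbrella_s3 (
    TYPE S3,
    KEY_ID 'my-access-key',
    SECRET 'my-secret-key',
    REGION 'us-east-1'
);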
Then I update the record with id = 3 using an upsert in Flink:
INSERT INTO test /*+ OPTIONS('upsert-enabled'='true') */ VALUES (3, 'C-C')
But I get 'Binder Error: Table "iceberg_scan_deletes" does not have a column named "file_path"' when querying it again in DuckDB:
D select * FROM iceberg_scan('s3://umbrella/hadoop/warehouse/default_database/test');
Binder Error: Table "iceberg_scan_deletes" does not have a column named "file_path"
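In case it helps with diagnosis, the table's snapshots and file list can also be inspected from DuckDB via the iceberg extension's metadata table functions (same path as above; shown only as a sketch):
-- snapshots written by Flink
SELECT * FROM iceberg_snapshots('s3://umbrella/hadoop/warehouse/default_database/test');
-- data and delete files referenced by the manifests
SELECT * FROM iceberg_metadata('s3://umbrella/hadoop/warehouse/default_database/test');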
I am using flink-1.18.1 + iceberg-1.5.2 + duckdb-1.0.0, so I suppose this is caused by a version incompatibility? What's the recommended Iceberg version for duckdb-1.0.0? Thanks!