
Apache Iceberg
https://iceberg.apache.org/
Apache License 2.0

Iceberg Read is not working on Iceberg Hive table #11168

Open AwasthiSomesh opened 5 days ago

AwasthiSomesh commented 5 days ago

Apache Iceberg version

1.6.1 (latest release)

Query engine

Hive

Please describe the bug 🐞

Insert is working fine: the `.parquet` file is generated in HDFS, and when I decode that parquet file with parquet-tools I can see the data that I wrote with the insert query. But when I run a select, it shows no data. Here is the output:

```
Time taken: 0.001 seconds
+----------------------------+
| excercise_hive_eceberg1.i  |
+----------------------------+
+----------------------------+
```

Here is the output of `desc formatted`:

```
+-------------------------------+------------------------------------------------------+----------+
| col_name                      | data_type                                            | comment  |
+-------------------------------+------------------------------------------------------+----------+
| i                             | int                                                  |          |
|                               | NULL                                                 | NULL     |
| # Detailed Table Information  | NULL                                                 | NULL     |
| Database:                     | somesh_dev                                           | NULL     |
| OwnerType:                    | USER                                                 | NULL     |
| Owner:                        | hive                                                 | NULL     |
| CreateTime:                   | Thu Sep 19 08:40:33 UTC 2024                         | NULL     |
| LastAccessTime:               | UNKNOWN                                              | NULL     |
| Retention:                    | 0                                                    | NULL     |
| Location:                     | s3a://com.somesh/opt/hive/data/warehouse/somesh_dev.db/excercise_hive_eceberg1 | NULL |
| Table Type:                   | EXTERNAL_TABLE                                       | NULL     |
| Table Parameters:             | NULL                                                 | NULL     |
|                               | EXTERNAL                                             | TRUE     |
|                               | TRANSLATED_TO_EXTERNAL                               | TRUE     |
|                               | bucketing_version                                    | 2        |
|                               | current-schema                                       | {"type":"struct","schema-id":0,"fields":[{"id":1,"name":"i","required":false,"type":"int"}]} |
|                               | external.table.purge                                 | TRUE     |
|                               | format-version                                       | 2        |
|                               | iceberg.orc.files.only                               | false    |
|                               | metadata_location                                    | s3a://com.somesh/opt/hive/data/warehouse/somesh_dev.db/excercise_hive_eceberg1/metadata/00000-eb95ea40-7408-4368-8204-baaf3960fd41.metadata.json |
|                               | numFiles                                             | 0        |
|                               | numRows                                              | 0        |
|                               | parquet.compression                                  | zstd     |
|                               | rawDataSize                                          | 0        |
|                               | serialization.format                                 | 1        |
|                               | snapshot-count                                       | 0        |
|                               | storage_handler                                      | org.apache.iceberg.mr.hive.HiveIcebergStorageHandler |
|                               | table_type                                           | ICEBERG  |
|                               | totalSize                                            | 0        |
|                               | transient_lastDdlTime                                | 1726737447 |
|                               | uuid                                                 | 3a828192-5f28-4d20-9be0-841f078f60e4 |
|                               | NULL                                                 | NULL     |
| # Storage Information         | NULL                                                 | NULL     |
| SerDe Library:                | org.apache.iceberg.mr.hive.HiveIcebergSerDe          | NULL     |
| InputFormat:                  | org.apache.iceberg.mr.hive.HiveIcebergInputFormat    | NULL     |
| OutputFormat:                 | org.apache.iceberg.mr.hive.HiveIcebergOutputFormat   | NULL     |
| Compressed:                   | No                                                   | NULL     |
| Sort Columns:                 | []                                                   | NULL     |
+-------------------------------+------------------------------------------------------+----------+
38 rows selected (1.743 seconds)
```
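Note that the `desc formatted` output shows `snapshot-count 0` and `numRows 0`, and `metadata_location` still points at the initial `00000-*.metadata.json` file. This suggests the INSERT's commit may never have reached the Iceberg table metadata, in which case a SELECT (which reads only data files reachable from the current snapshot) correctly returns nothing even though `.parquet` files exist under the table path. One way to check this is to inspect the current metadata JSON directly; a minimal sketch in Python, assuming you have copied the metadata file locally (the file name and the example document here are made up for illustration):

```python
import json

def snapshot_summary(metadata_path):
    """Report snapshot info from an Iceberg table-metadata JSON file.

    SELECT on an Iceberg table only reads data files reachable from the
    current snapshot; if the snapshot list is empty, queries return no
    rows even when .parquet files exist under the table location.
    """
    with open(metadata_path) as f:
        meta = json.load(f)
    snapshots = meta.get("snapshots", [])
    return {
        "format_version": meta.get("format-version"),
        "current_snapshot_id": meta.get("current-snapshot-id"),
        "snapshot_count": len(snapshots),
    }

# Hypothetical, minimal metadata document resembling the state in this
# issue (no snapshots committed yet), written locally for illustration:
example = {"format-version": 2, "current-snapshot-id": -1, "snapshots": []}
with open("example.metadata.json", "w") as f:
    json.dump(example, f)

print(snapshot_summary("example.metadata.json"))
# -> {'format_version': 2, 'current_snapshot_id': -1, 'snapshot_count': 0}
```

If the real metadata file also shows an empty `snapshots` list, the parquet files written by the INSERT are orphans that no Iceberg reader will see, which points at the commit path in Hive's Iceberg integration rather than the read path.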

Please help here if anyone knows the solution.

Willingness to contribute

manuzhang commented 5 days ago

What's your Hive version?

AwasthiSomesh commented 4 days ago

@manuzhang it's Hive 4.0.0

manuzhang commented 2 days ago

Hive 4.0.0 integration is maintained on the Hive side. It might be better to ask in the Hive community.