Seems like it's not capturing any table. Could you try the following settings? Is your database named postgres?
debezium.source.database.dbname=postgres
debezium.source.schema.include.list=public
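
For context, a fuller sketch of the source section those two lines belong to; the hostname, credentials, and table list are placeholders, not values from this issue:

debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector
debezium.source.database.hostname=postgres
debezium.source.database.port=5432
debezium.source.database.user=postgres
debezium.source.database.password=postgres
debezium.source.database.dbname=postgres
debezium.source.schema.include.list=public
# optional: capture only specific tables
debezium.source.table.include.list=public.student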
@ismailsimsek, thanks. I tried your config above with the Hadoop catalog, saving the data locally, and it works. However, when I used the Hive metastore catalog, I got an error:
2023-08-10 09:26:54,123 ERROR [io.deb.ser.ConnectorLifecycle] (pool-6-thread-1) Connector completed: success = 'false', message = 'Stopping connector after error in the application's handler method: Failed to create file: file:/user/hive/warehouse/test_ducdn_public_student/metadata/00000-d043d821-4ad7-4e28-9400-466953e631dd.metadata.json', error = 'org.apache.iceberg.exceptions.RuntimeIOException: Failed to create file: file:/user/hive/warehouse/test_ducdn_public_student/metadata/00000-d043d821-4ad7-4e28-9400-466953e631dd.metadata.json': org.apache.iceberg.exceptions.RuntimeIOException: Failed to create file: file:/user/hive/warehouse/test_ducdn_public_student/metadata/00000-d043d821-4ad7-4e28-9400-466953e631dd.metadata.json
at org.apache.iceberg.hadoop.HadoopOutputFile.createOrOverwrite(HadoopOutputFile.java:87)
at org.apache.iceberg.TableMetadataParser.internalWrite(TableMetadataParser.java:124)
at org.apache.iceberg.TableMetadataParser.overwrite(TableMetadataParser.java:114)
at org.apache.iceberg.BaseMetastoreTableOperations.writeNewMetadata(BaseMetastoreTableOperations.java:170)
at org.apache.iceberg.BaseMetastoreTableOperations.writeNewMetadataIfRequired(BaseMetastoreTableOperations.java:160)
at org.apache.iceberg.hive.HiveTableOperations.doCommit(HiveTableOperations.java:185)
at org.apache.iceberg.BaseMetastoreTableOperations.commit(BaseMetastoreTableOperations.java:135)
at org.apache.iceberg.BaseMetastoreCatalog$BaseMetastoreCatalogTableBuilder.create(BaseMetastoreCatalog.java:199)
at io.debezium.server.iceberg.IcebergUtil.createIcebergTable(IcebergUtil.java:109)
at io.debezium.server.iceberg.IcebergChangeConsumer.lambda$loadIcebergTable$1(IcebergChangeConsumer.java:192)
at java.base/java.util.Optional.orElseGet(Optional.java:369)
at io.debezium.server.iceberg.IcebergChangeConsumer.loadIcebergTable(IcebergChangeConsumer.java:188)
at io.debezium.server.iceberg.IcebergChangeConsumer.handleBatch(IcebergChangeConsumer.java:166)
at io.debezium.embedded.ConvertingEngineBuilder.lambda$notifying$2(ConvertingEngineBuilder.java:101)
at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:912)
at io.debezium.embedded.ConvertingEngineBuilder$2.run(ConvertingEngineBuilder.java:229)
at io.debezium.server.DebeziumServer.lambda$start$1(DebeziumServer.java:170)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.io.IOException: Mkdirs failed to create file:/user/hive/warehouse/test_ducdn_public_student/metadata (exists=false, cwd=file:/home/ducdn/Desktop/workspace/debezium-server-iceberg-dist-0.3.0-SNAPSHOT/debezium-server-iceberg)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:515)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:500)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1195)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1175)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1064)
at org.apache.iceberg.hadoop.HadoopOutputFile.createOrOverwrite(HadoopOutputFile.java:85)
... 19 more
I set debezium.sink.iceberg.warehouse=s3a://datalake/warehouse, but it seems to ignore this config. Can you give some recommendations about this problem? It might not be connecting to MinIO and the Hive metastore.
You could check the hostnames. If you are using Docker containers, localhost with a mapped port usually won't work from inside another container; use the service hostnames instead, e.g. http://minio:9000 and thrift://hive-metastore:9083.
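
For reference, a minimal sketch of the Hive-catalog sink section using those service hostnames; the bucket and the MinIO keys are placeholders. As far as I know, the debezium.sink.iceberg.* properties are passed through to the Iceberg catalog and the Hadoop configuration:

debezium.sink.type=iceberg
debezium.sink.iceberg.type=hive
debezium.sink.iceberg.uri=thrift://hive-metastore:9083
debezium.sink.iceberg.warehouse=s3a://datalake/warehouse
# S3A/MinIO settings, picked up via the Hadoop configuration
debezium.sink.iceberg.fs.s3a.endpoint=http://minio:9000
debezium.sink.iceberg.fs.s3a.access.key=minioadmin
debezium.sink.iceberg.fs.s3a.secret.key=minioadmin
debezium.sink.iceberg.fs.s3a.path.style.access=true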
@ismailsimsek, I am running the containers with Docker Compose, and I tried the localhost, minio, and hive-metastore hostnames. In addition, I added the hosts to /etc/hosts. However, I still got the same error as above.
Could you share your docker-compose file? Another thing to look into is the Hive and MinIO integration, i.e. the metastore-site.xml settings; the file:/user/hive/warehouse path in your stack trace suggests the metastore is still falling back to its default local warehouse directory. Leaving one example here:
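
A minimal metastore-site.xml sketch, assuming the metastore should use the s3a warehouse above and reach MinIO at http://minio:9000; the access keys are placeholders:

<configuration>
  <property>
    <name>metastore.warehouse.dir</name>
    <value>s3a://datalake/warehouse</value>
  </property>
  <property>
    <name>fs.s3a.endpoint</name>
    <value>http://minio:9000</value>
  </property>
  <!-- placeholder credentials; use your MinIO keys -->
  <property>
    <name>fs.s3a.access.key</name>
    <value>minioadmin</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>minioadmin</value>
  </property>
  <property>
    <name>fs.s3a.path.style.access</name>
    <value>true</value>
  </property>
</configuration>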
I followed the tutorial, using debezium-server-iceberg to read data from Postgres and save it in Iceberg format. However, in the log I don't see any ingestion taking place, and I don't see any error in the log either. In MinIO, I don't see the data saved as an Iceberg table. This is the config file:
Log: