night2201 opened this issue 10 months ago
@night2201 Could you enable debug logging for this class? Then you can see more logs:
quarkus.log.category."io.debezium.server.iceberg.batchsizewait".level=DEBUG
Also, note that it does not wait while the snapshot process is running (to increase consuming speed during the snapshot): https://github.com/memiiso/debezium-server-iceberg/blob/5819c1c840e8bf6f92d8545ec0091d2b09c5273d/debezium-server-iceberg-sink/src/main/java/io/debezium/server/iceberg/batchsizewait/MaxBatchSizeWait.java#L52-L54
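The idea behind this wait strategy can be sketched as below. This is a simplified, self-contained illustration, not the actual MaxBatchSizeWait code: the class name and the queue-size supplier are hypothetical, and the real implementation reads the queue size from the connector's streaming metrics via JMX.

```java
import java.util.function.IntSupplier;

// Simplified sketch of a MaxBatchSizeWait-style strategy: delay the next
// batch until the source queue holds enough records or a maximum wait
// elapses, and skip waiting entirely while the snapshot is running.
class BatchSizeWaitSketch {
    static final int MAX_BATCH_SIZE = 10_000;     // debezium.source.max.batch.size
    static final long MAX_WAIT_MS = 120_000;      // batch-size-wait.max-wait-ms
    static final long WAIT_INTERVAL_MS = 10_000;  // batch-size-wait.wait-interval-ms

    /** Returns the total time (ms) spent waiting. */
    static long waitForBatch(IntSupplier currentQueueSize, boolean snapshotRunning)
            throws InterruptedException {
        if (snapshotRunning) {
            return 0; // don't throttle while the initial snapshot is loading
        }
        long waited = 0;
        while (waited < MAX_WAIT_MS && currentQueueSize.getAsInt() < MAX_BATCH_SIZE) {
            Thread.sleep(WAIT_INTERVAL_MS);
            waited += WAIT_INTERVAL_MS;
        }
        return waited;
    }
}
```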
quarkus.log.category."io.debezium.server.iceberg.batchsizewait".level=DEBUG
and it works as I expected.
Above, I tested with PostgreSQL successfully. Now I am trying MySQL with the following config:
debezium.source.connector.class=io.debezium.connector.mysql.MySqlConnector
debezium.source.offset.storage.file.filename=/tmp/offset1
debezium.source.offset.flush.interval.ms=0
debezium.source.database.hostname=10.159.19.102
debezium.source.database.port=3306
debezium.source.database.user=root
debezium.source.database.password=root
#debezium.source.database.dbname=mydb
debezium.source.database.server.name=mysql_cdc
debezium.source.database.include.list=mydb
debezium.source.topic.prefix=ducdn_icebergg
debezium.source.database.server.id=184054
debezium.source.schema.history.internal.kafka.bootstrap.servers=broker:29092
debezium.source.schema.history.internal.kafka.topic=ducdn_schema_changes.mydb
debezium.source.include.schema.changes=true
2023-09-08 09:50:38,956 INFO [io.deb.rel.RelationalSnapshotChangeEventSource] (pool-12-thread-1) Exporting data from table 'mydb.student' (1 of 2 tables)
2023-09-08 09:50:38,969 INFO [io.deb.rel.RelationalSnapshotChangeEventSource] (pool-12-thread-1) Finished exporting 3 records for table 'mydb.student' (1 of 2 tables); total duration '00:00:00.013'
2023-09-08 09:50:38,970 INFO [io.deb.rel.RelationalSnapshotChangeEventSource] (pool-12-thread-1) Exporting data from table 'mydb.test_user' (2 of 2 tables)
2023-09-08 09:50:38,979 INFO [io.deb.rel.RelationalSnapshotChangeEventSource] (pool-12-thread-1) Finished exporting 5 records for table 'mydb.test_user' (2 of 2 tables); total duration '00:00:00.009'
2023-09-08 09:50:38,983 INFO [io.deb.pip.sou.AbstractSnapshotChangeEventSource] (debezium-mysqlconnector-ducdn_icebergg-change-event-source-coordinator) Snapshot - Final stage
2023-09-08 09:50:38,983 INFO [io.deb.pip.sou.AbstractSnapshotChangeEventSource] (debezium-mysqlconnector-ducdn_icebergg-change-event-source-coordinator) Snapshot completed
2023-09-08 09:50:39,048 INFO [io.deb.pip.ChangeEventSourceCoordinator] (debezium-mysqlconnector-ducdn_icebergg-change-event-source-coordinator) Snapshot ended with SnapshotResult [status=COMPLETED, offset=MySqlOffsetContext [sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=SourceInfo [currentGtid=null, currentBinlogFilename=binlog.000007, currentBinlogPosition=157, currentRowNumber=0, serverId=0, sourceTime=2023-09-08T09:50:37Z, threadId=-1, currentQuery=null, tableIds=[mydb.test_user], databaseName=mydb], snapshotCompleted=true, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet=null, currentGtidSet=null, restartBinlogFilename=binlog.000007, restartBinlogPosition=157, restartRowsToSkip=0, restartEventsToSkip=0, currentEventLengthInBytes=0, inTransaction=false, transactionId=null, incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]]]
2023-09-08 09:50:39,055 INFO [io.deb.uti.Threads] (debezium-mysqlconnector-ducdn_icebergg-change-event-source-coordinator) Requested thread factory for connector MySqlConnector, id = ducdn_icebergg named = binlog-client
2023-09-08 09:50:39,059 INFO [io.deb.pip.ChangeEventSourceCoordinator] (debezium-mysqlconnector-ducdn_icebergg-change-event-source-coordinator) Starting streaming
2023-09-08 09:50:39,069 INFO [io.deb.con.mys.MySqlStreamingChangeEventSource] (debezium-mysqlconnector-ducdn_icebergg-change-event-source-coordinator) Skip 0 events on streaming start
2023-09-08 09:50:39,069 INFO [io.deb.con.mys.MySqlStreamingChangeEventSource] (debezium-mysqlconnector-ducdn_icebergg-change-event-source-coordinator) Skip 0 rows on streaming start
2023-09-08 09:50:39,070 INFO [io.deb.uti.Threads] (debezium-mysqlconnector-ducdn_icebergg-change-event-source-coordinator) Creating thread debezium-mysqlconnector-ducdn_icebergg-binlog-client
2023-09-08 09:50:39,077 INFO [io.deb.uti.Threads] (blc-10.159.19.102:3306) Creating thread debezium-mysqlconnector-ducdn_icebergg-binlog-client
2023-09-08 09:50:39,091 INFO [com.git.shy.mys.bin.BinaryLogClient] (blc-10.159.19.102:3306) Connected to 10.159.19.102:3306 at binlog.000007/157 (sid:184054, cid:124)
2023-09-08 09:50:39,092 INFO [io.deb.con.mys.MySqlStreamingChangeEventSource] (blc-10.159.19.102:3306) Connected to MySQL binlog at 10.159.19.102:3306, starting at MySqlOffsetContext [sourceInfoSchema=Schema{io.debezium.connector.mysql.Source:STRUCT}, sourceInfo=SourceInfo [currentGtid=null, currentBinlogFilename=binlog.000007, currentBinlogPosition=157, currentRowNumber=0, serverId=0, sourceTime=2023-09-08T09:50:37Z, threadId=-1, currentQuery=null, tableIds=[mydb.test_user], databaseName=mydb], snapshotCompleted=true, transactionContext=TransactionContext [currentTransactionId=null, perTableEventCount={}, totalEventCount=0], restartGtidSet=null, currentGtidSet=null, restartBinlogFilename=binlog.000007, restartBinlogPosition=157, restartRowsToSkip=0, restartEventsToSkip=0, currentEventLengthInBytes=0, inTransaction=false, transactionId=null, incrementalSnapshotContext =IncrementalSnapshotContext [windowOpened=false, chunkEndPosition=null, dataCollectionsToSnapshot=[], lastEventKeySent=null, maximumKey=null]]
2023-09-08 09:50:39,096 INFO [io.deb.uti.Threads] (blc-10.159.19.102:3306) Creating thread debezium-mysqlconnector-ducdn_icebergg-binlog-client
2023-09-08 09:50:39,097 INFO [io.deb.con.mys.MySqlStreamingChangeEventSource] (debezium-mysqlconnector-ducdn_icebergg-change-event-source-coordinator) Waiting for keepalive thread to start
2023-09-08 09:50:39,199 INFO [io.deb.con.mys.MySqlStreamingChangeEventSource] (debezium-mysqlconnector-ducdn_icebergg-change-event-source-coordinator) Keepalive thread is running
2023-09-08 09:50:39,436 WARN [org.apa.had.hiv.con.HiveConf] (pool-6-thread-1) HiveConf of name hive.other.configs does not exist
2023-09-08 09:50:39,437 WARN [org.apa.had.hiv.con.HiveConf] (pool-6-thread-1) HiveConf of name hive.metastore.table.owner does not exist
2023-09-08 09:50:39,506 WARN [org.apa.had.uti.NativeCodeLoader] (pool-6-thread-1) Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-09-08 09:50:39,756 WARN [io.deb.ser.ice.IcebergUtil] (pool-6-thread-1) Table not found: default.dbz_mysql_ducdn_icebergg_mydb_test_user
2023-09-08 09:50:39,816 WARN [io.deb.ser.ice.IcebergUtil] (pool-6-thread-1) Creating table:'default.dbz_mysql_ducdn_icebergg_mydb_test_user'
schema:table {
1: user_id: required string (id)
2: name: optional string
3: created_at: optional long
4: test_addcol: optional int
5: updated_at: optional long
6: __op: optional string
7: __table: optional string
8: __source_ts_ms: optional timestamptz
9: __db: optional string
10: __deleted: optional string
}
rowIdentifier:[user_id]
2023-09-08 09:50:39,829 INFO [org.apa.ice.BaseMetastoreCatalog] (pool-6-thread-1) Table properties set at catalog level through catalog properties: {}
2023-09-08 09:50:39,845 INFO [org.apa.ice.BaseMetastoreCatalog] (pool-6-thread-1) Table properties enforced at catalog level through catalog properties: {}
2023-09-08 09:50:40,430 WARN [org.apa.had.met.imp.MetricsConfig] (pool-6-thread-1) Cannot locate configuration: tried hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
2023-09-08 09:50:42,273 INFO [org.apa.ice.hiv.HiveTableOperations] (pool-6-thread-1) Committed to table iceberg.default.dbz_mysql_ducdn_icebergg_mydb_test_user with the new metadata location s3a://datalake/lakehouse/dbz_mysql_ducdn_icebergg_mydb_test_user/metadata/00000-efaa3e4c-4d97-4ead-8da0-442c36eec369.metadata.json
2023-09-08 09:50:42,274 INFO [org.apa.ice.BaseMetastoreTableOperations] (pool-6-thread-1) Successfully committed to table iceberg.default.dbz_mysql_ducdn_icebergg_mydb_test_user in 2404 ms
2023-09-08 09:50:42,293 INFO [org.apa.ice.BaseMetastoreTableOperations] (pool-6-thread-1) Refreshing table metadata from new version: s3a://datalake/lakehouse/dbz_mysql_ducdn_icebergg_mydb_test_user/metadata/00000-efaa3e4c-4d97-4ead-8da0-442c36eec369.metadata.json
2023-09-08 09:50:43,885 INFO [org.apa.ice.hiv.HiveTableOperations] (pool-6-thread-1) Committed to table iceberg.default.dbz_mysql_ducdn_icebergg_mydb_test_user with the new metadata location s3a://datalake/lakehouse/dbz_mysql_ducdn_icebergg_mydb_test_user/metadata/00001-a98c514c-c0e1-43ec-91c0-edf064c6f297.metadata.json
2023-09-08 09:50:43,885 INFO [org.apa.ice.BaseMetastoreTableOperations] (pool-6-thread-1) Successfully committed to table iceberg.default.dbz_mysql_ducdn_icebergg_mydb_test_user in 187 ms
2023-09-08 09:50:43,885 INFO [org.apa.ice.SnapshotProducer] (pool-6-thread-1) Committed snapshot 5215099857656289668 (BaseRowDelta)
2023-09-08 09:50:43,893 INFO [org.apa.ice.BaseMetastoreTableOperations] (pool-6-thread-1) Refreshing table metadata from new version: s3a://datalake/lakehouse/dbz_mysql_ducdn_icebergg_mydb_test_user/metadata/00001-a98c514c-c0e1-43ec-91c0-edf064c6f297.metadata.json
2023-09-08 09:50:44,018 INFO [org.apa.ice.met.LoggingMetricsReporter] (pool-6-thread-1) Received metrics report: CommitReport{tableName=iceberg.default.dbz_mysql_ducdn_icebergg_mydb_test_user, snapshotId=5215099857656289668, sequenceNumber=1, operation=overwrite, commitMetrics=CommitMetricsResult{totalDuration=TimerResult{timeUnit=NANOSECONDS, totalDuration=PT0.615007354S, count=1}, attempts=CounterResult{unit=COUNT, value=1}, addedDataFiles=CounterResult{unit=COUNT, value=1}, removedDataFiles=null, totalDataFiles=CounterResult{unit=COUNT, value=1}, addedDeleteFiles=CounterResult{unit=COUNT, value=1}, addedEqualityDeleteFiles=CounterResult{unit=COUNT, value=1}, addedPositionalDeleteFiles=null, removedDeleteFiles=null, removedEqualityDeleteFiles=null, removedPositionalDeleteFiles=null, totalDeleteFiles=CounterResult{unit=COUNT, value=1}, addedRecords=CounterResult{unit=COUNT, value=3}, removedRecords=null, totalRecords=CounterResult{unit=COUNT, value=3}, addedFilesSizeInBytes=CounterResult{unit=BYTES, value=6372}, removedFilesSizeInBytes=null, totalFilesSizeInBytes=CounterResult{unit=BYTES, value=6372}, addedPositionalDeletes=null, removedPositionalDeletes=null, totalPositionalDeletes=CounterResult{unit=COUNT, value=0}, addedEqualityDeletes=CounterResult{unit=COUNT, value=3}, removedEqualityDeletes=null, totalEqualityDeletes=CounterResult{unit=COUNT, value=3}}, metadata={iceberg-version=Apache Iceberg 1.3.0 (commit 7dbdfd33a667a721fbb21c7c7d06fec9daa30b88)}}
2023-09-08 09:50:44,018 INFO [io.deb.ser.ice.tab.IcebergTableOperator] (pool-6-thread-1) Committed 3 events to table! s3a://datalake/lakehouse/dbz_mysql_ducdn_icebergg_mydb_test_user
2023-09-08 09:50:44,021 WARN [io.deb.ser.ice.IcebergUtil] (pool-6-thread-1) Table not found: default.dbz_mysql_ducdn_icebergg
2023-09-08 09:50:44,022 INFO [io.deb.emb.EmbeddedEngine] (pool-6-thread-1) Stopping the task and engine
2023-09-08 09:50:44,022 INFO [io.deb.con.com.BaseSourceTask] (pool-6-thread-1) Stopping down connector
2023-09-08 09:50:44,107 INFO [com.git.shy.mys.bin.BinaryLogClient] (blc-keepalive-10.159.19.102:3306) threadExecutor is shut down, terminating keepalive thread
2023-09-08 09:50:44,108 INFO [io.deb.pip.ChangeEventSourceCoordinator] (debezium-mysqlconnector-ducdn_icebergg-change-event-source-coordinator) Finished streaming
2023-09-08 09:50:44,108 INFO [io.deb.con.mys.MySqlStreamingChangeEventSource] (blc-10.159.19.102:3306) Stopped reading binlog after 0 events, last recorded offset: {transaction_id=null, ts_sec=1694166637, file=binlog.000007, pos=0, server_id=1, event=1}
2023-09-08 09:50:44,111 INFO [io.deb.jdb.JdbcConnection] (pool-18-thread-1) Connection gracefully closed
2023-09-08 09:50:44,111 INFO [org.apa.kaf.cli.pro.KafkaProducer] (pool-6-thread-1) [Producer clientId=ducdn_icebergg-schemahistory] Closing the Kafka producer with timeoutMillis = 30000 ms.
2023-09-08 09:50:44,113 INFO [org.apa.kaf.com.met.Metrics] (pool-6-thread-1) Metrics scheduler closed
2023-09-08 09:50:44,114 INFO [org.apa.kaf.com.met.Metrics] (pool-6-thread-1) Closing reporter org.apache.kafka.common.metrics.JmxReporter
2023-09-08 09:50:44,114 INFO [org.apa.kaf.com.met.Metrics] (pool-6-thread-1) Metrics reporters closed
2023-09-08 09:50:44,114 INFO [org.apa.kaf.com.uti.AppInfoParser] (pool-6-thread-1) App info kafka.producer for ducdn_icebergg-schemahistory unregistered
2023-09-08 09:50:44,115 INFO [org.apa.kaf.con.sto.FileOffsetBackingStore] (pool-6-thread-1) Stopped FileOffsetBackingStore
2023-09-08 09:50:44,116 ERROR [io.deb.ser.ConnectorLifecycle] (pool-6-thread-1) Connector completed: success = 'false', message = 'Stopping connector after error in the application's handler method: Complex nested array types are not supported, array[struct], field tableChanges', error = 'java.lang.RuntimeException: Complex nested array types are not supported, array[struct], field tableChanges': java.lang.RuntimeException: Complex nested array types are not supported, array[struct], field tableChanges
at io.debezium.server.iceberg.IcebergChangeEvent$JsonSchema.icebergSchema(IcebergChangeEvent.java:288)
at io.debezium.server.iceberg.IcebergChangeEvent$JsonSchema.valueSchemaFields(IcebergChangeEvent.java:227)
at io.debezium.server.iceberg.IcebergChangeEvent$JsonSchema.icebergSchema(IcebergChangeEvent.java:239)
at io.debezium.server.iceberg.IcebergChangeEvent.icebergSchema(IcebergChangeEvent.java:59)
at io.debezium.server.iceberg.IcebergChangeConsumer.lambda$loadIcebergTable$1(IcebergChangeConsumer.java:192)
at java.base/java.util.Optional.orElseGet(Unknown Source)
at io.debezium.server.iceberg.IcebergChangeConsumer.loadIcebergTable(IcebergChangeConsumer.java:188)
at io.debezium.server.iceberg.IcebergChangeConsumer.handleBatch(IcebergChangeConsumer.java:166)
at io.debezium.embedded.ConvertingEngineBuilder.lambda$notifying$2(ConvertingEngineBuilder.java:101)
at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:912)
at io.debezium.embedded.ConvertingEngineBuilder$2.run(ConvertingEngineBuilder.java:229)
at io.debezium.server.DebeziumServer.lambda$start$1(DebeziumServer.java:170)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
2023-09-08 09:50:44,137 INFO [io.deb.ser.DebeziumServer] (main) Received request to stop the engine
2023-09-08 09:50:44,138 INFO [io.deb.emb.EmbeddedEngine] (main) Stopping the embedded engine
2023-09-08 09:50:44,164 INFO [io.quarkus] (main) debezium-server-iceberg-dist stopped in 0.046s
- I find that the test_user table is ingested but the student table is not yet. In the log: 2023-09-08 09:50:44,021 WARN [io.deb.ser.ice.IcebergUtil] (pool-6-thread-1) Table not found: default.dbz_mysql_ducdn_icebergg
- I think it is trying to create a table default.dbz_mysql_ducdn_icebergg, but that table doesn't exist in MySQL, so it produces the error `java.lang.RuntimeException: Complex nested array types are not supported, array[struct], field tableChanges`.
- I still don't know how to solve this problem. Can you give me some suggestions?
- Besides, I have an additional question: does Debezium Server support Oracle and MySQL? I don't see any mention of it in the manual.
@night2201 Correct, it is failing while creating the default.dbz_mysql_ducdn_icebergg table, because this table has a complex-type field: tableChanges (a nested array type). This type is currently not supported by the consumer.
Could you share the DDL of this table/field? It would be useful for future reference, in case someone wants to work on adding support for this type.
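For context, the tableChanges field comes from Debezium's schema-change events, which carry an array of structs describing the affected table. An illustrative fragment (field values are made up; the exact shape depends on the connector version) looks roughly like:

```json
{
  "payload": {
    "ddl": "ALTER TABLE test_user ADD COLUMN test_addcol INT",
    "tableChanges": [
      {
        "type": "ALTER",
        "id": "\"mydb\".\"test_user\"",
        "table": {
          "primaryKeyColumnNames": ["user_id"],
          "columns": [
            { "name": "user_id", "typeName": "VARCHAR", "optional": false }
          ]
        }
      }
    ]
  }
}
```

The array-of-struct shape of tableChanges is what maps to the unsupported array[struct] Iceberg type in the error above.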
Yes, Oracle and MySQL are supported; pretty much all Debezium connectors are supported.
@ismailsimsek, my two tables don't have any complex or nested array types, so I don't understand why it shows the above error.
@night2201 Is event flattening part of your config?
# do event flattening. unwrap message!
debezium.transforms=unwrap
debezium.transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
debezium.transforms.unwrap.add.fields=op,table,source.ts_ms,db
debezium.transforms.unwrap.delete.handling.mode=rewrite
debezium.transforms.unwrap.drop.tombstones=true
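With this transform, each change event is unwrapped from the Debezium envelope into a flat record whose fields match the Iceberg table schema shown earlier. An illustrative flattened record (values are made up) would look roughly like:

```json
{
  "user_id": "u1",
  "name": "alice",
  "created_at": 1694166000000,
  "updated_at": 1694166637000,
  "__op": "u",
  "__table": "test_user",
  "__source_ts_ms": 1694166637000,
  "__db": "mydb",
  "__deleted": "false"
}
```

The __op, __table, __source_ts_ms, and __db fields are added by the add.fields setting, and __deleted comes from delete.handling.mode=rewrite.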
Yes, I added the above configs. I also tried pushing the data to Kafka and saw that the events are flattened. This is my application.properties:
# Use iceberg sink
debezium.sink.type=iceberg
# Iceberg sink config
debezium.sink.iceberg.table-prefix=dbz_mysql_
debezium.sink.iceberg.upsert=true
debezium.sink.iceberg.upsert-keep-deletes=false
debezium.sink.iceberg.write.format.default=parquet
debezium.sink.iceberg.catalog-name=iceberg
# hive metastore catalog
debezium.sink.iceberg.type=hive
debezium.sink.iceberg.uri=thrift://xx.x.x.x:9083
debezium.sink.iceberg.clients=5
debezium.sink.iceberg.warehouse=s3a://datalake
debezium.sink.iceberg.catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO
debezium.sink.iceberg.engine.hive.enabled=true
debezium.sink.iceberg.iceberg.engine.hive.enabled=true
debezium.sink.iceberg.hive.metastore.table.owner=admin
debezium.sink.iceberg.hive.other.configs=admin
# S3 config
debezium.sink.iceberg.fs.defaultFS=s3a://datalake
debezium.sink.iceberg.com.amazonaws.services.s3.enableV4=true
debezium.sink.iceberg.com.amazonaws.services.s3a.enableV4=true
debezium.sink.iceberg.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
debezium.sink.iceberg.fs.s3a.access.key=minioadmin
debezium.sink.iceberg.fs.s3a.secret.key=minioadmin
debezium.sink.iceberg.fs.s3a.endpoint=http://xx.xx.x.x:9003
debezium.sink.iceberg.fs.s3a.path.style.access=true
debezium.sink.iceberg.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
# enable event schemas - mandatory
debezium.format.value.schemas.enable=true
debezium.format.key.schemas.enable=true
debezium.format.value=json
debezium.format.key=json
# mysql
debezium.source.connector.class=io.debezium.connector.mysql.MySqlConnector
debezium.source.offset.storage.file.filename=/tmp/offset1
debezium.source.offset.flush.interval.ms=0
debezium.source.database.hostname=xx.x.x.xx
debezium.source.database.port=3306
debezium.source.database.user=root
debezium.source.database.password=root
#debezium.source.database.dbname=mydb
debezium.source.database.server.name=mysql_cdc
debezium.source.database.include.list=mydb
debezium.source.topic.prefix=ducdn_icebergg
debezium.source.database.server.id=184054
debezium.source.schema.history.internal.kafka.bootstrap.servers=broker:29092
debezium.source.schema.history.internal.kafka.topic=ducdn_schema_changes.mydb
debezium.source.include.schema.changes=true
# do event flattening. unwrap message!
debezium.transforms=unwrap
debezium.transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
debezium.transforms.unwrap.add.fields=op,table,source.ts_ms,db
debezium.transforms.unwrap.delete.handling.mode=rewrite
debezium.transforms.unwrap.drop.tombstones=true
# ############ SET INTERVAL TIME ############
debezium.sink.batch.batch-size-wait=MaxBatchSizeWait
#debezium.sink.batch.metrics.snapshot-mbean=debezium.postgres:type=connector-metrics,context=snapshot,server=testc
#debezium.sink.batch.metrics.streaming-mbean=debezium.postgres:type=connector-metrics,context=streaming,server=testc
debezium.source.connector.class=io.debezium.connector.mysql.MySqlConnector
debezium.source.max.batch.size=10000
debezium.source.max.queue.size=100000
debezium.sink.batch.batch-size-wait.max-wait-ms=120000
debezium.sink.batch.batch-size-wait.wait-interval-ms=10000
# ############ SET LOG LEVELS ############
quarkus.log.level=INFO
quarkus.log.console.json=false
# hadoop, parquet
quarkus.log.category."org.apache.hadoop".level=WARN
quarkus.log.category."org.apache.parquet".level=WARN
# Ignore messages below warning level from Jetty, because it's a bit verbose
quarkus.log.category."org.eclipse.jetty".level=WARN
quarkus.log.category."io.debezium.server.iceberg.batchsizewait".level=DEBUG
Found the issue. This setting is causing the error: debezium.source.include.schema.changes=true
Could you please try it with false?
The latest release now supports complex types; if you use the latest version, you should not run into this issue.
@ismailsimsek I set debezium.source.include.schema.changes=false and it resolved my problem. But with that setting, when the schema of a table in MySQL changes, Debezium doesn't capture the schema change; it only captures insert, update, and delete operations.
I would also like to ask: does debezium-server-iceberg currently support data partitioning?
Thank you very much for your help, have a nice day!!
1) Schema changes are applied using the event schema; field additions are applied to the destination table automatically.
2) Data partitioning can be added to the destination Iceberg table manually; the debezium-server-iceberg consumer does not add any partitioning itself.
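For example, a partition field could be added to the destination table manually from an engine such as Spark SQL with Iceberg's SQL extensions enabled. This is a sketch under that assumption; the table name matches the one created in the logs above:

```sql
-- Manually add a daily partition field on the source timestamp column
-- (requires Spark SQL with the Iceberg SQL extensions configured)
ALTER TABLE iceberg.default.dbz_mysql_ducdn_icebergg_mydb_test_user
  ADD PARTITION FIELD days(__source_ts_ms);
```

New data files written after this change would be partitioned by day; existing files are not rewritten.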
@ismailsimsek Thanks a lot for your help!!