sarthaksingh-tomar opened 1 week ago
This could be related to acquiring table locks; you can try this Debezium configuration:
snapshot.locking.mode: "none"
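In the lightweight connector this goes into config.yml alongside the other connector properties. A minimal sketch (surrounding connection settings omitted; verify the exact keys against the Debezium MySQL connector docs):

```yaml
# Sketch only -- skip acquiring table locks during the initial snapshot.
# Other required properties (connector.class, database.*, etc.) omitted.
snapshot.mode: "initial"
snapshot.locking.mode: "none"
```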
@subkanthi Already using that to avoid locking, but the connector is still failing. Connector parameter configs:
database.allowPublicKeyRetrieval: "true"
snapshot.mode: "when_needed"
snapshot.locking.mode: "none"
offset.flush.interval.ms: 5000
connector.class: "io.debezium.connector.mysql.MySqlConnector"
offset.storage: "io.debezium.storage.jdbc.offset.JdbcOffsetBackingStore"
offset.storage.jdbc.offset.table.name: "altinity_sink_connector.replica_source_info"
offset.storage.jdbc.url: "jdbc:clickhouse://#########:8123/altinity_sink_connector"
offset.storage.jdbc.user: "#####"
offset.storage.jdbc.password: "#######"
offset.storage.jdbc.offset.table.ddl: "CREATE TABLE if not exists altinity_sink_connector.replica_source_info
(
  id String,
  offset_key String,
  offset_val String,
  record_insert_ts DateTime,
  record_insert_seq UInt64,
  _version UInt64 MATERIALIZED toUnixTimestamp64Nano(now64(9))
)
ENGINE = ReplacingMergeTree(_version) ORDER BY offset_key SETTINGS index_granularity = 8192"
offset.storage.jdbc.offset.table.delete: "select * from altinity_sink_connector.replica_source_info"
offset.storage.jdbc.offset.table.select: "SELECT id, offset_key, offset_val FROM altinity_sink_connector.replica_source_info FINAL ORDER BY record_insert_ts, record_insert_seq"
schema.history.internal: "io.debezium.storage.jdbc.history.JdbcSchemaHistory"
schema.history.internal.jdbc.schema.history.table.name: "altinity_sink_connector.replicate_schema_history"
schema.history.internal.schema.history.table.name: "altinity_sink_connector.replicate_schema_history"
schema.history.internal.jdbc.url: "jdbc:clickhouse://########:8123/altinity_sink_connector"
schema.history.internal.jdbc.user: "#####"
schema.history.internal.jdbc.password: "########"
schema.history.internal.jdbc.schema.history.table.ddl: "CREATE TABLE if not exists altinity_sink_connector.replicate_schema_history
(
  id VARCHAR(36) NOT NULL,
  history_data VARCHAR(65000),
  history_data_seq INTEGER,
  record_insert_ts TIMESTAMP NOT NULL,
  record_insert_seq INTEGER NOT NULL
) ENGINE = ReplacingMergeTree(record_insert_seq) ORDER BY id"
enable.snapshot.ddl: "true"
persist.raw.bytes: "false"
auto.create.tables: "true"
database.connectionTimeZone: "UTC"
restart.event.loop: "false"
restart.event.loop.timeout.period.secs: "3000"
buffer.max.records: "10000"
Hello,
I am trying clickhouse-sink-connector-lightweight to replicate data from MariaDB to ClickHouse, but it is failing with this exception during the snapshot.
I am using the default config with the MariaDB settings shown below:
https://github.com/Altinity/clickhouse-sink-connector/blob/develop/sink-connector-lightweight/docker/config.yml
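For context, Debezium's MySQL/MariaDB snapshot and streaming assume a row-based binary log on the source server. A generic my.cnf sketch of the settings Debezium expects (an illustration only, not the poster's actual MariaDB configuration):

```ini
# my.cnf sketch -- generic source-server settings Debezium expects.
# Illustration only; verify against the Debezium MySQL connector docs.
[mariadb]
server_id        = 1          # must be unique within the replication topology
log_bin          = mysql-bin  # enable the binary log
binlog_format    = ROW        # Debezium requires row-based events
binlog_row_image = FULL       # full before/after images of changed rows
expire_logs_days = 7          # retain enough history across connector restarts
```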