Through a lot of trial and error, I came up with these configuration files:
# connect-standalone.properties
name=local-jdbc-sink-connector
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
dialect.name=PostgreSqlDatabaseDialect
connection.url=jdbc:postgresql://postgres:5432/db
connection.password=postgres
connection.user=postgres
auto.create=true
auto.evolve=true
topics=<topics>
tasks.max=1
insert.mode=insert
delete.enabled=false
pk.mode=none
consumer.override.auto.offset.reset=latest
# worker.properties
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true
key.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=false
bootstrap.servers=localhost:9092
group.id=jdbc-sink-connector-worker
worker.id=jdbc-sink-worker-1
consumer.override.auto.offset.reset=latest
connector.client.config.override.policy=All
offset.storage.topic=connect-offsets
offset.storage.replication.factor=1
config.storage.topic=connect-configs
config.storage.replication.factor=1
status.storage.topic=connect-status
status.storage.replication.factor=1
There were a few key things that had to change:

- insert.mode set to insert instead of upsert (avoids requiring a primary key).
- pk.mode set to none (my database has a SERIAL primary key).
- pk.fields removed.
- delete.enabled=false.
- value.converter.schemas.enable set to true.
- The value.converter.schema.registry.url property removed.
- consumer.override.auto.offset.reset=latest added to both files.
- connector.client.config.override.policy=All added to worker.properties so that default configurations can be overridden.

Additionally, my JSON messages to Kafka needed to have a format similar to the following:
{
"schema": { "..." },
"payload": { "..." }
}
If the schema is not included with the payload, the JDBC Sink Connector cannot parse the JSON values and write them to the correct columns in the database.
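For example, a message destined for a table with name and email columns (hypothetical column names, purely for illustration) would look like this:
{
  "schema": {
    "type": "struct",
    "fields": [
      { "type": "string", "optional": false, "field": "name" },
      { "type": "string", "optional": true, "field": "email" }
    ],
    "optional": false
  },
  "payload": {
    "name": "Jane Doe",
    "email": "jane@example.com"
  }
}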
I've been going in circles with this for a few days now. I'm sending data to Kafka using kafkajs. Each time I produce a message, I assign a UUID to the message.key value, and the message.value is set to an event object that is then stringified, roughly like the sketch below.
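A minimal sketch of the producing side (the topic name and event shape are placeholders, not my actual ones):

const { Kafka } = require('kafkajs');
const { randomUUID } = require('crypto');

const kafka = new Kafka({ clientId: 'my-app', brokers: ['localhost:9092'] });
const producer = kafka.producer();

async function produceEvent(event) {
  await producer.connect();
  await producer.send({
    topic: 'my-topic', // placeholder topic name
    messages: [
      {
        key: randomUUID(),            // UUID assigned to message.key
        value: JSON.stringify(event), // event object stringified into message.value
      },
    ],
  });
  await producer.disconnect();
}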
When I start the connector with connect-standalone worker.properties connect-standalone.properties, it spins up and connects to PostgreSQL with no issue. However, when I produce an event, the connector fails with an error.
I've been going back and forth trying to get it to read my messages, but I'm not sure what is going wrong. One solution just leads to another error, and the solution for the new error leads back to the previous error. What is the correct configuration? How do I resolve this?