Closed cyberjar09 closed 1 year ago
The primary key of the database should map to the key of the Kafka record.
Hey, thanks for the response. What I'm trying to achieve, though, is to include the PK of the DB row in the record value as well. It only appears in the key, and the key is not written to the JDBC sink.
> include the PK of the DB row in the record value as well.
Use a transform.
> the key is not written to the JDBC sink
Are you sure about that? It *is* the primary key when you use `pk.mode=record_key`. Refer to https://rmoff.net/2021/03/12/kafka-connect-jdbc-sink-deep-dive-working-with-primary-keys/
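For reference, a minimal sketch of a JDBC sink configured with `pk.mode=record_key` (the connector name, topic, connection URL, and the `id` column name are hypothetical placeholders, not from the thread):

```json
{
  "name": "jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "my_topic",
    "connection.url": "jdbc:postgresql://localhost:5432/mydb",
    "pk.mode": "record_key",
    "pk.fields": "id",
    "insert.mode": "upsert",
    "auto.create": "true"
  }
}
```

With a primitive record key, `pk.fields` names the single column the key is written to; with a struct key, it lists which key fields to use. Either way, the key does end up in the target table.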
I'm using `pk.mode=kafka`.
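For context, `pk.mode=kafka` makes the JDBC sink use the record's Kafka coordinates (topic, partition, offset) as the primary key; `pk.fields` then defaults to the trio `__connect_topic,__connect_partition,__connect_offset` but can be overridden. A hedged sketch (connection details and column names are placeholders):

```json
{
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "topics": "my_topic",
  "connection.url": "jdbc:postgresql://localhost:5432/mydb",
  "pk.mode": "kafka",
  "pk.fields": "kafka_topic,kafka_partition,kafka_offset",
  "auto.create": "true"
}
```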
Then change it? You asked me if it was possible to put the key in the database, and it is.
Hey @OneCricketeer, thanks for the response. My requirements unfortunately mean I need to keep track of the Kafka offset etc., so using `kafka` is what I must do. No matter; with the help of Robin Moffatt I was directed to
so this should solve my problem.
> keep track of the Kafka offset etc.
You can use the InsertField transform to add that offset, partition, etc. to the record value. That still leaves the possibility of using `pk.mode=record_key`.
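As a sketch of that combination (the transform alias and the field/column names are hypothetical), the sink config could copy the Kafka coordinates into the row via `InsertField$Value` while still taking the primary key from the record key:

```json
{
  "transforms": "addMeta",
  "transforms.addMeta.type": "org.apache.kafka.connect.transforms.InsertField$Value",
  "transforms.addMeta.topic.field": "kafka_topic",
  "transforms.addMeta.partition.field": "kafka_partition",
  "transforms.addMeta.offset.field": "kafka_offset",
  "pk.mode": "record_key",
  "pk.fields": "id"
}
```

Note that InsertField's `partition.field` and `offset.field` are only populated when the transform runs in a sink connector, which is the case here.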
Hi, I'm trying to find either native support, or support via transforms, for writing the Kafka record key into the sink payload.
I see that S3 has support for this (link), but I don't see any such option in the JDBC sink connector.
I appreciate the guidance; I'm new to this ecosystem, so please excuse the question if I'm overlooking something obvious =)