rmoff opened this issue 5 years ago (status: Open)
Can anyone verify how this relates to the different database dialects?
Apart from the option of a bytea column at the database level, which gets created automatically when using `"key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter"`, I did a few tests after replacing null characters in the STRING case of `maybeBindPrimitive` in the `GenericDatabaseDialect`. The replacement would apply to every String value, though, which isn't too bad, as Postgres doesn't accept `\0` values in text fields anyway.
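A minimal standalone sketch of that idea: strip `\u0000` from String values before binding them with `setString()`. The class name, JDBC URL, credentials, and table below are placeholders I made up for illustration; the real change would sit in the STRING branch of `maybeBindPrimitive`.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Standalone illustration of the workaround described above: remove NUL
// characters from String values before they are bound with setString().
// Class name, JDBC URL, credentials, and table are made-up placeholders.
public class NullCharStripSketch {

  // Postgres text/varchar columns reject the NUL character, so drop it.
  static String stripNulChars(String value) {
    return value == null ? null : value.replace("\u0000", "");
  }

  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(
             "jdbc:postgresql://localhost:5432/demo", "demo", "demo");
         PreparedStatement stmt = conn.prepareStatement(
             "INSERT INTO demo_table (record_key) VALUES (?)")) {
      // e.g. a binary key that was forced through the StringConverter
      String rawKey = "\u0000\u0000\u0001key";
      stmt.setString(1, stripNulChars(rawKey));
      stmt.executeUpdate();
    }
  }
}
```

In the dialect itself, the equivalent change is just wrapping the value handed to `setString` in the STRING case of `maybeBindPrimitive`.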
This works fine in Postgres, resulting in unreadable but working record_keys:
Not sure about others, though.
Following on from https://github.com/confluentinc/ksql/issues/2250, there seems to be a problem with the JDBC Sink connector.
The topic's data is written from KSQL, with an Avro value and a binary key. For other connectors (e.g. Elasticsearch) it is sufficient to use `"key.converter": "org.apache.kafka.connect.storage.StringConverter"`, and the connector is then happy to either ignore the key's value or use it in the sink as-is. With the JDBC Sink connector and `"pk.mode": "none"`, the connector aborts with an error. Full details follow below.
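First, as a rough sketch of the kind of sink configuration involved (the connector name, topic, and connection details here are illustrative placeholders, not the actual configs used in the tests below):

```json
{
  "name": "jdbc-sink-binary-key-test",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "MY_KSQL_TOPIC",
    "connection.url": "jdbc:mysql://mysql:3306/demo",
    "connection.user": "demo",
    "connection.password": "demo",
    "auto.create": "true",
    "pk.mode": "none",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081"
  }
}
```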
Sample Kafka message key:
Sample Kafka message value:
Streamed to Elasticsearch:
Works fine. The key is taken as a String and used as the doc id. The point is less that the key is handled than that it doesn't fubar the connector.
Now in the JDBC sink:
Log:
Note that the table is created in MySQL, but not populated.
Drop the table, and then switch to `"pk.mode": "kafka"`.
This works.
But we now have a bunch of extraneous columns (the Kafka message coordinates) in our target table:
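For reference, the `kafka` variant only changes the pk settings; if I remember the connector defaults correctly, the extra coordinate columns come from the default `pk.fields` trio, so the relevant part of the config is roughly:

```json
{
  "pk.mode": "kafka",
  "pk.fields": "__connect_topic,__connect_partition,__connect_offset"
}
```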