The Camel type converter throws the following exception when it encounters a null value in the payload.
I am using the camel-file-sink-connector to dump Kafka messages to files.
The Kafka messages come from a PostgreSQL database via the Debezium source connector.
A database DELETE operation causes Debezium to generate two Kafka records:
A record that contains "op": "d", the before-state of the row, and some other fields.
A tombstone record that has the same key as the deleted row and a null value. This record is a marker for Apache Kafka: it indicates that log compaction can remove all records that have this key.
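To illustrate, for a deleted row the two records look roughly like this (the envelope fields op/before/after/source/ts_ms follow Debezium's documented event format; the table and column values here are made up):

```
# delete event: the value is the Debezium envelope with op "d" and the before-state
key:   {"id": 42}
value: {"op": "d", "before": {"id": 42, "name": "alice"}, "after": null, "source": {...}, "ts_ms": ...}

# tombstone: same key, null value
key:   {"id": 42}
value: null
```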
This null value needs to be handled by the converter, but instead it fails with:
connect_1 | Caused by: org.apache.kafka.connect.errors.DataException: CamelTypeConverter was not able to converter value null to target type of String
connect_1 | at org.apache.camel.kafkaconnector.transforms.CamelTypeConverterTransform.convertValueWithCamelTypeConverter(CamelTypeConverterTransform.java:57)
connect_1 | at org.apache.camel.kafkaconnector.transforms.CamelTypeConverterTransform.apply(CamelTypeConverterTransform.java:47)
connect_1 | at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
For this kind of scenario you could write your own converter: the Camel type converter for File was not designed to deal with Debezium events. Camel-Kafka-connector is built on top of Camel, but Camel has its own type-conversion mechanism.
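In the meantime, one workaround (assuming Kafka Connect 2.6 or later, and that dropping tombstones is acceptable for your sink) is to filter the null-value records out before the Camel transform runs, using Kafka Connect's built-in Filter SMT with the RecordIsTombstone predicate. A sketch of the relevant sink connector properties:

```
# Drop tombstone (null-value) records before any other transform sees them.
# If you also use CamelTypeConverterTransform, list dropTombstones first,
# since transforms run in the order given.
transforms=dropTombstones
transforms.dropTombstones.type=org.apache.kafka.connect.transforms.Filter
transforms.dropTombstones.predicate=isTombstone
predicates=isTombstone
predicates.isTombstone.type=org.apache.kafka.connect.transforms.predicates.RecordIsTombstone
```

Alternatively, if you don't need log compaction to clean up deleted keys, Debezium can be told not to emit tombstones at all by setting tombstones.on.delete=false on the source connector.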