I'm reading data from an index pattern that stores highly nested JSONs as a flattened text field. Looking at the logs, my Kafka Connect worker keeps throwing this exception:
org.apache.kafka.connect.errors.DataException: {field} is not a valid field name
The connector task does not stop, however; it delivers the messages it can process and skips the ones it can't.
I'm currently using JsonConverter, and I have tried other converters such as JsonSchemaConverter and AvroConverter; each ran into different problems (most likely because the schema changes drastically from message to message). My converter and error-handling settings are sketched below.
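In case it helps, the relevant part of my connector config looks roughly like this. The connector class and topic prefix are placeholders (not the actual values), and the errors.* settings are what I believe make the task skip bad records instead of failing:

    # Elasticsearch source connector (class name is a placeholder)
    connector.class=<elasticsearch-source-connector-class>
    topic.prefix=es_
    # Converters currently in use
    key.converter=org.apache.kafka.connect.storage.StringConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter.schemas.enable=true
    # Error handling: tolerate and log bad records rather than stopping the task
    errors.tolerance=all
    errors.log.enable=true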
Please note that the field names mentioned in the exceptions may not be present in some of the documents being read from the source Elasticsearch. I'm guessing that's what causes the issue, but I don't know how to make the connector tolerate the absence of those fields.