Closed Sargastico closed 3 years ago
Thanks for reporting, @Sargastico, and sorry for the delay; my GitHub notifications didn't come through to my email inbox. Is this still relevant or urgent for you? I will try to look at it over the next week.
@berndruecker It's still relevant; I'd appreciate it if you could figure out the problem. That was a while ago, though, and I don't know if something has changed in the meantime.
Hi @Sargastico - sorry for the long delay. I am looking at it now!
You sent in an empty message, which means the connector can't derive the information it needs to route the message to Zeebe (such as the message name or correlation key).
What would be the expected behavior in this case? I think the connector can't handle empty messages, so throwing an exception might be just right.
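For context, the sink connector needs a payload it can extract routing information from. A minimal sketch of such a message, assuming the connector's JSON Path configuration points at fields like these (the field names here are illustrative; match them to your own `message.path.*` settings):

```json
{
  "messageName": "payment-received",
  "correlationKey": "order-123",
  "variables": {
    "amount": 100
  }
}
```

An empty record gives the connector nothing to extract, which is why it fails.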
Note that you can influence failure handling by configuring what Kafka Connect should do; see https://www.confluent.io/blog/kafka-connect-deep-dive-error-handling-dead-letter-queues/. There you could define to ignore such messages or send them to a dead letter queue. Is this what you were looking for?
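For reference, the relevant Kafka Connect sink connector properties from that article look roughly like this (the DLQ topic name is just an example). One caveat: `errors.tolerance` covers errors in the converter and transform stages; an exception thrown inside the sink task itself may still fail the task.

```properties
# Tolerate record-level errors instead of failing the task
errors.tolerance=all
# Log error details for inspection
errors.log.enable=true
errors.log.include.messages=true
# Route failed records to a dead letter queue topic (name is an example)
errors.deadletterqueue.topic.name=dlq-zeebe-sink
errors.deadletterqueue.context.headers.enable=true
```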
The above article is also linked in the readme: https://github.com/camunda-community-hub/kafka-connect-zeebe#configuring-error-handling-of-kafka-connect-eg-logging-or-dead-letter-queues. I would close this issue - feel free to reopen if you have good input on how the connector itself should be improved!
Also, I'm always interested if you can share some info about your use case, maybe in the forum: https://forum.camunda.io/?
Best Bernd
Hi, while testing zeebe connector for kafka, I found something that was causing me some trouble, and maybe is some kind of bug or undesired behavior.
If an empty message is published to the Kafka topic, the sink connector consumes it and raises the following exception:
Running on Kubernetes from GCP (GKE):
Followed by:
The logs are from Kafka Connect. The connector does not recover; it stays "Degraded" (the status shown in Confluent Control Center) and a fresh deploy is needed.
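One producer-side workaround (a sketch, not part of the connector) is to validate messages before publishing, so empty payloads never reach the topic. The `publish` callable and the field names checked here are stand-ins; adapt them to your producer and your connector's JSON Path configuration:

```python
import json


def is_routable(payload: bytes) -> bool:
    """Return True only if the payload is non-empty JSON containing
    the fields the sink connector needs to route the message."""
    if not payload:  # None or empty bytes: the case that broke the connector
        return False
    try:
        msg = json.loads(payload)
    except ValueError:
        return False
    # Field names are illustrative; match them to your connector config.
    return isinstance(msg, dict) and "messageName" in msg and "correlationKey" in msg


def safe_publish(publish, payload: bytes) -> bool:
    """Publish only routable payloads; return whether the message was sent."""
    if is_routable(payload):
        publish(payload)
        return True
    return False
```

This keeps the sink connector from ever seeing the records it cannot handle, independent of any Kafka Connect error-handling configuration.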