thank you for reporting this!
I feel this might be the problem, as the replication factor defaults to 3, which eventually fails while creating the dead letter queue topic.
"errors.deadletterqueue.topic.replication.factor": 1
I haven't tested this, but if this is the problem then it should be specified in the docs as well.
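For reference, on a single-broker setup the error-handling part of the connector config would look something like the sketch below. The errors.* keys are standard Kafka Connect sink options; the DLQ topic name is just a placeholder, not a recommendation:
"errors.tolerance": "all",
"errors.deadletterqueue.topic.name": "dlq-questdb",
"errors.deadletterqueue.topic.replication.factor": 1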
Indeed, that could very well be the cause. The test we have for this does set the replication factor to 1, since it only starts a single broker.
While we cannot assume the topology of your Kafka deployment, I agree the docs could warn about this. So a good suggestion, thanks!
I tested this by sending strings into a UUID field, and the Kafka QuestDB connector still throws an uncaught exception until the offset is manually reset.
Please help us sort this out, as this issue blocks the entire data processing.
I will have a deeper look later this week
hello @RishabhAcodes: I can confirm it's not working in the latest connector release. I have a WIP with a fix. Would you be able to test an unreleased snapshot version of the connector?
Sure, would love to @jerrinot
Any suggestions on how I can test this unreleased snapshot?
@RishabhAcodes: Excellent, that would be quite helpful! There are 2 options to test it. If you are a Java developer, building the connector yourself from source code might be quite simple:
git clone https://github.com/questdb/kafka-questdb-connector.git
git checkout jh_dlq_slow_mode
mvn clean install -DskipTests
You will find kafka-questdb-connector-0.14-SNAPSHOT-bin.zip inside connector/target - that's the connector zip. If the steps above are too complex or you do not have Maven installed, then you can just grab the zip file from: https://drive.google.com/file/d/1GvHvDxHhy0OsOSin37hYqL8JxXx-9Bga/view?usp=sharing
You install the connector from the zip as if it were a regular release. Make sure to delete the old version first.
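As an illustration only, installation can look roughly like this - the plugin directory below is an assumption, adjust it to wherever your Connect worker's plugin.path points:
# remove the previously installed connector version first (path is assumed)
rm -rf /opt/kafka/plugins/kafka-questdb-connector*
# unpack the snapshot zip into the plugin path
unzip kafka-questdb-connector-0.14-SNAPSHOT-bin.zip -d /opt/kafka/plugins/
# then restart the Kafka Connect worker so it picks up the new plugin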
@RishabhAcodes I realized the snapshot version only covers some cases; not all types of issues are handled yet. I'll update it soon.
Sure. Keep me in the loop :)
@RishabhAcodes you can try this snapshot: https://drive.google.com/file/d/1Bsd57jsGYN5a5O24SG36EFQEC_euaIRR/view?usp=sharing
edit: link updated
Caveats: there is still an issue when a message contains an unsupported field (e.g. an array), has a null timestamp, etc. In general: when the connector cannot send it to the QuestDB server at all. Ideally, such a message should go to the DLQ too (when configured), but that's currently not the case. The fix is in progress, but depends on https://github.com/questdb/questdb/pull/4936
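To illustrate the remaining gap, a payload roughly like this (field names are made up) would still crash the task rather than land in the DLQ, because the connector cannot send it to QuestDB at all:
{"device_id": "abc", "readings": [1, 2, 3], "timestamp": null}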
We're registering QuestDB with Kafka Connect via curl using the following:
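(Our exact command isn't reproduced here; as an illustration only, the registration request is roughly of this shape - the connector name, host, and topic names below are placeholders, not our actual values:)
# illustrative sketch only - all values are placeholders
curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "questdb-sink",
  "config": {
    "connector.class": "io.questdb.kafka.QuestDBSinkConnector",
    "topics": "main-topic",
    "host": "questdb:9009",
    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "dlq-topic",
    "errors.deadletterqueue.topic.replication.factor": 1
  }
}'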
Our Kafka broker already has the main topic and a DLQ topic created.
Expected: When the producer sends an incorrectly formatted message (probably an incorrect timestamp in our case), Kafka Connect should reject it, log the error, send the message to the DLQ and continue processing as normal.
Actual: When the producer sends an incorrect timestamp format, Kafka Connect logs an
Uncaught exception:...
(QuestDB connector error here), disconnects the node and shuts down altogether.
Current fix: Reset the offset manually on the Kafka broker and restart Kafka Connect to bring the QuestDB connector back up, which then starts processing as normal.
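For completeness, a sketch of that manual workaround, assuming the sink's consumer group follows the default connect-<connector name> naming and the connector is called questdb-sink (both assumptions; the connector/worker must be stopped before the reset):
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group connect-questdb-sink --topic main-topic \
  --reset-offsets --to-latest --execute
# then restart the Kafka Connect worker so the sink resumes from the new offset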