Closed: rodmccutcheon closed this issue 1 year ago.
Hi @rodmccutcheon - have you had any progress on this?
This looks like a backpressure issue to me: messages come in from Kafka quickly but then get backed up by message processing and by pushing them out over MQTT.
That Kafka warning is a result of the backpressure. The common/easy fix is to increase the `max.poll.interval.ms` consumer setting in Kafka.
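For a Spring Cloud Stream app, arbitrary Kafka consumer properties can be passed through the Kafka binder. A sketch, with illustrative values (the defaults and your binding names may differ; `max.poll.interval.ms` defaults to 5 minutes):

```yaml
spring:
  cloud:
    stream:
      kafka:
        binder:
          consumer-properties:
            # Allow up to 10 minutes between poll() calls before the
            # consumer is considered failed and the group rebalances.
            max.poll.interval.ms: 600000
            # Optionally fetch fewer records per poll so each batch
            # finishes well within the interval.
            max.poll.records: 100
```

Raising the interval buys time; lowering `max.poll.records` shortens each processing batch so the consumer polls more often.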
> Is there a way to programmatically resubscribe to mqtt or recover from this timeout? Should we be implementing a retry in the connectClient method?
Unless you are having connection issues, I don't think resubscription or connection retries are the problem. I'd expect focusing on message throughput to have a greater ROI.
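One hedged starting point for throughput: raise the consumer concurrency on the input binding so partitions are processed in parallel (the binding name `input` is an assumption, not taken from this issue):

```yaml
spring:
  cloud:
    stream:
      bindings:
        input:
          consumer:
            # Number of concurrent consumers for this binding;
            # effective only up to the topic's partition count.
            concurrency: 4
```

Concurrency beyond the number of partitions has no effect, so check the topic's partition count first.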
Let me know what the latest is on this issue.
Since I haven't heard back, I'm going to close out this issue. If anything remains, please feel free to re-open or file another issue. We'd be happy to help out!
We have a small microservice that reads from a Kafka topic and writes to MQTT, using Spring Cloud Stream. It works fine, but after some time we get the following warning and no further messages are published to MQTT:
There doesn't seem to be any warning/error from the MQTT client library.
Is there a way to programmatically resubscribe to mqtt or recover from this timeout?
Could we implement a custom health check for the Actuator that covers the MQTT subscription, so the pod would get automatically restarted by k8s? Something like:
where `mqtt` is the MQTT health component.
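Assuming a custom `mqtt` HealthIndicator bean is registered (the bean name and DOWN condition are assumptions, not shown in this issue), it could be wired into the liveness group that Kubernetes probes:

```yaml
management:
  endpoint:
    health:
      probes:
        enabled: true
      group:
        liveness:
          # "mqtt" is the name of a custom HealthIndicator bean that
          # reports DOWN when the MQTT client is disconnected.
          include: livenessState, mqtt
```

The pod's `livenessProbe` would then point at `/actuator/health/liveness`, and k8s would restart the pod when the indicator reports DOWN.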
EDIT: Here is the consumer code (OutputConfig class):
And Output class:
HiveMqMqttConfig:
config:
Should we be implementing a retry in the `connectClient` method?
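The `connectClient` method isn't shown here, but a generic retry-with-backoff helper that could wrap it might look like the sketch below (names and delays are illustrative; note the HiveMQ MQTT client also offers built-in reconnection via `automaticReconnectWithDefaultConfig()` on its client builder, which may be the simpler option):

```java
import java.util.concurrent.Callable;

public class Main {
    // Retries the given action up to maxAttempts times,
    // doubling the delay between attempts (exponential backoff).
    public static <T> T withBackoff(Callable<T> action, int maxAttempts,
                                    long initialDelayMs) throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2;
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulated connectClient: fails twice, then succeeds.
        String result = withBackoff(() -> {
            if (++calls[0] < 3) throw new RuntimeException("connect failed");
            return "connected";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

As noted above, though, if the root cause is backpressure rather than a dropped connection, a retry alone won't restart message flow.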