I have created a sink connector with the following configuration:
{
"connector.class": "io.aiven.kafka.connect.http.HttpSinkConnector",
"http.authorization.type": "none",
"tasks.max": "3",
"name": "{{connector-name}}",
"http.url": "{{service-endpoint-url}}",
"auto.commit.interval.ms": "15000",
"heartbeat.interval.ms": "15000",
"value.converter": "org.apache.kafka.connect.storage.StringConverter",
"retry.backoff.ms": "30000",
"http.ssl.trust.all.certs": "true",
"topics.regex": "{{topic-name}}",
"max.poll.interval.ms": "3600000"
}
I reviewed the service response log and observed that each request took between 5 and 10 seconds. I then counted the requests and found that there were more requests than messages: for example, with 500 messages in Kafka, more than 500 requests were made, and some messages were sent more than once (duplicates).
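For context on what I have considered: one common cause of duplicates in a sink connector is the consumer group rebalancing when a poll batch takes longer than the poll/session timeouts, so already-delivered (but uncommitted) records are redelivered. A hedged sketch of consumer overrides that might address this, assuming the worker allows connector-level client overrides (`connector.client.config.override.policy=All`) — the batch size of 50 is an illustrative value, not something from my current setup:

```json
{
  "consumer.override.max.poll.records": "50",
  "consumer.override.max.poll.interval.ms": "3600000",
  "consumer.override.session.timeout.ms": "60000"
}
```

With 5–10 seconds per request, a smaller `max.poll.records` keeps each poll cycle short enough to avoid rebalances, but I am not certain this is the root cause in my case.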
Which configuration will fix this issue?
Thank you.