When an exception is thrown during flush() (e.g. a network error), Kafka Connect rewinds the offsets to the last committed ones and retries the commit. This causes duplicates in the connector because the failed batch's records are still cached in the record grouper. Clearing the record grouper on exception solves the issue.
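A minimal sketch of the failure mode and the fix, using hypothetical names (`RecordGrouper`, `SinkTaskSketch` are illustrative, not the connector's actual classes): when flush fails and offsets are rewound, Connect re-delivers the same records; unless the grouper is cleared in the exception path, the re-delivered records are added on top of the cached ones.

```java
import java.util.ArrayList;
import java.util.List;

public class FlushExample {
    // Hypothetical stand-in for the connector's record grouper:
    // caches records between flushes.
    static class RecordGrouper {
        private final List<String> records = new ArrayList<>();
        void put(String record) { records.add(record); }
        void clear() { records.clear(); }
        int size() { return records.size(); }
    }

    static class SinkTaskSketch {
        final RecordGrouper grouper = new RecordGrouper();
        boolean failNextFlush = true; // simulate a transient network error

        void put(List<String> records) {
            records.forEach(grouper::put);
        }

        void flush() {
            try {
                if (failNextFlush) {
                    failNextFlush = false;
                    throw new RuntimeException("simulated network error");
                }
                // ... upload grouper contents, then drop them.
                grouper.clear();
            } catch (RuntimeException e) {
                // The fix: clear the cache before the exception propagates.
                // Connect will rewind offsets and re-deliver the same
                // records; without this clear() they would be duplicated.
                grouper.clear();
                throw e;
            }
        }
    }

    public static void main(String[] args) {
        SinkTaskSketch task = new SinkTaskSketch();
        task.put(List.of("r1", "r2"));
        try {
            task.flush(); // fails; grouper is cleared in the catch block
        } catch (RuntimeException ignored) {
        }
        // Offsets were rewound, so the same records arrive again.
        task.put(List.of("r1", "r2"));
        System.out.println(task.grouper.size()); // 2, not 4
    }
}
```

Without the `clear()` in the catch block, the grouper would hold four records after re-delivery and the next successful flush would write duplicates.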