confluentinc / kafka-connect-jdbc

Kafka Connect connector for JDBC-compatible databases

[CCDB-5238] Reinitialize writer object during retry to refresh cache to clear invalid state #1276

Closed Tanish0019 closed 1 year ago

Tanish0019 commented 1 year ago

Problem

In Confluent Cloud, if someone manually deletes the DB table while the JDBC connector is running, the connector starts failing even if the auto.create config is set to true. Ideally, when auto.create is true, the table should be created again. This happens because the connector maintains a cache of the tables it has seen; if a table is deleted, that cache becomes invalid and no new table is created until the process restarts.
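The stale-cache behavior can be illustrated with a minimal sketch. The `TableCache` class and its `ensureTable` method are hypothetical names, not the connector's actual API; a `Set` stands in for the live database catalog:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical illustration of the stale-cache problem: once a table is
// recorded in the cache, the existence check is skipped, so a table dropped
// externally is never re-created until the process restarts.
class TableCache {
    private final Set<String> knownTables = new HashSet<>();

    // Returns true if a CREATE TABLE would be issued for this write.
    boolean ensureTable(String table, Set<String> actualDbTables) {
        if (knownTables.contains(table)) {
            return false; // cache hit: existence check skipped, even if the table is gone
        }
        if (!actualDbTables.contains(table)) {
            actualDbTables.add(table); // stands in for auto.create issuing CREATE TABLE
        }
        knownTables.add(table);
        return true;
    }
}
```

After the table is dropped, `ensureTable` keeps returning `false` on the cache hit, so writes fail against a table that no longer exists.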

Solution

Reinitialize the writer object when the connector goes into the final retry state. This resets the cache, and the table is created again.

Does this solution apply anywhere else?
If yes, where?

Test Strategy

Added integration test for the scenario.


Release Plan

Bug fix primarily for Confluent Cloud; it will be released in the latest version of the connector.

CLAassistant commented 1 year ago

CLA assistant check
All committers have signed the CLA.

ypmahajan commented 1 year ago

Thanks @Tanish0019 for the quick fix. Did you get a chance to check what happens if there are multiple tables that the connector is pushing data to, and one of them is deleted? We re-initialise the writer, which clears the internal cache for all the tables. In that case, the create query will be executed for the existing tables too. Will that exception be handled appropriately?

@mukkachaitanya can you also take a look?

Tanish0019 commented 1 year ago

@ypmahajan Yes, I have tested that scenario. It is handled properly, as the DB driver checks whether the table already exists.
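The reason re-running create for surviving tables is safe can be sketched as follows. This is a hypothetical illustration (the `AutoCreate` name and `Set`-based catalog are made up): table existence is checked before any CREATE is issued, so the operation is idempotent for tables that still exist.

```java
import java.util.Set;

// Hypothetical sketch: after the cache is cleared, create runs again for
// every table, but existence is checked first, so tables that still exist
// are left untouched. A Set stands in for the live DB catalog.
class AutoCreate {
    // Returns true only if a CREATE TABLE was actually issued.
    static boolean createIfMissing(String table, Set<String> dbCatalog) {
        if (dbCatalog.contains(table)) {
            return false; // table already exists: nothing to do
        }
        dbCatalog.add(table); // stands in for executing CREATE TABLE
        return true;
    }
}
```

So after the writer is reinitialized, only the dropped table triggers a real CREATE; the others resolve to a no-op existence check.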