Hello,
I wanted to ask: what is the best practice for running a sink connector? Should we run a separate Kafka sink connector instance per table, each consuming a single topic, or a single JDBC sink connector for all tables, with tasks.max set to the number of tables and all topics listed in one configuration, e.g. topics=table1,table2,table3?
Note: We also need to apply transformations to the date/timestamp/double columns of each table using TimestampConverter/doubleconv. Our use case is insert-only replication from MySQL to Greenplum: no deletes, updates, or DDL operations.
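For context, a rough sketch of the single-connector variant we are considering (connector name, connection URL, field name, and the doubleconv transform are placeholders for illustration; TimestampConverter is the standard Kafka Connect SMT):

```json
{
  "name": "greenplum-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "3",
    "topics": "table1,table2,table3",
    "connection.url": "jdbc:postgresql://greenplum-host:5432/mydb",
    "insert.mode": "insert",
    "transforms": "ts",
    "transforms.ts.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
    "transforms.ts.field": "created_at",
    "transforms.ts.target.type": "Timestamp"
  }
}
```

One concern with this layout: SMT settings like transforms.ts.field are defined once per connector, so if each table needs different columns converted, that might already push us toward one connector per table.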
Thanks