Closed: MouceL closed this issue 3 years ago
In clickhouse_sinker, each partition's data is written into a single ring buffer, so the committed offsets can be kept in order. But if there are multiple downstream ClickHouse tables, one partition's data may contain rows for several tables, so it gets flushed into multiple buffers, and each buffer is written to ClickHouse only once it is full. In that case the offsets within a partition can no longer be guaranteed to be committed in order. How can offsets be committed reliably in this situation?
clickhouse_sinker supports only a single local table on multiple shards.

Batches are organized into groups. The happens-before relationship cannot be preserved when messages of one partition are distributed across multiple batches, so those batches must be committed only after ALL of them have been written to ClickHouse. See BatchGroup
for details.
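To make the idea concrete, here is a minimal sketch of the "commit only after the whole group is durable" pattern. The type and method names (`Batch`, `BatchGroup`, `MarkWritten`) are illustrative assumptions, not the actual clickhouse_sinker API: each batch holds rows routed to one table, and the partition offset becomes committable only once every batch in the group has been flushed to ClickHouse.

```go
package main

import "fmt"

// Batch is one buffered set of rows destined for a single ClickHouse table.
// (Hypothetical type; the real clickhouse_sinker internals differ.)
type Batch struct {
	MaxOffset int64 // highest Kafka offset contained in this batch
	written   bool  // set once the batch has been flushed to ClickHouse
}

// BatchGroup holds every batch produced from one stretch of a partition's
// messages. The partition offset may only be committed once ALL batches
// in the group have been written to ClickHouse.
type BatchGroup struct {
	batches []*Batch
}

func (g *BatchGroup) Add(b *Batch) { g.batches = append(g.batches, b) }

// MarkWritten records that one batch reached ClickHouse. If the whole
// group is now durable, it returns the highest offset in the group and
// true, meaning that offset is safe to commit to Kafka.
func (g *BatchGroup) MarkWritten(b *Batch) (int64, bool) {
	b.written = true
	max := int64(-1)
	for _, batch := range g.batches {
		if !batch.written {
			return -1, false // some table's buffer is not flushed yet
		}
		if batch.MaxOffset > max {
			max = batch.MaxOffset
		}
	}
	return max, true
}

func main() {
	g := &BatchGroup{}
	tableA := &Batch{MaxOffset: 90} // rows routed to table A
	tableB := &Batch{MaxOffset: 99} // rows routed to table B
	g.Add(tableA)
	g.Add(tableB)

	if _, ok := g.MarkWritten(tableA); !ok {
		fmt.Println("table A flushed; offset NOT committed yet")
	}
	if off, ok := g.MarkWritten(tableB); ok {
		fmt.Printf("all tables flushed; commit offset %d\n", off)
	}
}
```

The key property: even though the two tables' buffers flush at different times, the consumer never commits an offset that covers unwritten rows, so a crash at any point only replays messages, never loses them.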
In clickhouse_sinker, every Kafka partition's data is stored in a ring and flushed to ClickHouse, and the offset is committed after that. How can I insert data into different ClickHouse tables according to each message's label and still commit offsets correctly? Can you give me a suggestion?