itinycheng / flink-connector-clickhouse

Flink SQL connector for ClickHouse. Supports ClickHouseCatalog and reading/writing primitive data, maps, and arrays to ClickHouse.
Apache License 2.0

Updates are executed as delete-then-insert; with batch writes enabled, the data is deleted but never re-inserted #63

Open ambitfly opened 1 year ago

ambitfly commented 1 year ago

The Flink changelog below shows the update arriving as a delete (-D) of the old row followed by an insert (+I) of the new row for the same record:

```
-D[10.10.21.21, 8085, 视频源1, rtsp://xxxx@xxxx:554/Streaming/Channels/201, 3, {}, 1, 测试1, 位置123456, 测试用, 标签1, 198dae6f-7604-11ed-99d8-00155d3ba55a, null, null, 395211b3-7603-11ed-8e65-00155d3ba55a, 测试组, 2fe3b8de-7603-11ed-8e65-00155d3ba55a, 型号1, 17cef140-7603-11ed-8e65-00155d3ba55a, 视频源1]
+I[10.10.21.21, 8085, 视频源1, rtsp://xxxx123456@xxxx:554/Streaming/Channels/201, 3, {}, 1, 测试1, 位置123456, 测试用, 标签1, 198dae6f-7604-11ed-99d8-00155d3ba55a, 211be11a-7603-11ed-8e65-00155d3ba55a, 海康, 395211b3-7603-11ed-8e65-00155d3ba55a, 测试组, 2fe3b8de-7603-11ed-8e65-00155d3ba55a, 型号1, 17cef140-7603-11ed-8e65-00155d3ba55a, 视频源1]
```

After this batch is flushed, the row is missing from ClickHouse.

Setting 'sink.batch-size' to 1 makes the problem disappear.
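For reference, a minimal sketch of such a sink definition, assuming the option names documented in this connector's README ('sink.batch-size', 'sink.ignore-delete', etc.); the table, column, and connection details are hypothetical:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ClickHouseSinkWorkaround {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Hypothetical sink table. 'sink.batch-size' = '1' flushes every record
        // individually, so a -D and a +I for the same key can never be reordered
        // within one batch. 'sink.ignore-delete' must be 'false' for deletes to
        // reach ClickHouse at all (it defaults to true).
        tEnv.executeSql(
                "CREATE TABLE video_source_sink (" +
                "  id STRING," +
                "  name STRING," +
                "  PRIMARY KEY (id) NOT ENFORCED" +
                ") WITH (" +
                "  'connector' = 'clickhouse'," +
                "  'url' = 'clickhouse://127.0.0.1:8123'," +
                "  'database-name' = 'default'," +
                "  'table-name' = 'video_source'," +
                "  'sink.batch-size' = '1'," +
                "  'sink.ignore-delete' = 'false'" +
                ")");
    }
}
```

The trade-off is throughput: a batch size of 1 issues one write per record instead of amortizing writes across a batch.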

itinycheng commented 1 year ago

@ambitfly

Root cause: within a single batch, statements are executed in the order insert > update > delete; see the code in ClickHouseUpsertExecutor. Because deletes are flushed after inserts, the -D for the old row ends up removing the row that the +I just wrote.

Also note that with the default configuration sink.ignore-delete = true, delete statements are not executed at all.
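For illustration only, a minimal sketch of the fixed flush order described above; the class, method, and SQL details are simplified and are not the connector's actual ClickHouseUpsertExecutor implementation:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Simplified sketch of an upsert executor that buffers rows into three
// prepared statements and flushes them in a fixed order.
public class UpsertExecutorSketch {
    private final PreparedStatement insertStmt;
    private final PreparedStatement updateStmt;
    private final PreparedStatement deleteStmt;

    public UpsertExecutorSketch(Connection conn) throws SQLException {
        insertStmt = conn.prepareStatement("INSERT INTO t (id, name) VALUES (?, ?)");
        updateStmt = conn.prepareStatement("ALTER TABLE t UPDATE name = ? WHERE id = ?");
        deleteStmt = conn.prepareStatement("ALTER TABLE t DELETE WHERE id = ?");
    }

    public void executeBatch() throws SQLException {
        // Fixed flush order: all inserts, then all updates, then all deletes.
        // If one Flink update arrived as -D(old) + +I(new) for the same key,
        // the insert lands first and the trailing DELETE removes it again,
        // which is exactly the symptom reported in this issue.
        insertStmt.executeBatch();
        updateStmt.executeBatch();
        deleteStmt.executeBatch();
    }
}
```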

ysq5202121 commented 8 months ago

@ambitfly Using an auto-increment ID should avoid the problem, because the same ID never has two records, so the delete and the insert in one batch never target the same key.