DemoMoon opened this issue 1 year ago
ClickHouseCommitSink.java:124 try (Connection con = dataSource.getConnection(); PreparedStatement ps = con.prepareStatement(sql)) {
ClickHouseCommitSink.java:206 ps.addBatch();
c.u.a.b.c.k.s.clickhouse.BaseClickhouse : batchInsert is error list size 10000 batchSize 10000 ex Unknown error 1002, server ClickHouseNode [uri=http://xxx:8123/default]@855626085
java.sql.BatchUpdateException: Unknown error 1002, server ClickHouseNode [uri=http://xx:8123/default]@855626085 at com.clickhouse.jdbc.SqlExceptionUtils.batchUpdateError(SqlExceptionUtils.java:107) at com.clickhouse.jdbc.internal.InputBasedPreparedStatement.executeAny(InputBasedPreparedStatement.java:154)
After I moved the data source initialization into the Spring Boot startup process, the exception above disappeared, but now it has reappeared. What could be causing it?
This exception is reported only occasionally.
Hi @DemoMoon, how long did the insertion take? InterruptedException is raised when the insertion is interrupted by another thread, but it's hard to tell the cause from the stack trace. Unknown error 1002 means the JDBC driver was not able to extract an error from the server response, so you'll have to look into the system.query_log table or the error log in ClickHouse to investigate.
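The system.query_log lookup suggested above can be sketched as follows. This is a minimal illustration, not code from the project: the class and method names are invented, and the 10-minute lookback window and row limit are arbitrary choices. The query uses ClickHouse's default system.query_log columns; in a real run you would execute it through the same JDBC DataSource the sink already uses.

```java
public class QueryLogCheck {

    // Builds a diagnostic query for recent failed statements.
    // Column and table names follow ClickHouse defaults; the time
    // window and LIMIT are illustrative values for this sketch.
    static String failedQueryLookup() {
        return "SELECT event_time, query, exception_code, exception "
             + "FROM system.query_log "
             + "WHERE type = 'ExceptionWhileProcessing' "
             + "AND event_time > now() - INTERVAL 10 MINUTE "
             + "ORDER BY event_time DESC LIMIT 20";
    }

    public static void main(String[] args) {
        // In practice, run this against the cluster that rejected the batch:
        //   try (Connection con = dataSource.getConnection();
        //        ResultSet rs = con.createStatement()
        //                          .executeQuery(failedQueryLookup())) { ... }
        System.out.println(failedQueryLookup());
    }
}
```

The exception column should then show the real server-side error that the driver reported only as "Unknown error 1002".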
Describe the bug
java.sql.SQLException: java.lang.InterruptedException
Steps to reproduce
Flink reads 10,000 records from Kafka at a time and writes them to MySQL (in batches of 10,000) and to ClickHouse (in batches of 5,000). Occasionally this InterruptedException is reported.
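The batching described above (10,000 records split into ClickHouse batches of 5,000) can be sketched as below. This is an illustrative helper, not the reporter's code; the class and method names are invented.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplitter {

    // Splits a list of rows into fixed-size chunks, mirroring the
    // "10,000 in, 5,000 per ClickHouse batch" flow in the report.
    static <T> List<List<T>> chunks(List<T> rows, int batchSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            out.add(rows.subList(i, Math.min(i + batchSize, rows.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 10000; i++) rows.add(i);

        // Each chunk would become one PreparedStatement batch
        // (addBatch per row, then executeBatch per chunk).
        List<List<Integer>> batches = chunks(rows, 5000);
        System.out.println(batches.size()); // 2
    }
}
```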
Expected behaviour
Code example
public abstract class BaseClickhouse extends RichSinkFunction<List> {
private static final Map<String, DataSource> map = new HashMap<>();
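One pattern worth checking in a sink like this: if the Flink task thread is interrupted (for example during cancellation or checkpointing) while a JDBC batch is in flight, the driver can surface it as a SQLException wrapping InterruptedException, exactly as in the stack trace above. A minimal sketch of interrupt-aware error handling, assuming invented names (this is not the project's actual handler):

```java
public class InterruptAwareSink {

    // Walks the cause chain of a failed batch insert; if the root cause
    // is an InterruptedException, restores the thread's interrupt flag
    // instead of swallowing it, so the framework can shut down cleanly.
    static void handleBatchFailure(Exception ex) {
        Throwable t = ex;
        while (t != null) {
            if (t instanceof InterruptedException) {
                Thread.currentThread().interrupt(); // preserve interrupt status
                return;
            }
            t = t.getCause();
        }
        // Not interruption-related: rethrow or log as a genuine failure.
    }

    public static void main(String[] args) {
        // Simulate the reported case: SQLException caused by InterruptedException.
        Exception ex = new java.sql.SQLException(new InterruptedException());
        handleBatchFailure(ex);
        System.out.println(Thread.interrupted()); // true
    }
}
```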