ClickHouse / clickhouse-java

Java client and JDBC driver for ClickHouse
https://clickhouse.com
Apache License 2.0

jdbc did not return successfully #413

Open aa8808021 opened 4 years ago

aa8808021 commented 4 years ago

JDBC version: 0.2
JDBC URL: jdbc:clickhouse://xx.xx.xx.xx:8123/lncsbsm?socket_timeout=36000000

Insert statement: insert into table (clonum ...) select clonum ... from xxxx
Call method: jdbcTemplate.update(insert)
Actual execution time: 2 hours
Error: the call does not return after the statement finishes executing; the insert stays in running status until socket_timeout is reached and a timeout is thrown.
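
For reference, a minimal sketch of the reported setup. The URL and the clonum column come from the report; the Spring data-source wiring and the target/source table names are placeholder assumptions:

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class InsertSelectRepro {
    public static void main(String[] args) {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setUrl("jdbc:clickhouse://xx.xx.xx.xx:8123/lncsbsm?socket_timeout=36000000");
        JdbcTemplate jdbcTemplate = new JdbcTemplate(ds);

        // Long-running INSERT ... SELECT (about 2 hours per the report).
        // The reported behavior: this call never returns after the server
        // finishes, and only fails once socket_timeout is reached.
        String insert = "insert into target (clonum) select clonum from source";
        jdbcTemplate.update(insert);
    }
}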

aa8808021 commented 4 years ago

Try ClickHouseProperties.async?

How do I use it? Thanks.
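
A hedged sketch of what that suggestion might look like with the 0.2 (ru.yandex) driver, assuming ClickHouseProperties exposes the async flag through a setAsync setter; verify against the driver version you actually use:

import ru.yandex.clickhouse.ClickHouseDataSource;
import ru.yandex.clickhouse.settings.ClickHouseProperties;

ClickHouseProperties props = new ClickHouseProperties();
props.setAsync(true);             // assumed setter for the async property named above
props.setSocketTimeout(36000000); // same timeout as in the reported URL

ClickHouseDataSource ds = new ClickHouseDataSource(
        "jdbc:clickhouse://xx.xx.xx.xx:8123/lncsbsm", props);
// Connections obtained from ds would then carry async=true.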

konstantinosss commented 1 year ago

It seems that similar code from the extended API example also hangs on a large amount of data when async=false is specified, since there is no thread reading from the BlockingPipedOutputStream:

java.base@17.0.3/java.util.concurrent.ArrayBlockingQueue.offer(ArrayBlockingQueue.java:396)
app//com.clickhouse.client.stream.BlockingPipedOutputStream.updateBuffer(BlockingPipedOutputStream.java:65)
app//com.clickhouse.client.stream.BlockingPipedOutputStream.updateBuffer(BlockingPipedOutputStream.java:55)
app//com.clickhouse.client.stream.BlockingPipedOutputStream.writeBytes(BlockingPipedOutputStream.java:180)
app//com.clickhouse.client.ClickHouseOutputStream.write(ClickHouseOutputStream.java:231)
app//com.clickhouse.client.data.BinaryStreamUtils.writeInt32(BinaryStreamUtils.java:779)

This is the example:

Statement sth = connection.createStatement();
sth.unwrap(ClickHouseRequest.class).write().table("test.writer")
    .format(ClickHouseFormat.RowBinary).data(out -> {
        for (int i = 0; i < 10; i++) {
            // write data into the piped stream in the current thread
            BinaryStreamUtils.writeInt32(out, i);
            BinaryStreamUtils.writeString(out, "Name " + i);
        }
    }).sendAndWait(); // query happens in a separate thread

zhicwu commented 1 year ago

Hi @konstantinosss, yes, ClickHouseWriter should be enhanced. For now, the workaround is to add request.option(ClickHouseClientOption.MAX_QUEUED_BUFFERS, 0) to use an unbounded queue instead, if you have more than 4096 * 512 bytes to write.
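
A sketch of that workaround applied to the example above, reusing the sth statement; the MAX_QUEUED_BUFFERS option and the 0 value are from the comment, while the row count is only an illustration of a payload above the 4096 * 512 byte threshold:

// Setting MAX_QUEUED_BUFFERS to 0 switches BlockingPipedOutputStream to an
// unbounded queue, so a large synchronous write no longer blocks in offer().
sth.unwrap(ClickHouseRequest.class)
    .option(ClickHouseClientOption.MAX_QUEUED_BUFFERS, 0)
    .write().table("test.writer")
    .format(ClickHouseFormat.RowBinary)
    .data(out -> {
        for (int i = 0; i < 1_000_000; i++) { // illustrative large payload
            BinaryStreamUtils.writeInt32(out, i);
            BinaryStreamUtils.writeString(out, "Name " + i);
        }
    }).sendAndWait();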

Update: sorry, I was sleepy last night :p An unbounded queue is only required when async=false. By default the JDBC driver runs in synchronous mode, so you can use stmt.unwrap(ClickHouseRequest.class).copy().option(ClickHouseClientOption.ASYNC, true) instead.
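
A sketch of that alternative, keeping the bounded queue but enabling ASYNC on a copy of the request so a separate thread drains the piped stream while the current thread writes; stmt is assumed to be the Statement from the earlier example:

ClickHouseRequest.Mutation request = stmt.unwrap(ClickHouseRequest.class)
        .copy()                                     // leave the statement's own request untouched
        .option(ClickHouseClientOption.ASYNC, true) // per the suggestion above
        .write();
request.table("test.writer")
        .format(ClickHouseFormat.RowBinary)
        .data(out -> {
            for (int i = 0; i < 10; i++) {
                BinaryStreamUtils.writeInt32(out, i);
                BinaryStreamUtils.writeString(out, "Name " + i);
            }
        }).sendAndWait();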