Open ruslanen opened 2 years ago
And from: https://jdbc.postgresql.org/documentation/query/#getting-results-based-on-a-cursor
// make sure autocommit is off
conn.setAutoCommit(false);
Statement st = conn.createStatement();
st.setFetchSize(50); // any positive fetch size enables cursor-based fetching
It looks like for PostgreSQL, in order to use cursor-based batching of results, you need to disable autocommit.
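For reference, here is a minimal self-contained sketch of cursor-based fetching with the PostgreSQL JDBC driver, following the linked documentation. The connection URL, credentials, table name, and fetch size are placeholders, not values from this thread:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CursorFetchSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; replace with your own.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "pass")) {
            // Cursor-based fetching only works with autocommit off.
            conn.setAutoCommit(false);
            try (Statement st = conn.createStatement()) {
                // Without a fetch size the driver materializes the whole
                // result set in memory; with one, rows arrive in batches.
                st.setFetchSize(1000);
                try (ResultSet rs = st.executeQuery("SELECT * FROM big_table")) {
                    while (rs.next()) {
                        // process one row at a time
                    }
                }
            }
            conn.commit();
        }
    }
}
```

Note that this requires a live PostgreSQL instance, so it is a sketch to adapt rather than something runnable as-is.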
Thank you. It's strange that setAutoCommit(false) can't be configured.
Applying the fix and rebuilding the JDBC bridge helped, thank you @UnamedRus.
It only remains to understand what negative side effects it might have.
Hi @ruslanen, could you go through your configuration and changes? It's almost a year later and I'm facing the same issues, trying to tweak ClickHouse, the clickhouse-jdbc-bridge, and the JDBC driver parameters to make it work without any OOM or timeout on a 2M-row table with large CLOBs.
The autocommit requirement was PostgreSQL-specific behavior, and judging from the CLOBs you are using Oracle, so it may be a different issue.
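For the Oracle case, a hedged sketch of the usual knobs for keeping large result sets and LOBs out of memory. The connection details are placeholders, and the driver property names (`defaultRowPrefetch`, `oracle.jdbc.defaultLobPrefetchSize`) are assumptions to verify against your Oracle JDBC driver version, not settings confirmed in this thread:

```java
import java.io.Reader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class OracleFetchSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "user");       // placeholder
        props.setProperty("password", "pass");   // placeholder
        // Assumed Oracle driver properties: cap rows fetched per round
        // trip and how much LOB data is prefetched alongside each row.
        props.setProperty("defaultRowPrefetch", "100");
        props.setProperty("oracle.jdbc.defaultLobPrefetchSize", "4000");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCL", props); // placeholder URL
             Statement st = conn.createStatement()) {
            st.setFetchSize(100); // per-statement fetch size also applies
            try (ResultSet rs = st.executeQuery(
                    "SELECT id, doc FROM big_clob_table")) { // placeholder table
                while (rs.next()) {
                    // Stream each CLOB instead of materializing it as a String.
                    try (Reader r = rs.getCharacterStream("doc")) {
                        // process the CLOB in chunks
                    }
                }
            }
        }
    }
}
```

This needs a live Oracle instance, so treat it as a starting point for experimentation rather than a known fix.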
Hi, I'm having trouble with the JDBC Bridge. I'm trying to execute a CREATE TABLE query in ClickHouse with JDBC, based on a PostgreSQL datasource:
but I get the following error in the JDBC Bridge:
I tried different values of the max_block_size setting, but it didn't help. Here are my questions:
My environment: Ubuntu 18.04, 12 GB RAM, 8 CPU
clickhouse/jdbc-bridge:2.1.0
clickhouse/clickhouse-server:22.7.2.15
JDBC Bridge container with default JVM settings.
PostgreSQL datasource table: 200 000 000 rows, 36 GB.
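The original CREATE TABLE query is not shown above, but for context, a ClickHouse table backed by the JDBC bridge generally follows this shape. The datasource name, schema, and table names below are placeholders, not the reporter's actual values:

```sql
-- 'pg-ds' is an assumed datasource name configured in the JDBC bridge;
-- 'public' / 'source_table' are placeholder schema and table names.
CREATE TABLE jdbc_table
ENGINE = JDBC('pg-ds', 'public', 'source_table');
```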