ClickHouse / clickhouse-jdbc-bridge

A JDBC proxy from ClickHouse to external databases
Apache License 2.0

JDBC bridge + bigquery #154

Open · andaag opened this issue 2 years ago

andaag commented 2 years ago

So, I'd love to see some official documentation on how to combine this with BigQuery's JDBC driver.

I just tested this... and surprisingly it works. I essentially just dumped all the jar files from the BigQuery connector (an issue is pending to publish it on Maven; currently it's a zip download) into the drivers directory and set up a datasource for it:

{
  "$schema": "../../../../../docker/config/datasource.jschema",
  "bigquery": {
    "driverClassName": "com.simba.googlebigquery.jdbc.Driver",
    "jdbcUrl": "jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=nansen-endpoints-dev;OAuthType=0;OAuthPvtKeyPath=/app/bigquery/sa.json;OAuthServiceAcctEmail=SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com",
    "initializationFailTimeout": 0,
    "minimumIdle": 0,
    "maximumPoolSize": 10
  }
}
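
For reference, once ClickHouse itself is pointed at the bridge (e.g. via the jdbc_bridge host/port settings in config.xml), a query against this datasource looks roughly like the sketch below; the dataset, table, and column names are made up:

-- Hypothetical query through the bridge, using the named datasource above;
-- 'analytics.events' and its columns are placeholders.
SELECT *
FROM jdbc('bigquery', 'SELECT event_id, user_id, event_ts FROM analytics.events LIMIT 100')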

This could be an interesting option for us. We're looking into data sync jobs between BigQuery and ClickHouse, and this would let us use dbt materialized views for that, for example. It probably wouldn't be optimal performance, but in terms of ease of use for our developers it would be pretty appealing.
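
As a rough sketch of the kind of sync job we have in mind (assuming the 'bigquery' datasource above; the target table and BigQuery names are made up):

-- Hypothetical one-shot materialization: pull rows from BigQuery through
-- the bridge into a local ClickHouse table. All names are placeholders.
INSERT INTO events_local
SELECT *
FROM jdbc('bigquery', 'SELECT event_id, user_id, event_ts FROM analytics.events WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)')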

How... insane is this? It feels... wrong. At the same time, it did work fairly quickly. Is it worth spending time tuning timeouts and such to be able to use this to materialize BigQuery data in ClickHouse? I'd love to avoid our teams needing Spark etc. for this.

zhicwu commented 2 years ago

You're not crazy :) As long as the datasource has a fully functional JDBC driver, you should be able to access it from ClickHouse via the JDBC bridge.

Probably wouldn't be optimal performance

The overhead of the JDBC bridge is around 10%-20% according to my earlier tests, and that does not count the initial query used for type inference. However, I found it acceptable for processing millions of rows or even near-realtime monitoring (Grafana plus distributed queries against various databases). In the case of cross-region data sync, it's actually faster than a direct connection for some databases because of LZ4 compression.
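
For example, the kind of cross-database query this enables for dashboards, joining local ClickHouse data with rows fetched through the bridge (all table and column names below are hypothetical):

-- Hypothetical monitoring-style query; 'events_local' and the BigQuery
-- table are placeholders.
SELECT e.user_id, count() AS events, any(s.plan) AS plan
FROM events_local AS e
INNER JOIN jdbc('bigquery', 'SELECT user_id, plan FROM billing.subscriptions') AS s ON e.user_id = s.user_id
GROUP BY e.user_id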

Is it worth spending time tuning timeouts and such to be able to use this to materialize BigQuery data in ClickHouse?

Unfortunately, I'd suggest limiting the usage, for the reasons below: 1) the JDBC bridge has issues in both design and implementation

andaag commented 2 years ago

Thanks for the detailed writeup!

Can you link me to some more information on clickhouse-data-service? It sounds very worth checking out.

zhicwu commented 2 years ago

Can you link me to some more information on clickhouse-data-service?

Sorry, it does not exist; I only mentioned it in ClickHouse/clickhouse-jdbc#784 :p An alternative and more generic implementation, as far as I know, is trinodb/trino#1839.

andaag commented 2 years ago

Alright, thanks for your help!

I'll leave it up to you whether to keep this ticket open as documentation or to close it.