go-graphite / carbon-clickhouse

Graphite metrics receiver with ClickHouse as storage
MIT License

How to use carbon-clickhouse with distributed tables? #119

Open mcarbonneaux opened 1 year ago

mcarbonneaux commented 1 year ago

How do you configure carbon-clickhouse with ClickHouse distributed tables?

Felixoid commented 1 year ago

I am not sure what you mean. I've used distributed tables for inserts there.

mcarbonneaux commented 1 year ago

The README uses partitioned tables, not distributed tables.

clickhouse distributed table: https://clickhouse.com/docs/en/sql-reference/distributed-ddl

mcarbonneaux commented 1 year ago

With a distributed table you can spread data across ClickHouse shards and scale linearly with the number of nodes (depending on the efficiency of the sharding key).

Felixoid commented 1 year ago

You should rather use a single table, not the "ON CLUSTER" clause.

See https://clickhouse.com/docs/en/engines/table-engines/special/distributed/

mcarbonneaux commented 1 year ago

That documentation is what I was searching for!

The idea is to store data not on a single node but in a cluster with multiple shards, in order to scale.

The CREATE TABLE instructions in the README are for a single node, or have I missed something?

Would it be possible to have an example of CREATE TABLE in distributed mode in the README?

sheyt0 commented 1 year ago

You should create regular tables on each node in the cluster. After that you can write into any of the nodes (I use an L7 load balancer).

For reading from all nodes in one request, use Distributed table.

When creating the Distributed table, you can set a sharding_key. That allows you to write to the Distributed table itself; all incoming data will then be routed by the sharding_key.
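For illustration, a Distributed table that routes writes by a sharding key could look like the sketch below. The cluster name (datalayer), the database (default), and the cityHash64(Path) key are assumptions here, not from this thread, so adjust them to your setup:

CREATE TABLE IF NOT EXISTS graphite_dist_sharded AS graphite_repl
ENGINE = Distributed(
    datalayer,           -- cluster name from remote_servers (assumption)
    default,             -- database that holds the local table (assumption)
    graphite_repl,       -- local ReplicatedGraphiteMergeTree on each shard
    cityHash64(Path)     -- sharding key: keeps all points of one metric on one shard
);

With such a key, writes sent to graphite_dist_sharded are fanned out to shards by cityHash64(Path), so all points of a given metric land on the same shard.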

Note here: when you use rollup-conf = "auto" in graphite-clickhouse, you should set rollup-auto-table pointing to the regular table.

Here are example configs from my production setup:

Tables:

CREATE TABLE IF NOT EXISTS graphite_repl ON CLUSTER datalayer (
    `Path`      String  CODEC(ZSTD(3)),
    `Value`     Float64 CODEC(Gorilla, LZ4),
    `Time`      UInt32  CODEC(DoubleDelta, LZ4),
    `Date`      Date    CODEC(DoubleDelta, LZ4),
    `Timestamp` UInt32  CODEC(DoubleDelta, LZ4)
)
ENGINE = ReplicatedGraphiteMergeTree('/clickhouse/tables/{shard}/graphite_repl', '{replica}', 'graphite_rollup')
PARTITION BY toYYYYMMDD(Date)
ORDER BY (Path, Time)
TTL
    Date + INTERVAL 1 WEEK TO VOLUME 'cold_volume',
    Date + INTERVAL 4 MONTH DELETE
SETTINGS
    index_granularity = 512;

CREATE TABLE IF NOT EXISTS graphite_dist ON CLUSTER datalayer AS graphite_repl
ENGINE = Distributed(datalayer, ..., graphite_repl);

carbon-clickhouse:

...
[upload.graphite]
type = "points"
table = "graphite_repl"
...

graphite-clickhouse:


...
[[data-table]]
 table = "graphite_dist"
 rollup-conf = "auto"
 rollup-auto-table = "graphite_repl"
...
mcarbonneaux commented 1 year ago

I will go test that!

mcarbonneaux commented 1 year ago

Could it be useful to put chproxy in front to cache requests (https://www.chproxy.org/)?

Civil commented 1 year ago

If you use carbonapi, it can also cache requests. So that depends on your use case.

Overall, I would suggest starting with a simple setup and adding extra pieces once you encounter a bottleneck.

msaf1980 commented 1 year ago

Could it be useful to put chproxy in front to cache requests (https://www.chproxy.org/)?

No. chproxy can't cache requests with external data (used in points table queries).

graphite-clickhouse can cache finder queries (in render requests). carbonapi can cache the rest in front of the API requests (render, find, tags autocomplete).

So there is no reason to use chproxy for caching, but it is useful as a bouncer/connection-pool limiter.