Closed: pavolloffay closed this issue 2 years ago.

pavolloffay:
Our replication and sharding guide uses the `'{cluster}'` substitution when creating distributed tables, e.g. https://github.com/pavolloffay/jaeger-clickhouse/blob/main/guide-sharding-and-replication.md#replication. I am not sure I understand what exactly it does. Could somebody explain it? @EinKrebs @chhetripradeep

Let's say my ClickHouse deployment defines two clusters. If the CREATE statement is executed, would it create the tables on all clusters?

EinKrebs:
It will resolve to the name of the cluster on which you run the SQL script (i.e. the one you are connected to).
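A quick way to see what `{cluster}` (and the other macros) will expand to on the node you are connected to is the `system.macros` table. A minimal check, assuming the operator has populated the macros (the exact set depends on the operator version):

```sql
-- List the macro substitutions defined on this node; with the
-- clickhouse-operator these typically include 'cluster', 'shard' and 'replica'.
SELECT macro, substitution
FROM system.macros;
```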
pavolloffay:
Let's have a look at this example deployment with 2 clusters:
```
cat <<EOF | kubectl apply -f -
apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
  name: jaeger
spec:
  configuration:
    zookeeper:
      nodes:
        - host: zookeeper.zoo1ns
    clusters:
      - name: cluster1
        layout:
          shardsCount: 2
      - name: cluster2
        layout:
          shardsCount: 2
  templates:
    podTemplates:
      - name: clickhouse-with-empty-dir-volume-template
        spec:
          containers:
            - name: clickhouse-pod
              image: yandex/clickhouse-server:20.7
              volumeMounts:
                - name: clickhouse-storage
                  mountPath: /var/lib/clickhouse
          volumes:
            - name: clickhouse-storage
              emptyDir:
                medium: "" # accepted values: empty str (means node's default medium) or "Memory"
                sizeLimit: 1Gi
EOF
```
Let's exec into `cluster1` and create a table.
```
~/projects/clickhouse/clickhouse-operator/docs/chi-examples(master*) » k get all
NAME READY STATUS RESTARTS AGE
pod/chi-jaeger-cluster1-0-0-0 1/1 Running 0 2m50s
pod/chi-jaeger-cluster1-1-0-0 1/1 Running 0 2m8s
pod/chi-jaeger-cluster2-0-0-0 1/1 Running 0 106s
pod/chi-jaeger-cluster2-1-0-0 1/1 Running 0 84s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/chi-jaeger-cluster1-0-0 ClusterIP None <none> 8123/TCP,9000/TCP,9009/TCP 2m10s
service/chi-jaeger-cluster1-1-0 ClusterIP None <none> 8123/TCP,9000/TCP,9009/TCP 108s
service/chi-jaeger-cluster2-0-0 ClusterIP None <none> 8123/TCP,9000/TCP,9009/TCP 91s
service/chi-jaeger-cluster2-1-0 ClusterIP None <none> 8123/TCP,9000/TCP,9009/TCP 69s
service/clickhouse-jaeger LoadBalancer 10.102.73.91 <pending> 8123:32534/TCP,9000:31866/TCP 2m52s
NAME READY AGE
statefulset.apps/chi-jaeger-cluster1-0-0 1/1 2m50s
statefulset.apps/chi-jaeger-cluster1-1-0 1/1 2m8s
statefulset.apps/chi-jaeger-cluster2-0-0 1/1 106s
statefulset.apps/chi-jaeger-cluster2-1-0 1/1 84s
```

Connect to the first pod of cluster1 with clickhouse-client and inspect `system.clusters`:

```
chi-jaeger-cluster1-0-0-0.chi-jaeger-cluster1-0-0.test.svc.cluster.local :) select * from system.clusters
SELECT *
FROM system.clusters
Query id: 59def622-7c1f-4791-851a-2361d8c9d046
┌─cluster──────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───────────────┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
│ all-replicated │ 1 │ 1 │ 1 │ chi-jaeger-cluster1-0-0 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ all-replicated │ 1 │ 1 │ 2 │ chi-jaeger-cluster1-1-0 │ 172.17.0.8 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ all-replicated │ 1 │ 1 │ 3 │ chi-jaeger-cluster2-0-0 │ 172.17.0.9 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ all-replicated │ 1 │ 1 │ 4 │ chi-jaeger-cluster2-1-0 │ 172.17.0.10 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ all-sharded │ 1 │ 1 │ 1 │ chi-jaeger-cluster1-0-0 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ all-sharded │ 2 │ 1 │ 1 │ chi-jaeger-cluster1-1-0 │ 172.17.0.8 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ all-sharded │ 3 │ 1 │ 1 │ chi-jaeger-cluster2-0-0 │ 172.17.0.9 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ all-sharded │ 4 │ 1 │ 1 │ chi-jaeger-cluster2-1-0 │ 172.17.0.10 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ cluster1 │ 1 │ 1 │ 1 │ chi-jaeger-cluster1-0-0 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ cluster1 │ 2 │ 1 │ 1 │ chi-jaeger-cluster1-1-0 │ 172.17.0.8 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ cluster2 │ 1 │ 1 │ 1 │ chi-jaeger-cluster2-0-0 │ 172.17.0.9 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ cluster2 │ 2 │ 1 │ 1 │ chi-jaeger-cluster2-1-0 │ 172.17.0.10 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards_internal_replication │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards_internal_replication │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 2 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_shard_localhost │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_shard_localhost_secure │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9440 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ test_unavailable_shard │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_unavailable_shard │ 2 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 1 │ 0 │ default │ │ 0 │ 0 │ 0 │
└──────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴─────────────────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘
22 rows in set. Elapsed: 0.004 sec.
```
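Besides `cluster1` and `cluster2`, the output shows two clusters the operator generates automatically, `all-replicated` and `all-sharded`, which span every node of both clusters (the remaining `test_*` entries come from ClickHouse's default config). They can be listed directly, for example:

```sql
-- Show only the operator-generated clusters that span both cluster1 and cluster2.
SELECT cluster, shard_num, replica_num, host_name
FROM system.clusters
WHERE cluster IN ('all-replicated', 'all-sharded');
```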
Now create the table with `ON CLUSTER '{cluster}'`:

```
chi-jaeger-cluster1-0-0-0.chi-jaeger-cluster1-0-0.test.svc.cluster.local :) CREATE TABLE IF NOT EXISTS jaeger_spans_local ON CLUSTER '{cluster}' (
:-] timestamp DateTime CODEC(Delta, ZSTD(1)),
:-] traceID String CODEC(ZSTD(1)),
:-] model String CODEC(ZSTD(3))
:-] ) ENGINE ReplicatedMergeTree('/clickhouse/tables/{shard}/jaeger_spans', '{replica}')
:-] PARTITION BY toDate(timestamp)
:-] ORDER BY traceID
:-] SETTINGS index_granularity=1024;
CREATE TABLE IF NOT EXISTS jaeger_spans_local ON CLUSTER `{cluster}`
(
`timestamp` DateTime CODEC(Delta, ZSTD(1)),
`traceID` String CODEC(ZSTD(1)),
`model` String CODEC(ZSTD(3))
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/jaeger_spans', '{replica}')
PARTITION BY toDate(timestamp)
ORDER BY traceID
SETTINGS index_granularity = 1024
Query id: d9f918ca-1572-4679-b801-fdeb90dfc3c9
┌─host────────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ chi-jaeger-cluster1-1-0 │ 9000 │ 0 │ │ 1 │ 1 │
└─────────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
┌─host────────────────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ chi-jaeger-cluster1-0-0 │ 9000 │ 0 │ │ 0 │ 0 │
└─────────────────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
2 rows in set. Elapsed: 0.217 sec.
chi-jaeger-cluster1-0-0-0.chi-jaeger-cluster1-0-0.test.svc.cluster.local :) exit
Bye.
```
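Note that the distributed DDL status above lists only the two cluster1 hosts. To double-check where the table actually landed, one could ask every node of a spanning cluster in one shot. A hedged sketch using the `clusterAllReplicas` table function (available in recent ClickHouse versions):

```sql
-- Ask every node of the operator-generated 'all-sharded' cluster
-- whether it has the table; only the cluster1 nodes should match.
SELECT hostName() AS host, name
FROM clusterAllReplicas('all-sharded', system.tables)
WHERE name = 'jaeger_spans_local';
```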
Now let's exec into `cluster2` and list the tables. The table is not there:
```
~/projects/clickhouse/clickhouse-operator/docs/chi-examples(master*) » kubectl exec -it statefulset.apps/chi-jaeger-cluster2-0-0 -- clickhouse-client
ClickHouse client version 21.7.4.18 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.7.4 revision 54449.
chi-jaeger-cluster2-0-0-0.chi-jaeger-cluster2-0-0.test.svc.cluster.local :) select * from system.clusters
SELECT *
FROM system.clusters
Query id: 6efbf7a7-a38b-4e60-b7c9-07b135cb1502
┌─cluster──────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───────────────┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
│ all-replicated │ 1 │ 1 │ 1 │ chi-jaeger-cluster1-0-0 │ 172.17.0.4 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ all-replicated │ 1 │ 1 │ 2 │ chi-jaeger-cluster1-1-0 │ 172.17.0.8 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ all-replicated │ 1 │ 1 │ 3 │ chi-jaeger-cluster2-0-0 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ all-replicated │ 1 │ 1 │ 4 │ chi-jaeger-cluster2-1-0 │ 172.17.0.10 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ all-sharded │ 1 │ 1 │ 1 │ chi-jaeger-cluster1-0-0 │ 172.17.0.4 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ all-sharded │ 2 │ 1 │ 1 │ chi-jaeger-cluster1-1-0 │ 172.17.0.8 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ all-sharded │ 3 │ 1 │ 1 │ chi-jaeger-cluster2-0-0 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ all-sharded │ 4 │ 1 │ 1 │ chi-jaeger-cluster2-1-0 │ 172.17.0.10 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ cluster1 │ 1 │ 1 │ 1 │ chi-jaeger-cluster1-0-0 │ 172.17.0.4 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ cluster1 │ 2 │ 1 │ 1 │ chi-jaeger-cluster1-1-0 │ 172.17.0.8 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ cluster2 │ 1 │ 1 │ 1 │ chi-jaeger-cluster2-0-0 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ cluster2 │ 2 │ 1 │ 1 │ chi-jaeger-cluster2-1-0 │ 172.17.0.10 │ 9000 │ 0 │ default │ │ 3 │ 0 │ 97 │
│ test_cluster_two_shards │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards_internal_replication │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards_internal_replication │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 2 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_shard_localhost │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_shard_localhost_secure │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9440 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ test_unavailable_shard │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_unavailable_shard │ 2 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 1 │ 0 │ default │ │ 0 │ 0 │ 0 │
└──────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴─────────────────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘
22 rows in set. Elapsed: 0.005 sec.
chi-jaeger-cluster2-0-0-0.chi-jaeger-cluster2-0-0.test.svc.cluster.local :) show tables
SHOW TABLES
Query id: 2c1bde05-feb8-430b-b9eb-19be360c5f26
Ok.
0 rows in set. Elapsed: 0.004 sec.
chi-jaeger-cluster2-0-0-0.chi-jaeger-cluster2-0-0.test.svc.cluster.local :)
```
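The `ON CLUSTER '{cluster}'` DDL above only fanned out to the cluster it was executed on. A hedged sketch of how the same table could also be created on `cluster2`: run the DDL naming that cluster explicitly, and mind the ZooKeeper-path caveat in the comments:

```sql
-- Naming the cluster explicitly sends the DDL to every cluster2 node.
-- Caveat: with the original path '/clickhouse/tables/{shard}/jaeger_spans',
-- shard numbers repeat across cluster1 and cluster2, so their replicas would
-- join the same replication groups; adding {cluster} to the path keeps the
-- two clusters' tables separate.
CREATE TABLE IF NOT EXISTS jaeger_spans_local ON CLUSTER cluster2
(
    timestamp DateTime CODEC(Delta, ZSTD(1)),
    traceID String CODEC(ZSTD(1)),
    model String CODEC(ZSTD(3))
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{cluster}/{shard}/jaeger_spans', '{replica}')
PARTITION BY toDate(timestamp)
ORDER BY traceID
SETTINGS index_granularity = 1024;
```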
pavolloffay:
> It will resolve to the name of the cluster on which you run the SQL script (i.e. the one you connected to).

@EinKrebs The example above shows that the table wasn't created on all clusters (`cluster1` and `cluster2`). What am I missing?
EinKrebs:
@pavolloffay Why do you need multiple clusters? You can just create more shards on the first cluster. As far as I know, ClickHouse cannot work with multiple clusters at the same time.
pavolloffay:
> @pavolloffay Why do you need multiple clusters?

I don't need multiple clusters; this was just an experiment to test the `{cluster}` macro.
EinKrebs:
@pavolloffay Does your question still remain? I'm asking because your test above shows that the tables are created on only one cluster.
pavolloffay:
Yes, my question remains. I thought that `{cluster}` would make sure the tables are created on all clusters, but that is not the case.
EinKrebs:
No, it does not create the tables on all clusters; it makes the same statement work on any cluster, regardless of its name.
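In other words, `ON CLUSTER '{cluster}'` makes a statement portable across clusters; it does not make it global. To reach every node of both clusters with one statement, the DDL can target a cluster definition that spans them all, e.g. the operator-generated `all-sharded` cluster seen in the `system.clusters` output above. A sketch, using a plain MergeTree to sidestep the replication-path question:

```sql
-- 'all-sharded' contains all four pods (cluster1 and cluster2, two shards each),
-- so the statement is executed once on every node.
CREATE TABLE IF NOT EXISTS jaeger_spans_local ON CLUSTER 'all-sharded'
(
    timestamp DateTime CODEC(Delta, ZSTD(1)),
    traceID String CODEC(ZSTD(1)),
    model String CODEC(ZSTD(3))
)
ENGINE = MergeTree
PARTITION BY toDate(timestamp)
ORDER BY traceID;
```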
pavolloffay:
Do you have a pointer to the docs? :)
pavolloffay:
Thanks! PS: I don't speak Russian 🙈

EinKrebs:
There is a language setting in the top right.
EinKrebs:
Is the issue resolved, so we can close it?

pavolloffay:
Yes.