Altinity / clickhouse-operator

Altinity Kubernetes Operator for ClickHouse creates, configures and manages ClickHouse clusters running on Kubernetes
https://altinity.com
Apache License 2.0

Keeper PVC is not taking input from VolumeClaimTemplates #1362

Closed. zheyu001 closed this issue 10 hours ago

zheyu001 commented 6 months ago

We are trying to migrate our own Keeper config to CHK, but the PVC template does not seem to be taken into account. From a quick scan of the code, it looks like the operator, when creating the CHK, only cares about how many keeper volumes are declared rather than about the content of the PVC templates. At least in our case, Kubernetes complains that "both-paths" is not found.

https://github.com/Altinity/clickhouse-operator/blob/0.23.3/pkg/model/chk/creator.go#L97-L118
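
For illustration, here is a minimal sketch of the kind of spec that triggers this (the claim template name keeper-data, the storage size and the replica count are placeholders, not our actual manifest):

# Hypothetical minimal CHK spec for 0.23.x, reproducing the symptom described above.
# The claim template name "keeper-data" and the sizes are illustrative only.
apiVersion: "clickhouse-keeper.altinity.com/v1"
kind: "ClickHouseKeeperInstallation"
metadata:
  name: clickhouse-keeper
spec:
  configuration:
    clusters:
      - name: keeper
        layout:
          replicasCount: 3
  templates:
    volumeClaimTemplates:
      - name: keeper-data        # 0.23.x appears to ignore this name and spec;
        spec:                    # the generated StatefulSet references a volume
          accessModes:           # named "both-paths" instead, which then cannot
            - ReadWriteOnce      # be found
          resources:
            requests:
              storage: 10Gi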

dxygit1 commented 5 months ago

I also encountered the same problem

I hope to be able to specify which PVC is bound to which directory.
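
A hypothetical sketch of what that could look like, reusing the operator's convention of matching a volumeMount to a volumeClaimTemplate by name (the name keeper-storage and the paths are illustrative, and this is not confirmed to work in 0.23.x):

# Hypothetical fragment of spec.templates in a ClickHouseKeeperInstallation:
# bind a named claim to the keeper storage directory via a matching volumeMount.
podTemplates:
  - name: default
    spec:
      containers:
        - name: clickhouse-keeper
          image: "clickhouse/clickhouse-keeper:24-alpine"
          volumeMounts:
            - name: keeper-storage                 # matches the claim template below
              mountPath: /var/lib/clickhouse-keeper
volumeClaimTemplates:
  - name: keeper-storage
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi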

alex-zaitsev commented 5 months ago

This will be fixed in 0.23.4. There is a workaround in the current version, but it is better not to use it, as it will be deprecated.

dxygit1 commented 5 months ago

Please let me know after you've fixed it. Thank you very much

dxygit1 commented 5 months ago

Approximately when can this be fixed?

dustinmoris commented 5 months ago

I ran into the same issue. What do we need to do in the meantime to get things working? I just need to get CHK working with my CHI, and I don't want to wait a few days or longer until a fix gets released. Ideally I can get this working today with whatever workarounds are needed for now.

genestack-okunitsyn commented 5 months ago

+1 to the issue

dustinmoris commented 5 months ago

I got CHK working with this manifest:

apiVersion: "clickhouse-keeper.altinity.com/v1"
kind: "ClickHouseKeeperInstallation"
metadata:
  name: clickhouse-keeper
  namespace: plausible
spec:
  configuration:
    clusters:
      - name: prod
        layout:
          replicasCount: 3
    settings:
      logger/level: "trace"
      logger/console: "true"
      listen_host: "0.0.0.0"
      keeper_server/storage_path: /var/lib/clickhouse-keeper
      keeper_server/tcp_port: "2181"
      keeper_server/four_letter_word_white_list: "*"
      keeper_server/coordination_settings/raft_logs_level: "information"
      keeper_server/raft_configuration/server/port: "9444"
      prometheus/endpoint: "/metrics"
      prometheus/port: "7000"
      prometheus/metrics: "true"
      prometheus/events: "true"
      prometheus/asynchronous_metrics: "true"
      prometheus/status_info: "false"
  templates:
    podTemplates:
      - name: default
        spec:
          containers:
            - name: clickhouse-keeper
              image: "clickhouse/clickhouse-keeper:24-alpine"
              imagePullPolicy: IfNotPresent
              resources:
                requests:
                  memory: "256M"
                  cpu: "0.5"
                limits:
                  memory: "2Gi"
                  cpu: "2"
    volumeClaimTemplates:
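      # Naming the single claim template "both-paths" appears to be what the
      # 0.23.x operator expects; it matches the "both-paths" volume referenced
      # by the generated StatefulSet (see the error in the original report).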
      - name: both-paths
        spec:
          storageClassName: standard-rwo
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi

jun0tpyrc commented 1 month ago

Thanks, this example (https://github.com/Altinity/clickhouse-operator/issues/1362#issuecomment-2021453779) is good.

For the original issue: we should probably have a working and better example in https://github.com/Altinity/clickhouse-operator/tree/master/docs/chk-examples

Slach commented 10 hours ago

0.24.0 works properly with volumeClaimTemplates; check https://github.com/Altinity/clickhouse-operator/tree/0.24.0/docs/chk-examples/
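
For reference, a minimal sketch of a 0.24.x-style manifest using volumeClaimTemplates (not copied from the linked examples; names, image tag and sizes are placeholders, so treat the chk-examples directory as the authoritative source):

# Illustrative 0.24.x ClickHouseKeeperInstallation with a claim template
# bound to the keeper storage directory; names and sizes are placeholders.
apiVersion: "clickhouse-keeper.altinity.com/v1"
kind: "ClickHouseKeeperInstallation"
metadata:
  name: clickhouse-keeper
spec:
  configuration:
    clusters:
      - name: keeper
        layout:
          replicasCount: 3
  templates:
    podTemplates:
      - name: default
        spec:
          containers:
            - name: clickhouse-keeper
              image: "clickhouse/clickhouse-keeper:24-alpine"
              volumeMounts:
                - name: default                   # matched to the claim template by name
                  mountPath: /var/lib/clickhouse-keeper
    volumeClaimTemplates:
      - name: default
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi

If this behaves as it does for CHI, each Keeper replica should get its own PersistentVolumeClaim rendered from the template, visible via kubectl get pvc.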

Migration from the old schema (one StatefulSet with multiple replicas per kind: ClickHouseKeeperInstallation) to separate StatefulSets with one replica for each replica of kind: ClickHouseKeeperInstallation is still in development, so please wait until 0.24.0 is complete.