Altinity / clickhouse-backup

Tool for easy backup and restore for ClickHouse® using object storage for backup files.
https://altinity.com

Authentication Failed When Configuring Automatic Backup #927

Closed sanjeev3d closed 6 months ago

sanjeev3d commented 6 months ago

Description: I was trying to configure automatic backup using the instructions from this page.

Error Message: Code: 516. DB::Exception: Received from chi-clickhouse-poc-cliffcluster-0-0.click-zoo.svc.cluster.local:9000. DB::Exception: backup: Authentication failed: password is incorrect, or there is no user with such name..
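
A quick way to verify the credentials directly against the server is to run clickhouse-client inside the clickhouse container (a sketch, assuming a user backup with password backup_password, as configured for clickhouse-backup):

kubectl exec -n click-zoo chi-clickhouse-poc-cliffcluster-0-0-0 --container clickhouse -- clickhouse-client --user backup --password backup_password --query "SELECT currentUser()"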

Slach commented 6 months ago

Which user and password are you actually using in your script?

run

kubectl exec -n click-zoo chi-clickhouse-poc-cliffcluster-0-0-0 --container clickhouse-backup -- clickhouse-backup print-config

and check it

sanjeev3d commented 6 months ago

@Slach I have checked the config using the provided ad-hoc command; the user and password are the same ones I'm using, but I'm still getting the same error.

Ad-hoc command output:

clickhouse:
    username: backup
    password: backup_password
    host: localhost
    port: 9000
    disk_mapping: {}
    skip_tables:
        - system.*
        - INFORMATION_SCHEMA.*
        - information_schema.*
        - _temporary_and_external_tables.*
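
For reference, clickhouse-backup reads these values from environment variables on the backup container (clickhouse.username maps to CLICKHOUSE_USERNAME, clickhouse.password to CLICKHOUSE_PASSWORD, and so on). A minimal sketch of a ConfigMap that would produce the output above:

apiVersion: v1
kind: ConfigMap
metadata:
  name: clickhouse-backup-config
data:
  CLICKHOUSE_HOST: localhost
  CLICKHOUSE_PORT: "9000"
  CLICKHOUSE_USERNAME: backup
  CLICKHOUSE_PASSWORD: backup_password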

CronJob config output:

apiVersion: v1
data:
  BACKUP_PASSWORD: backup_password
  BACKUP_USER: backup
  CLICKHOUSE_PORT: "9000"

Slach commented 6 months ago
apiVersion: v1
data:
 BACKUP_PASSWORD: backup_password
 BACKUP_USER: backup
 CLICKHOUSE_PORT: "9000"

This is not a CronJob manifest; "data" looks like part of a ConfigMap, and we don't use ConfigMaps in Examples.md.

What exactly do your kind: ClickHouseInstallation and kind: CronJob manifests look like?

You could look at https://gist.github.com/Slach/d933ecebf93edbbaed7ce0a2deeaabb7 and compare it with your manifests.

Moreover, could you share the output of:

kubectl exec -n click-zoo chi-clickhouse-poc-cliffcluster-0-0-0 --container clickhouse -- grep -C 10 backup -e /etc/clickhouse-server/

sanjeev3d commented 6 months ago

@Slach, previously I was using a simple batch job, which is why "data" is reflected. However, I have now checked against the method you shared via the URL.

Sharing part of the CronJob I'm using:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: clickhouse-backup-cron
spec:
  # runs every minute; for every day at 00:00, use "0 0 * * *"
  schedule: "*/1 * * * *"
  concurrencyPolicy: "Forbid"
  jobTemplate:
    spec:
      backoffLimit: 1
      completions: 1
      parallelism: 1
      template:
        metadata:
          labels:
            app: clickhouse-backup-cron
        spec:
          restartPolicy: Never
          containers:
            - name: run-backup-cron
              image: clickhouse-client:21.3.20
              imagePullPolicy: IfNotPresent
              env:
                - name: CLICKHOUSE_SERVICES
                  value: chi-clickhouse-poc-cliffcluster-0-0,chi-clickhouse-poc-cliffcluster-1-0
                - name: CLICKHOUSE_PORT
                  value: "9000"
                - name: BACKUP_USER
                  value: backup
                - name: BACKUP_PASSWORD
                  value: "backup_password"
                # change to 1, if you want to make full backup only in $FULL_BACKUP_WEEKDAY (1 - Mon, 7 - Sun)
                - name: MAKE_INCREMENT_BACKUP
                  value: "1"
                - name: FULL_BACKUP_WEEKDAY
                  value: "1"
              command:
                - bash
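
(The command above is truncated; in the Examples.md approach these variables are consumed by a script roughly like this sketch, which assumes backups are driven through the server's system.backup_actions integration table:)

BACKUP_NAME="backup-$(date -u +%Y-%m-%d-%H-%M-%S)"
for SERVER in ${CLICKHOUSE_SERVICES//,/ }; do
  clickhouse-client -h "$SERVER" --port "$CLICKHOUSE_PORT" \
    -u "$BACKUP_USER" --password "$BACKUP_PASSWORD" \
    -q "INSERT INTO system.backup_actions(command) VALUES('create ${BACKUP_NAME}')"
done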

Command output

$ kubectl exec -n click-zoo chi-clickhouse-poc-cliffcluster-0-0-0 --container clickhouse -- grep -C 10 backup -e /etc/clickhouse-server/
grep: backup: No such file or directory
command terminated with exit code 2

Slach commented 6 months ago

Oops, sorry: -r instead of -e

kubectl exec -n click-zoo chi-clickhouse-poc-cliffcluster-0-0-0 --container clickhouse -- grep backup -C 10 -r /etc/clickhouse-server/

Slach commented 6 months ago

I was using a simple batch job, which is why "data" is reflected

I don't understand what "simple batch job" means in Kubernetes terms.

sanjeev3d commented 6 months ago

@Slach I was using this earlier, but even with the CronJob I'm getting the same authentication error.

This is what I was using earlier; you can ignore it, as I'm now using the same manifest you asked me to use.

apiVersion: batch/v1
kind: Job
metadata:
  name: clickhouse-backup-job
spec:
  backoffLimit: 1
  completions: 1
  parallelism: 1
  template:
    metadata:
      labels:
        app: clickhouse-backup-job
    spec:
      restartPolicy: Never
      containers:
        - name: run-backup-job
          image: clickhouse-client:21.3.20
          imagePullPolicy: IfNotPresent
          envFrom:
          - configMapRef:
              name: clickhouse-cron-config
          command:
            - bash

Command output

$ kubectl exec -n click-zoo chi-clickhouse-poc-cliffcluster-0-0-0 --container clickhouse -- grep backup -C 10 -r /etc/clickhouse-server/
command terminated with exit code 1

Slach commented 6 months ago

Could you share:

kubectl get chi -n click-zoo clickhouse-poc -o yaml

sanjeev3d commented 6 months ago

@Slach Sharing the output of the above command:

$ kubectl get chi -n click-zoo clickhouse-poc -o yaml

apiVersion: clickhouse.altinity.com/v1
kind: ClickHouseInstallation
metadata:
  annotations:
  creationTimestamp: "2024-05-22T07:16:14Z"
  finalizers:
  - finalizer.clickhouseinstallation.altinity.com
  name: clickhouse-poc
  namespace: click-zoo
spec:
  configuration:
    clusters:
    - layout:
        replicasCount: 2
        shardsCount: 2
      name: cliffcluster
    settings:
      disable_internal_dns_cache: 1
      remote_servers/all-replicated/secret: default
      remote_servers/all-sharded/secret: default
      remote_servers/cliffcluster/secret: default
    users:
      admin/access_management: 1
      admin/networks/ip:
      - 0.0.0.0/0
      - ::/0
      admin/password: click@007
    zookeeper:
      nodes:
      - host: zookeeper-0.zookeepers.click-zoo
        port: 2181
  defaults:
    templates:
      dataVolumeClaimTemplate: clickhouse-storage-template
      podTemplate: pod-template-with-volumes-shard
      serviceTemplate: chi-service-template
  templates:
    podTemplates:
    - name: pod-template-with-volumes-shard
      spec:
        containers:
        - image: clickhouse-server:23.8
          name: clickhouse
          volumeMounts:
          - mountPath: /var/lib/clickhouse
            name: clickhouse-storage-template-1
        - command:
          - bash
          - -xc
          - /bin/clickhouse-backup server
          envFrom:
          - configMapRef:
              name: clickhouse-backup-config
          image: clickhouse-backup:master
          imagePullPolicy: Always
          name: clickhouse-backup
          ports:
          - containerPort: 7171
            name: backup-rest
    serviceTemplates:
    - generateName: clickhouse-{chi}
      name: chi-service-template
      spec:
        ports:
        - name: http
          port: 8123
          targetPort: 8123
        - name: tcp
          port: 9000
          targetPort: 9000
        - name: interserver
          port: 9009
          targetPort: 9009
        type: NodePort
    volumeClaimTemplates:
    - name: clickhouse-storage-template-1
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: robin-encrypt

Slach commented 6 months ago

I don't see a backup user in the users section of your CHI manifest.

Add:

spec:
  configuration:
    users:
      backup/networks/ip:
      - 0.0.0.0/0
      backup/password: backup_password
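
If you prefer not to keep a plaintext password in the CHI manifest, the operator also accepts a SHA-256 hash (a sketch; the placeholder hash must be generated yourself):

# generate the hash: echo -n "backup_password" | sha256sum
spec:
  configuration:
    users:
      backup/networks/ip:
      - 0.0.0.0/0
      backup/password_sha256_hex: <sha256 hex of backup_password>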
sanjeev3d commented 6 months ago

Thanks, it's working now, but I'm getting the error "DB::Exception: Table system.backup_list does not exist."

Slach commented 6 months ago

Check:

kubectl exec -n click-zoo chi-clickhouse-poc-cliffcluster-0-0-0 --container clickhouse-backup -- clickhouse-backup print-config | grep integration

It looks like you didn't follow the instructions; see the details in https://github.com/Altinity/clickhouse-backup/blob/master/Examples.md?plain=1#L199-L200
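
For reference, the lines referenced above enable the integration tables on the clickhouse-backup side; a sketch of the relevant key in the clickhouse-backup-config ConfigMap (the environment variable corresponds to the api.create_integration_tables config option):

apiVersion: v1
kind: ConfigMap
metadata:
  name: clickhouse-backup-config
data:
  # makes clickhouse-backup server create system.backup_list and system.backup_actions
  API_CREATE_INTEGRATION_TABLES: "true"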