timescale / helm-charts

Configuration and Documentation to run TimescaleDB in your Kubernetes cluster
Apache License 2.0

[ISSUE] Default values prevent setup of Azure backups #389

Open TobiasJacob opened 2 years ago

TobiasJacob commented 2 years ago

Describe the bug The default values.yaml file sets empty credentials for AWS S3 buckets, which are passed to pgbackrest via environment variables. Unfortunately, they prevent setting up Azure backups: as soon as pgbackrest finds an empty S3 environment variable (probably due to a bug), it complains about the variable being empty, even when repo1-type is set to azure.

To Reproduce Helm install with the following values:

helm install -f ./bugvals.yaml asdf timescaledb/timescaledb-single

clusterName: asdff-timescaledb
prometheus:
  enabled: true
resources:
  requests:
    memory: "1024Mi"
    cpu: "100m"
  limits:
    memory: "1024Mi"
    cpu: "500m"
persistentVolumes:
  data:
    size: "64Gi"
  wal:
    size: "16Gi"
loadBalancer:
  enabled: true
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: timescaledb
replicaCount: 3
backup:
  enabled: true
  pgBackRest:
    repo1-type: azure
    repo1-azure-account: mccstorageacct
    repo1-azure-container: timescaledb
    repo1-azure-key: omitted
    repo1-cipher-type: none
    repo1-path: /backup

secrets:
  credentials:
    PATRONI_SUPERUSER_PASSWORD: "omitted"

Then pgbackrest does not work even though the repo type is set to azure.

kubectl exec --stdin --tty asdf-timescaledb-0 -- /bin/bash 
Defaulted container "timescaledb" out of: timescaledb, pgbackrest, postgres-exporter, tstune (init)
postgres@asdf-timescaledb-0:~$ pgbackrest info
ERROR: [032]: environment variable 'repo1-s3-key' must have a value

The issue is caused by the empty environment variables:
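The distinction that trips pgbackrest up here is that a variable set to the empty string is still *set*, which is not the same as unset. A minimal sketch of the difference (plain POSIX shell, reusing the variable name from this chart):

```shell
#!/bin/sh
# A variable assigned the empty string is still present in the environment.
export PGBACKREST_REPO1_S3_KEY=
# ${VAR+x} expands to "x" if VAR is set at all, even to "".
if [ -n "${PGBACKREST_REPO1_S3_KEY+x}" ]; then
  echo "set (but empty)"
fi

# Only unset actually removes it, which is what pgbackrest needs here.
unset PGBACKREST_REPO1_S3_KEY
if [ -z "${PGBACKREST_REPO1_S3_KEY+x}" ]; then
  echo "unset"
fi
```

This is why `env | grep PGBACK` below still shows the S3 variables even though their values are blank.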

postgres@asdf-timescaledb-0:~$ env | grep PGBACK
PGBACKREST_REPO1_S3_KEY=
PGBACKREST_STANZA=poddb
PGBACKREST_REPO1_S3_ENDPOINT=s3.amazonaws.com
PGBACKREST_REPO1_S3_REGION=
PGBACKREST_CONFIG=/etc/pgbackrest/pgbackrest.conf
PGBACKREST_REPO1_S3_BUCKET=
PGBACKREST_REPO1_S3_KEY_SECRET=
postgres@asdf-timescaledb-0:~$ unset PGBACKREST_REPO1_S3_KEY
postgres@asdf-timescaledb-0:~$ unset PGBACKREST_REPO1_S3_ENDPOINT
postgres@asdf-timescaledb-0:~$ unset PGBACKREST_REPO1_S3_REGION
postgres@asdf-timescaledb-0:~$ unset PGBACKREST_REPO1_S3_BUCKET
postgres@asdf-timescaledb-0:~$ unset PGBACKREST_REPO1_S3_KEY_SECRET
postgres@asdf-timescaledb-0:~$ env | grep PGBACK
PGBACKREST_STANZA=poddb
PGBACKREST_CONFIG=/etc/pgbackrest/pgbackrest.conf
postgres@asdf-timescaledb-0:~$ pgbackrest info
(this command works now)

Adding this

  pgbackrest:
    PGBACKREST_REPO1_S3_REGION: null
    PGBACKREST_REPO1_S3_KEY: null
    PGBACKREST_REPO1_S3_KEY_SECRET: null
    PGBACKREST_REPO1_S3_BUCKET: null
    PGBACKREST_REPO1_S3_ENDPOINT: null

to the values file does not help either: once backups are enabled after the initial deployment of timescaledb, the secret that injects these env variables into the stateful set is no longer updated.
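A chart-side fix would be to render each S3 variable in the secret template only when its value is non-empty. A sketch of such a guard (the actual template and values path in this chart may differ, so treat the key names here as illustrative):

```
# Only emit the variable when a non-empty value was provided.
{{- if .Values.secrets.pgbackrest.PGBACKREST_REPO1_S3_KEY }}
PGBACKREST_REPO1_S3_KEY: {{ .Values.secrets.pgbackrest.PGBACKREST_REPO1_S3_KEY | b64enc }}
{{- end }}
```

With a guard like this, an Azure-only configuration would produce no S3 variables at all, and pgbackrest would never see a set-but-empty value.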

Expected behavior Defaulting the PGBACKREST_REPO1_S3_* values to null instead of the empty string, or not setting them at all, would avoid creating empty environment variables in the container and the very misleading error message they cause. It took me a while to figure out that empty env variables were making pgbackrest complain about S3 bucket credentials even though repo1-type was set to azure.

agronholm commented 2 years ago

I have forked the official timescaledb chart here: https://github.com/agronholm/timescaledb-kubernetes (see the installation instructions there). This is one of the issues the fork seeks to resolve. Let me know if it fixes the problem for you.