timescale / helm-charts

Configuration and Documentation to run TimescaleDB in your Kubernetes cluster
Apache License 2.0

resource requests and limits not applied correctly #539

Open shooit opened 1 year ago

shooit commented 1 year ago

What happened? Resource requests and limits always end up with the same value on the pod.

  1. If resources.requests IS set and resources.limits IS NOT set, then the pod has its limits set to the value of requests
  2. If resources.requests IS NOT set and resources.limits IS set, then the pod has its requests set to the value of limits
  3. If resources.requests IS set and resources.limits IS set, the pod still has its limits set to the value of requests (the configured limits are ignored)

Did you expect to see something different? (3) is the most troublesome for us, as we would expect the pod to get both requests and limits independently from the chart values.

For (2), we would expect requests to fall back to the cluster's defaults.

How to reproduce it (as minimally and precisely as possible): deploy cases (1), (2), and (3) with the following values.yaml settings and inspect the resulting pods:

```yaml
resources:
  requests:
    cpu: 1000m
    memory: 2Gi
  limits:
    cpu: 1500m
    memory: 3Gi
```
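For reference, cases (1) and (2) are the same file with one of the two sections removed; a sketch of the two variants (case (3) is the full file above):

```yaml
# Case (1): only requests set, limits omitted entirely
resources:
  requests:
    cpu: 1000m
    memory: 2Gi
```

```yaml
# Case (2): only limits set, requests omitted entirely
resources:
  limits:
    cpu: 1500m
    memory: 3Gi
```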

Environment

Client Version: v1.26.0
Kustomize Version: v4.5.7
Server Version: v1.24.5-gke.600

GKE autopilot cluster

cdktf v14.3 helm-provider v4.0.0

paulfantom commented 1 year ago

The timescaledb-single helm chart doesn't do any manipulation of the resources value and passes it directly to the StatefulSet (as seen here). The chart also doesn't set any default values that could conflict with it or cause the issue described here.
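A quick way to check this pass-through is to grep the rendered manifests for the resources stanzas and compare them against values.yaml. A minimal offline sketch (the here-doc stands in for real `helm template` output saved to a file; the filename is an assumption):

```shell
# Stand-in for `helm template ... > rendered.yaml` so the filter can be tried
# offline; in practice, run the same grep over the real rendered output.
cat > rendered.yaml <<'EOF'
        resources:
          limits:
            cpu: 1500m
            memory: 3Gi
          requests:
            cpu: 1000m
            memory: 2Gi
EOF
# Show each resources stanza together with its requests and limits.
grep -A 6 'resources:' rendered.yaml
```

Against a live cluster, the effective values the API server actually applied can be read back with kubectl (e.g. `kubectl get pod <pod> -o jsonpath='{.spec.containers[*].resources}'`), which is the place where a mutating admission controller would show up.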

Either way, I ran helm template with the suggested values.yaml. Below is the result of this test:

console output ```console $ cat values-override.yaml resources: requests: cpu: 1000m memory: 2Gi limits: cpu: 1500m memory: 3Gi $ helm template test . -f values.yaml -f values-override.yaml --- # Source: timescaledb-single/templates/serviceaccount-timescaledb.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. apiVersion: v1 kind: ServiceAccount metadata: name: test namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: rbac --- # Source: timescaledb-single/templates/configmap-patroni.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license.--- apiVersion: v1 kind: ConfigMap metadata: name: test-patroni namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: patroni data: patroni.yaml: | bootstrap: dcs: loop_wait: 10 maximum_lag_on_failover: 33554432 postgresql: parameters: archive_command: /etc/timescaledb/scripts/pgbackrest_archive.sh %p archive_mode: "on" archive_timeout: 1800s autovacuum_analyze_scale_factor: 0.02 autovacuum_max_workers: 10 autovacuum_naptime: 5s autovacuum_vacuum_cost_limit: 500 autovacuum_vacuum_scale_factor: 0.05 hot_standby: "on" log_autovacuum_min_duration: 1min log_checkpoints: "on" log_connections: "on" log_disconnections: "on" log_line_prefix: '%t [%p]: [%c-%l] %u@%d,app=%a [%e] ' log_lock_waits: "on" log_min_duration_statement: 1s log_statement: ddl max_connections: 100 max_prepared_transactions: 150 shared_preload_libraries: timescaledb,pg_stat_statements ssl: "on" ssl_cert_file: 
/etc/certificate/tls.crt ssl_key_file: /etc/certificate/tls.key tcp_keepalives_idle: 900 tcp_keepalives_interval: 100 temp_file_limit: 1GB timescaledb.passfile: ../.pgpass unix_socket_directories: /var/run/postgresql unix_socket_permissions: "0750" wal_level: hot_standby wal_log_hints: "on" use_pg_rewind: true use_slots: true retry_timeout: 10 ttl: 30 method: restore_or_initdb post_init: /etc/timescaledb/scripts/post_init.sh restore_or_initdb: command: | /etc/timescaledb/scripts/restore_or_initdb.sh --encoding=UTF8 --locale=C.UTF-8 keep_existing_recovery_conf: true kubernetes: role_label: role scope_label: cluster-name use_endpoints: true log: level: WARNING postgresql: authentication: replication: username: standby superuser: username: postgres basebackup: - waldir: /var/lib/postgresql/wal/pg_wal callbacks: on_reload: /etc/timescaledb/scripts/patroni_callback.sh on_restart: /etc/timescaledb/scripts/patroni_callback.sh on_role_change: /etc/timescaledb/scripts/patroni_callback.sh on_start: /etc/timescaledb/scripts/patroni_callback.sh on_stop: /etc/timescaledb/scripts/patroni_callback.sh create_replica_methods: - pgbackrest - basebackup listen: 0.0.0.0:5432 pg_hba: - local all postgres peer - local all all md5 - hostnossl all,replication all all reject - hostssl all all 127.0.0.1/32 md5 - hostssl all all ::1/128 md5 - hostssl replication standby all md5 - hostssl all all all md5 pgbackrest: command: /etc/timescaledb/scripts/pgbackrest_restore.sh keep_data: true no_master: true no_params: true recovery_conf: restore_command: /etc/timescaledb/scripts/pgbackrest_archive_get.sh %f "%p" use_unix_socket: true restapi: listen: 0.0.0.0:8008 ... 
--- # Source: timescaledb-single/templates/configmap-pgbackrest.yaml apiVersion: v1 kind: ConfigMap metadata: name: test-pgbackrest namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: pgbackrest data: pgbackrest.conf: | [global] compress-level=3 compress-type=lz4 process-max=4 repo1-cipher-type=none repo1-path=/ingress-nginx/test/ repo1-retention-diff=2 repo1-retention-full=2 repo1-s3-endpoint=s3.amazonaws.com repo1-s3-region=us-east-2 repo1-type=s3 spool-path=/var/run/postgresql start-fast=y [poddb] pg1-port=5432 pg1-host-user=postgres pg1-path=/var/lib/postgresql/data pg1-socket-path=/var/run/postgresql link-all=y [global:archive-push] [global:archive-get] ... --- # Source: timescaledb-single/templates/configmap-scripts.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license.--- apiVersion: v1 kind: ConfigMap metadata: name: test-scripts namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: scripts data: tstune.sh: |- #!/bin/sh set -eu # Exit if required variable is not set externally : "$TSTUNE_FILE" : "$WAL_VOLUME_SIZE" : "$DATA_VOLUME_SIZE" : "$RESOURCES_CPU_REQUESTS" : "$RESOURCES_MEMORY_REQUESTS" : "$RESOURCES_CPU_LIMIT" : "$RESOURCES_MEMORY_LIMIT" # Figure out how many cores are available CPUS="$RESOURCES_CPU_REQUESTS" if [ "$RESOURCES_CPU_REQUESTS" -eq 0 ]; then CPUS="${RESOURCES_CPU_LIMIT}" fi # Figure out how much memory is available MEMORY="$RESOURCES_MEMORY_REQUESTS" if [ "$RESOURCES_MEMORY_REQUESTS" -eq 0 ]; then MEMORY="${RESOURCES_MEMORY_LIMIT}" fi # Ensure tstune config file exists touch "${TSTUNE_FILE}" # 
Ensure tstune-generated config is included in postgresql.conf if [ -f "${PGDATA}/postgresql.base.conf" ] && ! grep "include_if_exists = '${TSTUNE_FILE}'" postgresql.base.conf -qxF; then echo "include_if_exists = '${TSTUNE_FILE}'" >> "${PGDATA}/postgresql.base.conf" fi # If there is a dedicated WAL Volume, we want to set max_wal_size to 60% of that volume # If there isn't a dedicated WAL Volume, we set it to 20% of the data volume if [ "${WAL_VOLUME_SIZE}" = "0" ]; then WALMAX="${DATA_VOLUME_SIZE}" WALPERCENT=20 else WALMAX="${WAL_VOLUME_SIZE}" WALPERCENT=60 fi WALMAX=$(numfmt --from=auto "${WALMAX}") # Wal segments are 16MB in size, in this way we get a "nice" number of the nearest # 16MB # walmax / 100 * walpercent / 16MB # below is a refactored with increased precision WALMAX=$(( WALMAX * WALPERCENT * 16 / 16777216 / 100 )) WALMIN=$(( WALMAX / 2 )) echo "max_wal_size=${WALMAX}MB" >> "${TSTUNE_FILE}" echo "min_wal_size=${WALMIN}MB" >> "${TSTUNE_FILE}" # Run tstune timescaledb-tune -quiet -conf-path "${TSTUNE_FILE}" -cpus "${CPUS}" -memory "${MEMORY}MB" -yes "$@" pgbackrest_archive.sh: |- #!/bin/sh # If no backup is configured, archive_command would normally fail. A failing archive_command on a cluster # is going to cause WAL to be kept around forever, meaning we'll fill up Volumes we have quite quickly. # # Therefore, if the backup is disabled, we always return exitcode 0 when archiving log() { echo "$(date '+%Y-%m-%d %H:%M:%S') - archive - $1" } [ -z "$1" ] && log "Usage: $0 " && exit 1 : "${ENV_FILE:=${HOME}/.pgbackrest_environment}" if [ -f "${ENV_FILE}" ]; then echo "Sourcing ${ENV_FILE}" . 
"${ENV_FILE}" fi # PGBACKREST_BACKUP_ENABLED variable is passed in StatefulSet template [ "${PGBACKREST_BACKUP_ENABLED}" = "true" ] || exit 0 exec pgbackrest --stanza=poddb archive-push "$@" pgbackrest_archive_get.sh: |- #!/bin/sh # PGBACKREST_BACKUP_ENABLED variable is passed in StatefulSet template [ "${PGBACKREST_BACKUP_ENABLED}" = "true" ] || exit 1 : "${ENV_FILE:=${HOME}/.pgbackrest_environment}" if [ -f "${ENV_FILE}" ]; then echo "Sourcing ${ENV_FILE}" . "${ENV_FILE}" fi exec pgbackrest --stanza=poddb archive-get "${1}" "${2}" pgbackrest_bootstrap.sh: |- #!/bin/sh set -e log() { echo "$(date '+%Y-%m-%d %H:%M:%S') - bootstrap - $1" } terminate() { log "Stopping" exit 1 } # If we don't catch these signals, and we're still waiting for PostgreSQL # to be ready, we will not respond at all to a regular shutdown request, # therefore, we explicitly terminate if we receive these signals. trap terminate TERM QUIT while ! pg_isready -q; do log "Waiting for PostgreSQL to become available" sleep 3 done # We'll be lazy; we wait for another while to allow the database to promote # to primary if it's the only one running sleep 10 # If we are the primary, we want to create/validate the backup stanza if [ "$(psql -c "SELECT pg_is_in_recovery()::text" -AtXq)" = "false" ]; then pgbackrest check || { log "Creating pgBackrest stanza" pgbackrest --stanza=poddb stanza-create --log-level-stderr=info || exit 1 log "Creating initial backup" pgbackrest --type=full backup || exit 1 } fi log "Starting pgBackrest api to listen for backup requests" exec python3 /scripts/pgbackrest-rest.py --stanza=poddb --loglevel=debug pgbackrest_restore.sh: | #!/bin/sh # PGBACKREST_BACKUP_ENABLED variable is passed in StatefulSet template [ "${PGBACKREST_BACKUP_ENABLED}" = "true" ] || exit 1 : "${ENV_FILE:=${HOME}/.pod_environment}" if [ -f "${ENV_FILE}" ]; then echo "Sourcing ${ENV_FILE}" . 
"${ENV_FILE}" fi # PGDATA and WALDIR are set in the StatefulSet template and are sourced from the ENV_FILE # PGDATA= # WALDIR= # A missing PGDATA points to Patroni removing a botched PGDATA, or manual # intervention. In this scenario, we need to recreate the DATA and WALDIRs # to keep pgBackRest happy [ -d "${PGDATA}" ] || install -o postgres -g postgres -d -m 0700 "${PGDATA}" [ -d "${WALDIR}" ] || install -o postgres -g postgres -d -m 0700 "${WALDIR}" exec pgbackrest --force --delta --log-level-console=detail restore restore_or_initdb.sh: | #!/bin/sh : "${ENV_FILE:=${HOME}/.pod_environment}" if [ -f "${ENV_FILE}" ]; then echo "Sourcing ${ENV_FILE}" . "${ENV_FILE}" fi log() { echo "$(date '+%Y-%m-%d %H:%M:%S') - restore_or_initdb - $1" } # PGDATA and WALDIR are set in the StatefulSet template and are sourced from the ENV_FILE # PGDATA= # WALDIR= # A missing PGDATA points to Patroni removing a botched PGDATA, or manual # intervention. In this scenario, we need to recreate the DATA and WALDIRs # to keep pgBackRest happy [ -d "${PGDATA}" ] || install -o postgres -g postgres -d -m 0700 "${PGDATA}" [ -d "${WALDIR}" ] || install -o postgres -g postgres -d -m 0700 "${WALDIR}" if [ "${BOOTSTRAP_FROM_BACKUP}" = "1" ]; then log "Attempting restore from backup" # we want to override the environment with the environment # shellcheck disable=SC2046 export $(env -i envdir /etc/pgbackrest/bootstrap env) > /dev/null # PGBACKREST_REPO1_PATH is set in the StatefulSet template and sourced from the ENV_FILE if [ -z "${PGBACKREST_REPO1_PATH}" ]; then log "Unconfigured repository path" cat << "__EOT__" TimescaleDB Single Helm Chart error: You should configure the bootstrapFromBackup in your Helm Chart section by explicitly setting the repo1-path to point to the backups. 
For more information, consult the admin guide: https://github.com/timescale/helm-charts/blob/main/charts/timescaledb-single/docs/admin-guide.md#bootstrap-from-backup __EOT__ exit 1 fi log "Listing available backup information" pgbackrest info EXITCODE=$? if [ ${EXITCODE} -ne 0 ]; then exit $EXITCODE fi pgbackrest --log-level-console=detail restore EXITCODE=$? if [ ${EXITCODE} -eq 0 ]; then log "pgBackRest restore finished succesfully, starting instance in recovery" # We want to ensure we do not overwrite a current backup repository with archives, therefore # we block archiving from succeeding until Patroni can takeover touch "${PGDATA}/recovery.signal" pg_ctl -D "${PGDATA}" start -o '--archive-command=/bin/false' while ! pg_isready -q; do log "Waiting for PostgreSQL to become available" sleep 3 done # It is not trivial to figure out to what point we should restore, pgBackRest # should be fetching WAL segments until the WAL is exhausted. We'll ask pgBackRest # what the Maximum Wal is that it currently has; as soon as we see that, we can consider # the restore to be done while true; do MAX_BACKUP_WAL="$(pgbackrest info --output=json | python3 -c "import json,sys;obj=json.load(sys.stdin); print(obj[0]['archive'][0]['max']);")" log "Testing whether WAL file ${MAX_BACKUP_WAL} has been restored ..." [ -f "${PGDATA}/pg_wal/${MAX_BACKUP_WAL}" ] && break sleep 30; done # At this point we know the final WAL archive has been restored, we should be done. log "The WAL file ${MAX_BACKUP_WAL} has been successully restored, shutting down instance" pg_ctl -D "${PGDATA}" promote pg_ctl -D "${PGDATA}" stop -m fast log "Handing over control to Patroni ..." 
else log "Bootstrap from backup failed" exit 1 fi else # Patroni attaches --scope and --datadir to the arguments, we need to strip them off as # initdb has no business with these parameters initdb_args="" for value in "$@" do case $value in "--scope"*) ;; "--datadir"*) ;; *) initdb_args="${initdb_args} $value" ;; esac done log "Invoking initdb" # shellcheck disable=SC2086 initdb --auth-local=peer --auth-host=md5 --pgdata="${PGDATA}" --waldir="${WALDIR}" ${initdb_args} fi echo "include_if_exists = '${TSTUNE_FILE}'" >> "${PGDATA}/postgresql.conf" post_init.sh: |- #!/bin/sh : "${ENV_FILE:=${HOME}/.pod_environment}" if [ -f "${ENV_FILE}" ]; then echo "Sourcing ${ENV_FILE}" . "${ENV_FILE}" fi log() { echo "$(date '+%Y-%m-%d %H:%M:%S') - post_init - $1" } log "Creating extension TimescaleDB in template1 and postgres databases" psql -d "$URL" <<__SQL__ \connect template1 -- As we're still only initializing, we cannot have synchronous_commit enabled just yet. SET synchronous_commit to 'off'; CREATE EXTENSION timescaledb; \connect postgres SET synchronous_commit to 'off'; CREATE EXTENSION timescaledb; __SQL__ # POSTGRES_TABLESPACES is a comma-separated list of tablespaces to create # variable is passed in StatefulSet template : "${POSTGRES_TABLESPACES:=""}" for tablespace in $POSTGRES_TABLESPACES do log "Creating tablespace ${tablespace}" tablespacedir="${PGDATA}/tablespaces/${tablespace}/data" psql -d "$URL" --set tablespace="${tablespace}" --set directory="${tablespacedir}" --set ON_ERROR_STOP=1 <<__SQL__ SET synchronous_commit to 'off'; CREATE TABLESPACE :"tablespace" LOCATION :'directory'; __SQL__ done # This directory may contain user defined post init steps for file in /etc/timescaledb/post_init.d/* do [ -d "$file" ] && continue [ ! -r "$file" ] && continue case "$file" in *.sh) if [ -x "$file" ]; then log "Call post init script [ $file ]" "$file" "$@" EXITCODE=$? else log "Source post init script [ $file ]" . "$file" EXITCODE=$? 
fi ;; *.sql) log "Apply post init sql [ $file ]" # Disable synchronous_commit since we're initializing PGOPTIONS="-c synchronous_commit=local" psql -d "$URL" -f "$file" EXITCODE=$? ;; *.sql.gz) log "Decompress and apply post init sql [ $file ]" gunzip -c "$file" | PGOPTIONS="-c synchronous_commit=local" psql -d "$URL" EXITCODE=$? ;; *) log "Ignore unknown post init file type [ $file ]" EXITCODE=0 ;; esac EXITCODE=$? if [ "$EXITCODE" != "0" ] then log "ERROR: post init script $file exited with exitcode $EXITCODE" exit $EXITCODE fi done # We exit 0 this script, otherwise the database initialization fails. exit 0 patroni_callback.sh: |- #!/bin/sh set -e : "${ENV_FILE:=${HOME}/.pod_environment}" if [ -f "${ENV_FILE}" ]; then echo "Sourcing ${ENV_FILE}" . "${ENV_FILE}" fi for suffix in "$1" all do CALLBACK="/etc/timescaledb/callbacks/${suffix}" if [ -f "${CALLBACK}" ] then "${CALLBACK}" "$@" fi done lifecycle_preStop.sql: |- -- Doing a checkpoint (at the primary and the current instance) before starting -- the shutdown process will speed up the CHECKPOINT that is part of the shutdown -- process and the recovery after the pod is rescheduled. -- -- We issue the CHECKPOINT at the primary always because: -- -- > Restartpoints can't be performed more frequently than checkpoints in the -- > master because restartpoints can only be performed at checkpoint records. -- https://www.postgresql.org/docs/current/wal-configuration.html -- -- While we're doing these preStop CHECKPOINTs we can still serve read/write -- queries to clients, whereas as soon as we initiate the shutdown, we terminate -- connections. -- -- This therefore reduces downtime for the clients, at the cost of increasing (slightly) -- the time to stop the pod, and reducing write performance on the primary. -- -- To further reduce downtime for clients, we will issue a switchover iff we are currently -- running as the primary. 
This again should be relatively fast, as we've just issued and -- waited for the CHECKPOINT to complete. -- -- This is quite a lot of logic and work in a preStop command; however, if the preStop command -- fails for whatever reason, the normal Pod shutdown will commence, so it is only able to -- improve stuff without being able to break stuff. -- (The $(hostname) inside the switchover call safeguards that we never accidentally -- switchover the wrong primary). \pset pager off \set ON_ERROR_STOP true \set hostname `hostname` \set dsn_fmt 'user=postgres host=%s application_name=lifecycle:preStop@%s connect_timeout=5 options=''-c log_min_duration_statement=0''' SELECT pg_is_in_recovery() AS in_recovery, format(:'dsn_fmt', patroni_scope, :'hostname') AS primary_dsn, format(:'dsn_fmt', '/var/run/postgresql', :'hostname') AS local_dsn FROM current_setting('cluster_name') AS cs(patroni_scope) \gset \timing on \set ECHO queries -- There should be a CHECKPOINT at the primary \if :in_recovery \connect :"primary_dsn" CHECKPOINT; \endif -- There should also be a CHECKPOINT locally, -- for the primary, this may mean we do a double checkpoint, -- but the second one would be cheap anyway, so we leave that as is \connect :"local_dsn" SELECT 'Issuing checkpoint'; CHECKPOINT; \if :in_recovery SELECT 'We are a replica: Successfully invoked checkpoints at the primary and locally.'; \else SELECT 'We are a primary: Successfully invoked checkpoints, now issuing a switchover.'; \! curl -s http://localhost:8008/switchover -XPOST -d '{"leader": "$(hostname)"}' \endif ... --- # Source: timescaledb-single/templates/role-timescaledb.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. 
apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: test namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: rbac rules: - apiGroups: [""] resources: ["configmaps"] verbs: - create - get - list - patch - update - watch # delete is required only for 'patronictl remove' - delete - apiGroups: [""] resources: - endpoints - endpoints/restricted verbs: - create - get - patch - update # the following three privileges are necessary only when using endpoints - list - watch # delete is required only for for 'patronictl remove' - delete - apiGroups: [""] resources: ["pods"] verbs: - get - list - patch - update - watch --- # Source: timescaledb-single/templates/rolebinding-timescaledb.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: test namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: rbac subjects: - kind: ServiceAccount name: test roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: test --- # Source: timescaledb-single/templates/svc-timescaledb-config.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. 
apiVersion: v1 kind: Service metadata: name: test-config namespace: ingress-nginx labels: component: patroni app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: patroni spec: selector: app: test cluster-name: test type: ClusterIP clusterIP: None ports: - name: patroni port: 8008 protocol: TCP --- # Source: timescaledb-single/templates/svc-timescaledb-replica.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. apiVersion: v1 kind: Service metadata: name: test-replica namespace: ingress-nginx labels: component: postgres role: replica app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: postgres spec: selector: app: test cluster-name: test role: replica type: ClusterIP ports: - name: postgresql # This always defaults to 5432 port: 5432 targetPort: postgresql protocol: TCP --- # Source: timescaledb-single/templates/svc-timescaledb.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. 
apiVersion: v1 kind: Service metadata: name: test namespace: ingress-nginx labels: role: master app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: timescaledb spec: selector: app: test cluster-name: test role: master type: ClusterIP ports: - name: postgresql # This always defaults to 5432 port: 5432 targetPort: postgresql protocol: TCP --- # Source: timescaledb-single/templates/statefulset-timescaledb.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. apiVersion: apps/v1 kind: StatefulSet metadata: name: test namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: timescaledb spec: serviceName: test replicas: 3 podManagementPolicy: OrderedReady updateStrategy: type: RollingUpdate selector: matchLabels: app: test release: test template: metadata: name: test labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: timescaledb spec: serviceAccountName: test securityContext: # The postgres user inside the TimescaleDB image has uid=1000. 
# This configuration ensures the permissions of the mounts are suitable fsGroup: 1000 runAsGroup: 1000 runAsNonRoot: true runAsUser: 1000 initContainers: - name: tstune securityContext: allowPrivilegeEscalation: false image: "timescale/timescaledb-ha:pg14.5-ts2.8.1-p1" env: - name: TSTUNE_FILE value: /var/run/postgresql/timescaledb.conf - name: WAL_VOLUME_SIZE value: 1Gi - name: DATA_VOLUME_SIZE value: 2Gi - name: RESOURCES_CPU_REQUESTS valueFrom: resourceFieldRef: containerName: timescaledb resource: requests.cpu divisor: "1" - name: RESOURCES_MEMORY_REQUESTS valueFrom: resourceFieldRef: containerName: timescaledb resource: requests.memory divisor: 1Mi - name: RESOURCES_CPU_LIMIT valueFrom: resourceFieldRef: containerName: timescaledb resource: limits.cpu divisor: "1" - name: RESOURCES_MEMORY_LIMIT valueFrom: resourceFieldRef: containerName: timescaledb resource: limits.memory divisor: 1Mi # Command below will run the timescaledb-tune utility and configure min/max wal size based on PVCs size command: - sh - "-c" - '/etc/timescaledb/scripts/tstune.sh ' volumeMounts: - name: socket-directory mountPath: /var/run/postgresql - name: timescaledb-scripts mountPath: /etc/timescaledb/scripts readOnly: true resources: limits: cpu: 1500m memory: 3Gi requests: cpu: 1000m memory: 2Gi # Issuing the final checkpoints on a busy database may take considerable time. # Unfinished checkpoints will require more time during startup, so the tradeoff # here is time spent in shutdown/time spent in startup. # We choose shutdown here, especially as during the largest part of the shutdown # we can still serve clients. 
terminationGracePeriodSeconds: 600 containers: - name: timescaledb securityContext: allowPrivilegeEscalation: false image: "timescale/timescaledb-ha:pg14.5-ts2.8.1-p1" imagePullPolicy: Always lifecycle: preStop: exec: command: - psql - -X - --file - "/etc/timescaledb/scripts/lifecycle_preStop.sql" # When reusing an already existing volume it sometimes happens that the permissions # of the PGDATA and/or wal directory are incorrect. To guard against this, we always correctly # set the permissons of these directories before we hand over to Patroni. # We also create all the tablespaces that are defined, to ensure a smooth restore/recovery on a # pristine set of Volumes. # As PostgreSQL requires to have full control over the permissions of the tablespace directories, # we create a subdirectory "data" in every tablespace mountpoint. The full path of every tablespace # therefore always ends on "/data". # By creating a .pgpass file in the $HOME directory, we expose the superuser password # to processes that may not have it in their environment (like the preStop lifecycle hook). # To ensure Patroni will not mingle with this file, we give Patroni its own pgpass file. # As these files are in the $HOME directory, they are only available to *this* container, # and they are ephemeral. command: - /bin/bash - "-c" - | install -o postgres -g postgres -d -m 0700 "/var/lib/postgresql/data" "/var/lib/postgresql/wal/pg_wal" || exit 1 TABLESPACES="" for tablespace in ; do install -o postgres -g postgres -d -m 0700 "/var/lib/postgresql/tablespaces/${tablespace}/data" done # Environment variables can be read by regular users of PostgreSQL. Especially in a Kubernetes # context it is likely that some secrets are part of those variables. # To ensure we expose as little as possible to the underlying PostgreSQL instance, we have a list # of allowed environment variable patterns to retain. # # We need the KUBERNETES_ environment variables for the native Kubernetes support of Patroni to work. 
# # NB: Patroni will remove all PATRONI_.* environment variables before starting PostgreSQL # We store the current environment, as initscripts, callbacks, archive_commands etc. may require # to have the environment available to them set -o posix export -p > "${HOME}/.pod_environment" export -p | grep PGBACKREST > "${HOME}/.pgbackrest_environment" for UNKNOWNVAR in $(env | awk -F '=' '!/^(PATRONI_.*|HOME|PGDATA|PGHOST|LC_.*|LANG|PATH|KUBERNETES_SERVICE_.*|AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE)=/ {print $1}') do unset "${UNKNOWNVAR}" done touch /var/run/postgresql/timescaledb.conf touch /var/run/postgresql/wal_status echo "*:*:*:postgres:${PATRONI_SUPERUSER_PASSWORD}" >> ${HOME}/.pgpass chmod 0600 ${HOME}/.pgpass export PATRONI_POSTGRESQL_PGPASS="${HOME}/.pgpass.patroni" exec patroni /etc/timescaledb/patroni.yaml env: # We use mixed case environment variables for Patroni User management, # as the variable themselves are documented to be PATRONI__OPTIONS. # Where possible, we want to have lowercase usernames in PostgreSQL as more complex postgres usernames # requiring quoting to be done in certain contexts, which many tools do not do correctly, or even at all. 
# https://patroni.readthedocs.io/en/latest/ENVIRONMENT.html#bootstrap-configuration - name: PATRONI_admin_OPTIONS value: createrole,createdb - name: PATRONI_REPLICATION_USERNAME value: standby # To specify the PostgreSQL and Rest API connect addresses we need # the PATRONI_KUBERNETES_POD_IP to be available as a bash variable, so we can compose an # IP:PORT address later on - name: PATRONI_KUBERNETES_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: PATRONI_POSTGRESQL_CONNECT_ADDRESS value: "$(PATRONI_KUBERNETES_POD_IP):5432" - name: PATRONI_RESTAPI_CONNECT_ADDRESS value: "$(PATRONI_KUBERNETES_POD_IP):8008" - name: PATRONI_KUBERNETES_PORTS value: '[{"name": "postgresql", "port": 5432}]' - name: PATRONI_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: PATRONI_POSTGRESQL_DATA_DIR value: "/var/lib/postgresql/data" - name: PATRONI_KUBERNETES_NAMESPACE value: ingress-nginx - name: PATRONI_KUBERNETES_LABELS value: "{app: test, cluster-name: test, release: test}" - name: PATRONI_SCOPE value: test - name: PGBACKREST_CONFIG value: /etc/pgbackrest/pgbackrest.conf # PGDATA and PGHOST are not required to let Patroni/PostgreSQL run correctly, # but for interactive sessions, callbacks and PostgreSQL tools they should be correct. - name: PGDATA value: "$(PATRONI_POSTGRESQL_DATA_DIR)" - name: PGHOST value: "/var/run/postgresql" - name: WALDIR value: "/var/lib/postgresql/wal/pg_wal" - name: BOOTSTRAP_FROM_BACKUP value: "0" - name: PGBACKREST_BACKUP_ENABLED value: "false" - name: TSTUNE_FILE value: /var/run/postgresql/timescaledb.conf # pgBackRest is also called using the archive_command if the backup is enabled. # this script will also need access to the environment variables specified for # the backup. 
This can be removed once we do not directly invoke pgBackRest # from inside the TimescaleDB container anymore envFrom: - secretRef: name: "test-credentials" optional: false - secretRef: name: "test-pgbackrest" optional: true ports: - containerPort: 8008 name: patroni - containerPort: 5432 name: postgresql readinessProbe: exec: command: - pg_isready - -h - /var/run/postgresql initialDelaySeconds: 5 periodSeconds: 30 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 6 volumeMounts: - name: storage-volume mountPath: "/var/lib/postgresql" subPath: "" - name: wal-volume mountPath: "/var/lib/postgresql/wal" subPath: "" - mountPath: /etc/timescaledb/patroni.yaml subPath: patroni.yaml name: patroni-config readOnly: true - mountPath: /etc/timescaledb/scripts name: timescaledb-scripts readOnly: true - mountPath: "/etc/timescaledb/post_init.d" name: post-init readOnly: true - mountPath: /etc/certificate name: certificate readOnly: true - name: socket-directory mountPath: /var/run/postgresql - mountPath: /etc/pgbackrest name: pgbackrest readOnly: true - mountPath: /etc/pgbackrest/bootstrap name: pgbackrest-bootstrap readOnly: true resources: limits: cpu: 1500m memory: 3Gi requests: cpu: 1000m memory: 2Gi affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: topologyKey: "kubernetes.io/hostname" labelSelector: matchLabels: app: test release: "test" cluster-name: test - weight: 50 podAffinityTerm: topologyKey: failure-domain.beta.kubernetes.io/zone labelSelector: matchLabels: app: test release: "test" cluster-name: test volumes: - name: socket-directory emptyDir: {} - name: patroni-config configMap: name: test-patroni - name: timescaledb-scripts configMap: name: test-scripts defaultMode: 488 # 0750 permissions - name: post-init projected: defaultMode: 0750 sources: - configMap: name: custom-init-scripts optional: true - secret: name: custom-secret-scripts optional: true - name: pgbouncer configMap: name: test-pgbouncer 
defaultMode: 416 # 0640 permissions optional: true - name: pgbackrest configMap: name: test-pgbackrest defaultMode: 416 # 0640 permissions optional: true - name: certificate secret: secretName: "test-certificate" defaultMode: 416 # 0640 permissions - name: pgbackrest-bootstrap secret: secretName: pgbackrest-bootstrap optional: True volumeClaimTemplates: - metadata: name: storage-volume annotations: labels: app: test release: test heritage: Helm cluster-name: test purpose: data-directory spec: accessModes: - ReadWriteOnce resources: requests: storage: "2Gi" - metadata: name: wal-volume annotations: labels: app: test release: test heritage: Helm cluster-name: test purpose: wal-directory spec: accessModes: - ReadWriteOnce resources: requests: storage: "1Gi" --- # Source: timescaledb-single/templates/configmap-pgbackrest.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. --- # Source: timescaledb-single/templates/configmap-pgbouncer.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. --- # Source: timescaledb-single/templates/pgbackrest.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. --- # Source: timescaledb-single/templates/secret-certificate.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. 
apiVersion: v1 kind: Secret metadata: name: "test-certificate" namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: certificates annotations: "helm.sh/hook": pre-install,post-delete "helm.sh/hook-weight": "0" type: kubernetes.io/tls stringData: tls.crt: "-----BEGIN CERTIFICATE-----\nMIIDCTCCAfGgAwIBAgIQXT1DGBxPT+WbvzFPN8UdZDANBgkqhkiG9w0BAQsFADAP\nMQ0wCwYDVQQDEwR0ZXN0MB4XDTIzMDEwNjA5MjMxMFoXDTI4MDEwNjA5MjMxMFow\nDzENMAsGA1UEAxMEdGVzdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB\nAMpzK6ZmlgHn0c5e6Ha/THlrPsvsFKoeT//DRNH+xY0xPZFH9QQKGUIfptDtSBjf\nznt+rGKu/YDjI2kcnIYeskEGAkL/6T8gLMnE63YMUXEkZWNTSxp4X1+K4EF545Sv\nUljksqSMhCKhwpe71gNsENQZkaJyrpAtZcmyc8A1QDsIt9rHo/0JJoBGU29pJq75\nMlkKa9kU2KBDRpknBMreOAXGzQ6qAsGOHp9jcse3G96szsWzM6v0HNOhhjxqi86F\nWTebmtexEu2FrTHx71wdwjpvZsrlPaxU6VqJcLksAmdG3WpT8NIZ8cCj+FfgbXzD\naDiIw0HZUaobG6K1x4dNOTsCAwEAAaNhMF8wDgYDVR0PAQH/BAQDAgKkMB0GA1Ud\nJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud\nDgQWBBSNdwBhRgcahTx4M4A44wjTq+3c3jANBgkqhkiG9w0BAQsFAAOCAQEAW7Q7\nxHBOzsMYlbCQw9FpOXDrzmK6mLpiibf++bsJrgB+yAqWFXlGs3i+HlLjLmYAyh6/\ntyrG2QlfNDeht+dAPc2Qfbt3f+MV/9XonFNh0/CwmlcK4z6jvU7ce0tnr7p3TiMQ\nC2rMrymx5vA9VkM7OMuQUEGbQvRIIKw/qnQ9YIqpd75nV/Gs6bGFIgjZesgn9QXH\nQEWt9A9tOq58UbOV0R4hZhjeXfmmGF0vSepXwEeHSUmG8KRc05ulzMECiY7ztdeR\nRZHCyZflKXDHNf5sQEo42ADfqwVoa/pEeJAobZOF59RkPIAJtG+A+4UwiANCs5Zc\nXGB859cg7/cmTp99gw==\n-----END CERTIFICATE-----\n" tls.key: "-----BEGIN RSA PRIVATE 
KEY-----\nMIIEpAIBAAKCAQEAynMrpmaWAefRzl7odr9MeWs+y+wUqh5P/8NE0f7FjTE9kUf1\nBAoZQh+m0O1IGN/Oe36sYq79gOMjaRychh6yQQYCQv/pPyAsycTrdgxRcSRlY1NL\nGnhfX4rgQXnjlK9SWOSypIyEIqHCl7vWA2wQ1BmRonKukC1lybJzwDVAOwi32sej\n/QkmgEZTb2kmrvkyWQpr2RTYoENGmScEyt44BcbNDqoCwY4en2Nyx7cb3qzOxbMz\nq/Qc06GGPGqLzoVZN5ua17ES7YWtMfHvXB3COm9myuU9rFTpWolwuSwCZ0bdalPw\n0hnxwKP4V+BtfMNoOIjDQdlRqhsborXHh005OwIDAQABAoIBAHkUgJq46Cajmxuu\nL6I1r2s+9QPJYmKMVpRFGTfvA//53zSwsJ2F3K1reL2j7GbUFA5QKJGszvjy4A7R\nidu9KCczjM69d6bFe4QBPkIQA/WDKxBIlLZ0H7ZovM7sM2yNntaDkURQtgZwcI2H\nTewmCbqQwEVECZs5S5NiI1BliNDEyIETc34XXOnzNCVqMv3qSkwOhn5hMZAnNHNH\nMXVFumkoXAmla/fE0fatyiDKjqToSfbS2GPVzU7IJqe85E7YanXbDAUsAE2gvMVw\nY9Jvla8v+cAfcvHKqCoCnJjDY+K+LuJkWdmzCHLhKEKTFQM6otlMX2mw4U69fHHc\nnNS4wOECgYEA+oVd53uH/Lqa8FLA2IoMr3T/gKZdbpkAf1vpqJxJEe0XLchFlifi\nk3r5D2LDWiF79iqy6oZPdXuyBQcJHlBWAvE1GmZylUiEGvynzO+2l6WjlXIxIQDF\nHvow03elfn+Fs1SeiedTdkCfl5J8d2fW9FbV8h7f9STeaSV9o91bWK0CgYEAzuCp\nAfZlUf8KGmbWYyXog0RwXz1uB3MS/dMRzgGMWMKjTzbqq69FOewreUU0p79/LTRS\n+hMvPW2wMbXkmgMGwZUG8y60C093ORTISM8U2GZPCb/+kx0DOYIbmFk4lG2164Lq\npLc5hf1oO9unQyK/Aeyftv9+hxSr/ttrZ8NoDocCgYB2bXePx1jowzodY7FgbBpF\nE3T5VywR7WhLzKJvn7n3LHJppSQoMKCugVKd0F1zDSMxosvDjEyhyCDGuaW429dd\nOrOU0FtYcNhqfYfBnIxfseDb9Ah/hoKo+zL7tLLaUuRceyMbI+zTmQcYuxn1xHPc\nO/SVqbzLgWtWn29+eFUHXQKBgQCuMY89fso7q8NHDdZxL8dDWIpCN4iBL00LewFf\n8//H8UPvfG9G1tM0fX7xous+YElmt8sylJrPX5/fi6gMYoX61FBAzc9+QpBB+RTX\n8b48pJDixc5G80P21W4E7wNsP6DRyK9ouHrwLrroxABn0EcDCMpHHYTdmvNkKj+a\n5Hem2wKBgQDxaDz2J5tpJrzq+uSQRmL8Smd/Cls5NORkzXOcBHwh2TaIs3tZVr5f\ncANiPaYtVUdhBpH9KDUEyofhjJCtGl3qnt5FmHhNaww8eQ0gwuKWvYIphHowXBj2\nx3lq70y4qFC7gQ04I9aCNEB1N1bzNMjPzu9/YPqKyhk/Dq3GQLPkyA==\n-----END RSA PRIVATE KEY-----\n" ... --- # Source: timescaledb-single/templates/secret-patroni.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. 
apiVersion: v1 kind: Secret metadata: name: "test-credentials" namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: patroni annotations: "helm.sh/hook": pre-install,post-delete "helm.sh/hook-weight": "0" "helm.sh/resource-policy": keep type: Opaque stringData: PATRONI_SUPERUSER_PASSWORD: "lRN5H13hdFKyppP2" PATRONI_REPLICATION_PASSWORD: "34bAdRZQFFrGwqjV" PATRONI_admin_PASSWORD: "yNeuXdLDtBH8YoAn" ... --- # Source: timescaledb-single/templates/secret-pgbackrest.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. apiVersion: v1 kind: Secret metadata: name: "test-pgbackrest" namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: pgbackrest annotations: "helm.sh/hook": pre-install,post-delete "helm.sh/hook-weight": "0" "helm.sh/resource-policy": keep type: Opaque stringData: PGBACKREST_REPO1_S3_BUCKET: "" PGBACKREST_REPO1_S3_ENDPOINT: s3.amazonaws.com PGBACKREST_REPO1_S3_KEY: "" PGBACKREST_REPO1_S3_KEY_SECRET: "" PGBACKREST_REPO1_S3_REGION: "" ... --- # Source: timescaledb-single/templates/job-update-patroni.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. 
apiVersion: batch/v1 kind: Job metadata: name: "test-patroni-lg" namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: patroni annotations: "helm.sh/hook": post-upgrade "helm.sh/hook-delete-policy": hook-succeeded spec: activeDeadlineSeconds: 120 template: metadata: labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 spec: restartPolicy: OnFailure containers: - name: test-patch-patroni-config image: curlimages/curl command: ["/bin/sh"] # Patching the Patroni configuration is good, however it should not block an upgrade from going through # Therefore we ensure we always exit with an exitcode 0, so that Helm is satisfied with this upgrade job args: - '-c' - | /usr/bin/curl --connect-timeout 30 --include --request PATCH --data \ "{\"loop_wait\":10,\"maximum_lag_on_failover\":33554432,\"postgresql\":{\"parameters\":{\"archive_command\":\"/etc/timescaledb/scripts/pgbackrest_archive.sh %p\",\"archive_mode\":\"on\",\"archive_timeout\":\"1800s\",\"autovacuum_analyze_scale_factor\":0.02,\"autovacuum_max_workers\":10,\"autovacuum_naptime\":\"5s\",\"autovacuum_vacuum_cost_limit\":500,\"autovacuum_vacuum_scale_factor\":0.05,\"hot_standby\":\"on\",\"log_autovacuum_min_duration\":\"1min\",\"log_checkpoints\":\"on\",\"log_connections\":\"on\",\"log_disconnections\":\"on\",\"log_line_prefix\":\"%t [%p]: [%c-%l] %u@%d,app=%a [%e] 
\",\"log_lock_waits\":\"on\",\"log_min_duration_statement\":\"1s\",\"log_statement\":\"ddl\",\"max_connections\":100,\"max_prepared_transactions\":150,\"shared_preload_libraries\":\"timescaledb,pg_stat_statements\",\"ssl\":\"on\",\"ssl_cert_file\":\"/etc/certificate/tls.crt\",\"ssl_key_file\":\"/etc/certificate/tls.key\",\"tcp_keepalives_idle\":900,\"tcp_keepalives_interval\":100,\"temp_file_limit\":\"1GB\",\"timescaledb.passfile\":\"../.pgpass\",\"unix_socket_directories\":\"/var/run/postgresql\",\"unix_socket_permissions\":\"0750\",\"wal_level\":\"hot_standby\",\"wal_log_hints\":\"on\"},\"use_pg_rewind\":true,\"use_slots\":true},\"retry_timeout\":10,\"ttl\":30}" \
  "http://test-config:8008/config"
exit 0
```

Relevant part of STS copied from above:

```yaml
        - mountPath: /etc/pgbackrest/bootstrap
          name: pgbackrest-bootstrap
          readOnly: true
        resources:
          limits:
            cpu: 1500m
            memory: 3Gi
          requests:
            cpu: 1000m
            memory: 2Gi
      affinity:
        podAntiAffinity:
```
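Since the chart passes `resources` through to the StatefulSet verbatim, any difference between the rendered template and the running pod must be introduced inside the cluster (for example by a mutating admission webhook). A quick, hypothetical helper to pin that down — the dicts below mirror the `values.yaml` from the report, and an `actual` pod whose limits were overwritten with the requests, as described in case (3):

```python
def diff_resources(desired: dict, actual: dict) -> list:
    """List human-readable mismatches between the resources requested in
    values.yaml and the resources found on the running pod."""
    mismatches = []
    for section in ("requests", "limits"):
        for key, want in desired.get(section, {}).items():
            got = actual.get(section, {}).get(key)
            if got != want:
                mismatches.append(f"{section}.{key}: wanted {want}, pod has {got}")
    return mismatches

# values.yaml from the report vs. a pod whose limits were clamped to the
# requests (the behaviour described in case (3) of the issue)
desired = {"requests": {"cpu": "1000m", "memory": "2Gi"},
           "limits":   {"cpu": "1500m", "memory": "3Gi"}}
actual  = {"requests": {"cpu": "1000m", "memory": "2Gi"},
           "limits":   {"cpu": "1000m", "memory": "2Gi"}}
print(diff_resources(desired, actual))
# → ['limits.cpu: wanted 1500m, pod has 1000m', 'limits.memory: wanted 3Gi, pod has 2Gi']
```

If this diff is non-empty while `helm template` shows the correct values, the chart is not the component rewriting them.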

As seen above, the resources are set properly. I also tried to reproduce the issue by setting just the limits, as seen below. This also resulted in the correct behavior of setting only the limits:

console output ```console $ cat values-override.yaml resources: limits: cpu: 1500m memory: 3Gi ---------------------------------------------------------------------------------------------------------------------------------------------------------- $ helm template test . -f values.yaml -f values-override.yaml --- # Source: timescaledb-single/templates/serviceaccount-timescaledb.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. apiVersion: v1 kind: ServiceAccount metadata: name: test namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: rbac --- # Source: timescaledb-single/templates/configmap-patroni.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license.--- apiVersion: v1 kind: ConfigMap metadata: name: test-patroni namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: patroni data: patroni.yaml: | bootstrap: dcs: loop_wait: 10 maximum_lag_on_failover: 33554432 postgresql: parameters: archive_command: /etc/timescaledb/scripts/pgbackrest_archive.sh %p archive_mode: "on" archive_timeout: 1800s autovacuum_analyze_scale_factor: 0.02 autovacuum_max_workers: 10 autovacuum_naptime: 5s autovacuum_vacuum_cost_limit: 500 autovacuum_vacuum_scale_factor: 0.05 hot_standby: "on" log_autovacuum_min_duration: 1min log_checkpoints: "on" log_connections: "on" log_disconnections: "on" log_line_prefix: '%t [%p]: [%c-%l] %u@%d,app=%a [%e] ' log_lock_waits: "on" log_min_duration_statement: 1s log_statement: ddl 
max_connections: 100 max_prepared_transactions: 150 shared_preload_libraries: timescaledb,pg_stat_statements ssl: "on" ssl_cert_file: /etc/certificate/tls.crt ssl_key_file: /etc/certificate/tls.key tcp_keepalives_idle: 900 tcp_keepalives_interval: 100 temp_file_limit: 1GB timescaledb.passfile: ../.pgpass unix_socket_directories: /var/run/postgresql unix_socket_permissions: "0750" wal_level: hot_standby wal_log_hints: "on" use_pg_rewind: true use_slots: true retry_timeout: 10 ttl: 30 method: restore_or_initdb post_init: /etc/timescaledb/scripts/post_init.sh restore_or_initdb: command: | /etc/timescaledb/scripts/restore_or_initdb.sh --encoding=UTF8 --locale=C.UTF-8 keep_existing_recovery_conf: true kubernetes: role_label: role scope_label: cluster-name use_endpoints: true log: level: WARNING postgresql: authentication: replication: username: standby superuser: username: postgres basebackup: - waldir: /var/lib/postgresql/wal/pg_wal callbacks: on_reload: /etc/timescaledb/scripts/patroni_callback.sh on_restart: /etc/timescaledb/scripts/patroni_callback.sh on_role_change: /etc/timescaledb/scripts/patroni_callback.sh on_start: /etc/timescaledb/scripts/patroni_callback.sh on_stop: /etc/timescaledb/scripts/patroni_callback.sh create_replica_methods: - pgbackrest - basebackup listen: 0.0.0.0:5432 pg_hba: - local all postgres peer - local all all md5 - hostnossl all,replication all all reject - hostssl all all 127.0.0.1/32 md5 - hostssl all all ::1/128 md5 - hostssl replication standby all md5 - hostssl all all all md5 pgbackrest: command: /etc/timescaledb/scripts/pgbackrest_restore.sh keep_data: true no_master: true no_params: true recovery_conf: restore_command: /etc/timescaledb/scripts/pgbackrest_archive_get.sh %f "%p" use_unix_socket: true restapi: listen: 0.0.0.0:8008 ... 
--- # Source: timescaledb-single/templates/configmap-pgbackrest.yaml apiVersion: v1 kind: ConfigMap metadata: name: test-pgbackrest namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: pgbackrest data: pgbackrest.conf: | [global] compress-level=3 compress-type=lz4 process-max=4 repo1-cipher-type=none repo1-path=/ingress-nginx/test/ repo1-retention-diff=2 repo1-retention-full=2 repo1-s3-endpoint=s3.amazonaws.com repo1-s3-region=us-east-2 repo1-type=s3 spool-path=/var/run/postgresql start-fast=y [poddb] pg1-port=5432 pg1-host-user=postgres pg1-path=/var/lib/postgresql/data pg1-socket-path=/var/run/postgresql link-all=y [global:archive-push] [global:archive-get] ... --- # Source: timescaledb-single/templates/configmap-scripts.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license.--- apiVersion: v1 kind: ConfigMap metadata: name: test-scripts namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: scripts data: tstune.sh: |- #!/bin/sh set -eu # Exit if required variable is not set externally : "$TSTUNE_FILE" : "$WAL_VOLUME_SIZE" : "$DATA_VOLUME_SIZE" : "$RESOURCES_CPU_REQUESTS" : "$RESOURCES_MEMORY_REQUESTS" : "$RESOURCES_CPU_LIMIT" : "$RESOURCES_MEMORY_LIMIT" # Figure out how many cores are available CPUS="$RESOURCES_CPU_REQUESTS" if [ "$RESOURCES_CPU_REQUESTS" -eq 0 ]; then CPUS="${RESOURCES_CPU_LIMIT}" fi # Figure out how much memory is available MEMORY="$RESOURCES_MEMORY_REQUESTS" if [ "$RESOURCES_MEMORY_REQUESTS" -eq 0 ]; then MEMORY="${RESOURCES_MEMORY_LIMIT}" fi # Ensure tstune config file exists touch "${TSTUNE_FILE}" # 
Ensure tstune-generated config is included in postgresql.conf if [ -f "${PGDATA}/postgresql.base.conf" ] && ! grep "include_if_exists = '${TSTUNE_FILE}'" postgresql.base.conf -qxF; then echo "include_if_exists = '${TSTUNE_FILE}'" >> "${PGDATA}/postgresql.base.conf" fi # If there is a dedicated WAL Volume, we want to set max_wal_size to 60% of that volume # If there isn't a dedicated WAL Volume, we set it to 20% of the data volume if [ "${WAL_VOLUME_SIZE}" = "0" ]; then WALMAX="${DATA_VOLUME_SIZE}" WALPERCENT=20 else WALMAX="${WAL_VOLUME_SIZE}" WALPERCENT=60 fi WALMAX=$(numfmt --from=auto "${WALMAX}") # Wal segments are 16MB in size, in this way we get a "nice" number of the nearest # 16MB # walmax / 100 * walpercent / 16MB # below is a refactored with increased precision WALMAX=$(( WALMAX * WALPERCENT * 16 / 16777216 / 100 )) WALMIN=$(( WALMAX / 2 )) echo "max_wal_size=${WALMAX}MB" >> "${TSTUNE_FILE}" echo "min_wal_size=${WALMIN}MB" >> "${TSTUNE_FILE}" # Run tstune timescaledb-tune -quiet -conf-path "${TSTUNE_FILE}" -cpus "${CPUS}" -memory "${MEMORY}MB" -yes "$@" pgbackrest_archive.sh: |- #!/bin/sh # If no backup is configured, archive_command would normally fail. A failing archive_command on a cluster # is going to cause WAL to be kept around forever, meaning we'll fill up Volumes we have quite quickly. # # Therefore, if the backup is disabled, we always return exitcode 0 when archiving log() { echo "$(date '+%Y-%m-%d %H:%M:%S') - archive - $1" } [ -z "$1" ] && log "Usage: $0 " && exit 1 : "${ENV_FILE:=${HOME}/.pgbackrest_environment}" if [ -f "${ENV_FILE}" ]; then echo "Sourcing ${ENV_FILE}" . 
"${ENV_FILE}" fi # PGBACKREST_BACKUP_ENABLED variable is passed in StatefulSet template [ "${PGBACKREST_BACKUP_ENABLED}" = "true" ] || exit 0 exec pgbackrest --stanza=poddb archive-push "$@" pgbackrest_archive_get.sh: |- #!/bin/sh # PGBACKREST_BACKUP_ENABLED variable is passed in StatefulSet template [ "${PGBACKREST_BACKUP_ENABLED}" = "true" ] || exit 1 : "${ENV_FILE:=${HOME}/.pgbackrest_environment}" if [ -f "${ENV_FILE}" ]; then echo "Sourcing ${ENV_FILE}" . "${ENV_FILE}" fi exec pgbackrest --stanza=poddb archive-get "${1}" "${2}" pgbackrest_bootstrap.sh: |- #!/bin/sh set -e log() { echo "$(date '+%Y-%m-%d %H:%M:%S') - bootstrap - $1" } terminate() { log "Stopping" exit 1 } # If we don't catch these signals, and we're still waiting for PostgreSQL # to be ready, we will not respond at all to a regular shutdown request, # therefore, we explicitly terminate if we receive these signals. trap terminate TERM QUIT while ! pg_isready -q; do log "Waiting for PostgreSQL to become available" sleep 3 done # We'll be lazy; we wait for another while to allow the database to promote # to primary if it's the only one running sleep 10 # If we are the primary, we want to create/validate the backup stanza if [ "$(psql -c "SELECT pg_is_in_recovery()::text" -AtXq)" = "false" ]; then pgbackrest check || { log "Creating pgBackrest stanza" pgbackrest --stanza=poddb stanza-create --log-level-stderr=info || exit 1 log "Creating initial backup" pgbackrest --type=full backup || exit 1 } fi log "Starting pgBackrest api to listen for backup requests" exec python3 /scripts/pgbackrest-rest.py --stanza=poddb --loglevel=debug pgbackrest_restore.sh: | #!/bin/sh # PGBACKREST_BACKUP_ENABLED variable is passed in StatefulSet template [ "${PGBACKREST_BACKUP_ENABLED}" = "true" ] || exit 1 : "${ENV_FILE:=${HOME}/.pod_environment}" if [ -f "${ENV_FILE}" ]; then echo "Sourcing ${ENV_FILE}" . 
"${ENV_FILE}" fi # PGDATA and WALDIR are set in the StatefulSet template and are sourced from the ENV_FILE # PGDATA= # WALDIR= # A missing PGDATA points to Patroni removing a botched PGDATA, or manual # intervention. In this scenario, we need to recreate the DATA and WALDIRs # to keep pgBackRest happy [ -d "${PGDATA}" ] || install -o postgres -g postgres -d -m 0700 "${PGDATA}" [ -d "${WALDIR}" ] || install -o postgres -g postgres -d -m 0700 "${WALDIR}" exec pgbackrest --force --delta --log-level-console=detail restore restore_or_initdb.sh: | #!/bin/sh : "${ENV_FILE:=${HOME}/.pod_environment}" if [ -f "${ENV_FILE}" ]; then echo "Sourcing ${ENV_FILE}" . "${ENV_FILE}" fi log() { echo "$(date '+%Y-%m-%d %H:%M:%S') - restore_or_initdb - $1" } # PGDATA and WALDIR are set in the StatefulSet template and are sourced from the ENV_FILE # PGDATA= # WALDIR= # A missing PGDATA points to Patroni removing a botched PGDATA, or manual # intervention. In this scenario, we need to recreate the DATA and WALDIRs # to keep pgBackRest happy [ -d "${PGDATA}" ] || install -o postgres -g postgres -d -m 0700 "${PGDATA}" [ -d "${WALDIR}" ] || install -o postgres -g postgres -d -m 0700 "${WALDIR}" if [ "${BOOTSTRAP_FROM_BACKUP}" = "1" ]; then log "Attempting restore from backup" # we want to override the environment with the environment # shellcheck disable=SC2046 export $(env -i envdir /etc/pgbackrest/bootstrap env) > /dev/null # PGBACKREST_REPO1_PATH is set in the StatefulSet template and sourced from the ENV_FILE if [ -z "${PGBACKREST_REPO1_PATH}" ]; then log "Unconfigured repository path" cat << "__EOT__" TimescaleDB Single Helm Chart error: You should configure the bootstrapFromBackup in your Helm Chart section by explicitly setting the repo1-path to point to the backups. 
For more information, consult the admin guide: https://github.com/timescale/helm-charts/blob/main/charts/timescaledb-single/docs/admin-guide.md#bootstrap-from-backup __EOT__ exit 1 fi log "Listing available backup information" pgbackrest info EXITCODE=$? if [ ${EXITCODE} -ne 0 ]; then exit $EXITCODE fi pgbackrest --log-level-console=detail restore EXITCODE=$? if [ ${EXITCODE} -eq 0 ]; then log "pgBackRest restore finished succesfully, starting instance in recovery" # We want to ensure we do not overwrite a current backup repository with archives, therefore # we block archiving from succeeding until Patroni can takeover touch "${PGDATA}/recovery.signal" pg_ctl -D "${PGDATA}" start -o '--archive-command=/bin/false' while ! pg_isready -q; do log "Waiting for PostgreSQL to become available" sleep 3 done # It is not trivial to figure out to what point we should restore, pgBackRest # should be fetching WAL segments until the WAL is exhausted. We'll ask pgBackRest # what the Maximum Wal is that it currently has; as soon as we see that, we can consider # the restore to be done while true; do MAX_BACKUP_WAL="$(pgbackrest info --output=json | python3 -c "import json,sys;obj=json.load(sys.stdin); print(obj[0]['archive'][0]['max']);")" log "Testing whether WAL file ${MAX_BACKUP_WAL} has been restored ..." [ -f "${PGDATA}/pg_wal/${MAX_BACKUP_WAL}" ] && break sleep 30; done # At this point we know the final WAL archive has been restored, we should be done. log "The WAL file ${MAX_BACKUP_WAL} has been successully restored, shutting down instance" pg_ctl -D "${PGDATA}" promote pg_ctl -D "${PGDATA}" stop -m fast log "Handing over control to Patroni ..." 
else log "Bootstrap from backup failed" exit 1 fi else # Patroni attaches --scope and --datadir to the arguments, we need to strip them off as # initdb has no business with these parameters initdb_args="" for value in "$@" do case $value in "--scope"*) ;; "--datadir"*) ;; *) initdb_args="${initdb_args} $value" ;; esac done log "Invoking initdb" # shellcheck disable=SC2086 initdb --auth-local=peer --auth-host=md5 --pgdata="${PGDATA}" --waldir="${WALDIR}" ${initdb_args} fi echo "include_if_exists = '${TSTUNE_FILE}'" >> "${PGDATA}/postgresql.conf" post_init.sh: |- #!/bin/sh : "${ENV_FILE:=${HOME}/.pod_environment}" if [ -f "${ENV_FILE}" ]; then echo "Sourcing ${ENV_FILE}" . "${ENV_FILE}" fi log() { echo "$(date '+%Y-%m-%d %H:%M:%S') - post_init - $1" } log "Creating extension TimescaleDB in template1 and postgres databases" psql -d "$URL" <<__SQL__ \connect template1 -- As we're still only initializing, we cannot have synchronous_commit enabled just yet. SET synchronous_commit to 'off'; CREATE EXTENSION timescaledb; \connect postgres SET synchronous_commit to 'off'; CREATE EXTENSION timescaledb; __SQL__ # POSTGRES_TABLESPACES is a comma-separated list of tablespaces to create # variable is passed in StatefulSet template : "${POSTGRES_TABLESPACES:=""}" for tablespace in $POSTGRES_TABLESPACES do log "Creating tablespace ${tablespace}" tablespacedir="${PGDATA}/tablespaces/${tablespace}/data" psql -d "$URL" --set tablespace="${tablespace}" --set directory="${tablespacedir}" --set ON_ERROR_STOP=1 <<__SQL__ SET synchronous_commit to 'off'; CREATE TABLESPACE :"tablespace" LOCATION :'directory'; __SQL__ done # This directory may contain user defined post init steps for file in /etc/timescaledb/post_init.d/* do [ -d "$file" ] && continue [ ! -r "$file" ] && continue case "$file" in *.sh) if [ -x "$file" ]; then log "Call post init script [ $file ]" "$file" "$@" EXITCODE=$? else log "Source post init script [ $file ]" . "$file" EXITCODE=$? 
fi ;; *.sql) log "Apply post init sql [ $file ]" # Disable synchronous_commit since we're initializing PGOPTIONS="-c synchronous_commit=local" psql -d "$URL" -f "$file" EXITCODE=$? ;; *.sql.gz) log "Decompress and apply post init sql [ $file ]" gunzip -c "$file" | PGOPTIONS="-c synchronous_commit=local" psql -d "$URL" EXITCODE=$? ;; *) log "Ignore unknown post init file type [ $file ]" EXITCODE=0 ;; esac EXITCODE=$? if [ "$EXITCODE" != "0" ] then log "ERROR: post init script $file exited with exitcode $EXITCODE" exit $EXITCODE fi done # We exit 0 this script, otherwise the database initialization fails. exit 0 patroni_callback.sh: |- #!/bin/sh set -e : "${ENV_FILE:=${HOME}/.pod_environment}" if [ -f "${ENV_FILE}" ]; then echo "Sourcing ${ENV_FILE}" . "${ENV_FILE}" fi for suffix in "$1" all do CALLBACK="/etc/timescaledb/callbacks/${suffix}" if [ -f "${CALLBACK}" ] then "${CALLBACK}" "$@" fi done lifecycle_preStop.sql: |- -- Doing a checkpoint (at the primary and the current instance) before starting -- the shutdown process will speed up the CHECKPOINT that is part of the shutdown -- process and the recovery after the pod is rescheduled. -- -- We issue the CHECKPOINT at the primary always because: -- -- > Restartpoints can't be performed more frequently than checkpoints in the -- > master because restartpoints can only be performed at checkpoint records. -- https://www.postgresql.org/docs/current/wal-configuration.html -- -- While we're doing these preStop CHECKPOINTs we can still serve read/write -- queries to clients, whereas as soon as we initiate the shutdown, we terminate -- connections. -- -- This therefore reduces downtime for the clients, at the cost of increasing (slightly) -- the time to stop the pod, and reducing write performance on the primary. -- -- To further reduce downtime for clients, we will issue a switchover iff we are currently -- running as the primary. 
This again should be relatively fast, as we've just issued and -- waited for the CHECKPOINT to complete. -- -- This is quite a lot of logic and work in a preStop command; however, if the preStop command -- fails for whatever reason, the normal Pod shutdown will commence, so it is only able to -- improve stuff without being able to break stuff. -- (The $(hostname) inside the switchover call safeguards that we never accidentally -- switchover the wrong primary). \pset pager off \set ON_ERROR_STOP true \set hostname `hostname` \set dsn_fmt 'user=postgres host=%s application_name=lifecycle:preStop@%s connect_timeout=5 options=''-c log_min_duration_statement=0''' SELECT pg_is_in_recovery() AS in_recovery, format(:'dsn_fmt', patroni_scope, :'hostname') AS primary_dsn, format(:'dsn_fmt', '/var/run/postgresql', :'hostname') AS local_dsn FROM current_setting('cluster_name') AS cs(patroni_scope) \gset \timing on \set ECHO queries -- There should be a CHECKPOINT at the primary \if :in_recovery \connect :"primary_dsn" CHECKPOINT; \endif -- There should also be a CHECKPOINT locally, -- for the primary, this may mean we do a double checkpoint, -- but the second one would be cheap anyway, so we leave that as is \connect :"local_dsn" SELECT 'Issuing checkpoint'; CHECKPOINT; \if :in_recovery SELECT 'We are a replica: Successfully invoked checkpoints at the primary and locally.'; \else SELECT 'We are a primary: Successfully invoked checkpoints, now issuing a switchover.'; \! curl -s http://localhost:8008/switchover -XPOST -d '{"leader": "$(hostname)"}' \endif ... --- # Source: timescaledb-single/templates/role-timescaledb.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. 
apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: test namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: rbac rules: - apiGroups: [""] resources: ["configmaps"] verbs: - create - get - list - patch - update - watch # delete is required only for 'patronictl remove' - delete - apiGroups: [""] resources: - endpoints - endpoints/restricted verbs: - create - get - patch - update # the following three privileges are necessary only when using endpoints - list - watch # delete is required only for for 'patronictl remove' - delete - apiGroups: [""] resources: ["pods"] verbs: - get - list - patch - update - watch --- # Source: timescaledb-single/templates/rolebinding-timescaledb.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: test namespace: ingress-nginx labels: app: test chart: timescaledb-single-0.27.4 release: test heritage: Helm cluster-name: test app.kubernetes.io/name: "test" app.kubernetes.io/version: 0.27.4 app.kubernetes.io/component: rbac subjects: - kind: ServiceAccount name: test roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: test --- # Source: timescaledb-single/templates/svc-timescaledb-config.yaml # This file and its contents are licensed under the Apache License 2.0. # Please see the included NOTICE for copyright information and LICENSE for a copy of the license. 
apiVersion: v1
kind: Service
metadata:
  name: test-config
  namespace: ingress-nginx
  labels:
    component: patroni
    app: test
    chart: timescaledb-single-0.27.4
    release: test
    heritage: Helm
    cluster-name: test
    app.kubernetes.io/name: "test"
    app.kubernetes.io/version: 0.27.4
    app.kubernetes.io/component: patroni
spec:
  selector:
    app: test
    cluster-name: test
  type: ClusterIP
  clusterIP: None
  ports:
  - name: patroni
    port: 8008
    protocol: TCP
---
# Source: timescaledb-single/templates/svc-timescaledb-replica.yaml
# This file and its contents are licensed under the Apache License 2.0.
# Please see the included NOTICE for copyright information and LICENSE for a copy of the license.
apiVersion: v1
kind: Service
metadata:
  name: test-replica
  namespace: ingress-nginx
  labels:
    component: postgres
    role: replica
    app: test
    chart: timescaledb-single-0.27.4
    release: test
    heritage: Helm
    cluster-name: test
    app.kubernetes.io/name: "test"
    app.kubernetes.io/version: 0.27.4
    app.kubernetes.io/component: postgres
spec:
  selector:
    app: test
    cluster-name: test
    role: replica
  type: ClusterIP
  ports:
  - name: postgresql
    # This always defaults to 5432
    port: 5432
    targetPort: postgresql
    protocol: TCP
---
# Source: timescaledb-single/templates/svc-timescaledb.yaml
# This file and its contents are licensed under the Apache License 2.0.
# Please see the included NOTICE for copyright information and LICENSE for a copy of the license.
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: ingress-nginx
  labels:
    role: master
    app: test
    chart: timescaledb-single-0.27.4
    release: test
    heritage: Helm
    cluster-name: test
    app.kubernetes.io/name: "test"
    app.kubernetes.io/version: 0.27.4
    app.kubernetes.io/component: timescaledb
spec:
  selector:
    app: test
    cluster-name: test
    role: master
  type: ClusterIP
  ports:
  - name: postgresql
    # This always defaults to 5432
    port: 5432
    targetPort: postgresql
    protocol: TCP
---
# Source: timescaledb-single/templates/statefulset-timescaledb.yaml
# This file and its contents are licensed under the Apache License 2.0.
# Please see the included NOTICE for copyright information and LICENSE for a copy of the license.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test
  namespace: ingress-nginx
  labels:
    app: test
    chart: timescaledb-single-0.27.4
    release: test
    heritage: Helm
    cluster-name: test
    app.kubernetes.io/name: "test"
    app.kubernetes.io/version: 0.27.4
    app.kubernetes.io/component: timescaledb
spec:
  serviceName: test
  replicas: 3
  podManagementPolicy: OrderedReady
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: test
      release: test
  template:
    metadata:
      name: test
      labels:
        app: test
        chart: timescaledb-single-0.27.4
        release: test
        heritage: Helm
        cluster-name: test
        app.kubernetes.io/name: "test"
        app.kubernetes.io/version: 0.27.4
        app.kubernetes.io/component: timescaledb
    spec:
      serviceAccountName: test
      securityContext:
        # The postgres user inside the TimescaleDB image has uid=1000.
        # This configuration ensures the permissions of the mounts are suitable
        fsGroup: 1000
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
      initContainers:
      - name: tstune
        securityContext:
          allowPrivilegeEscalation: false
        image: "timescale/timescaledb-ha:pg14.5-ts2.8.1-p1"
        env:
        - name: TSTUNE_FILE
          value: /var/run/postgresql/timescaledb.conf
        - name: WAL_VOLUME_SIZE
          value: 1Gi
        - name: DATA_VOLUME_SIZE
          value: 2Gi
        - name: RESOURCES_CPU_REQUESTS
          valueFrom:
            resourceFieldRef:
              containerName: timescaledb
              resource: requests.cpu
              divisor: "1"
        - name: RESOURCES_MEMORY_REQUESTS
          valueFrom:
            resourceFieldRef:
              containerName: timescaledb
              resource: requests.memory
              divisor: 1Mi
        - name: RESOURCES_CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: timescaledb
              resource: limits.cpu
              divisor: "1"
        - name: RESOURCES_MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: timescaledb
              resource: limits.memory
              divisor: 1Mi
        # Command below will run the timescaledb-tune utility and configure min/max wal size based on PVCs size
        command:
        - sh
        - "-c"
        - '/etc/timescaledb/scripts/tstune.sh '
        volumeMounts:
        - name: socket-directory
          mountPath: /var/run/postgresql
        - name: timescaledb-scripts
          mountPath: /etc/timescaledb/scripts
          readOnly: true
        resources:
          limits:
            cpu: 1500m
            memory: 3Gi
      # Issuing the final checkpoints on a busy database may take considerable time.
      # Unfinished checkpoints will require more time during startup, so the tradeoff
      # here is time spent in shutdown/time spent in startup.
      # We choose shutdown here, especially as during the largest part of the shutdown
      # we can still serve clients.
      terminationGracePeriodSeconds: 600
      containers:
      - name: timescaledb
        securityContext:
          allowPrivilegeEscalation: false
        image: "timescale/timescaledb-ha:pg14.5-ts2.8.1-p1"
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command:
              - psql
              - -X
              - --file
              - "/etc/timescaledb/scripts/lifecycle_preStop.sql"
        # When reusing an already existing volume it sometimes happens that the permissions
        # of the PGDATA and/or wal directory are incorrect. To guard against this, we always correctly
        # set the permissons of these directories before we hand over to Patroni.
        # We also create all the tablespaces that are defined, to ensure a smooth restore/recovery on a
        # pristine set of Volumes.
        # As PostgreSQL requires to have full control over the permissions of the tablespace directories,
        # we create a subdirectory "data" in every tablespace mountpoint. The full path of every tablespace
        # therefore always ends on "/data".
        # By creating a .pgpass file in the $HOME directory, we expose the superuser password
        # to processes that may not have it in their environment (like the preStop lifecycle hook).
        # To ensure Patroni will not mingle with this file, we give Patroni its own pgpass file.
        # As these files are in the $HOME directory, they are only available to *this* container,
        # and they are ephemeral.
        command:
        - /bin/bash
        - "-c"
        - |
          install -o postgres -g postgres -d -m 0700 "/var/lib/postgresql/data" "/var/lib/postgresql/wal/pg_wal" || exit 1
          TABLESPACES=""
          for tablespace in ; do
            install -o postgres -g postgres -d -m 0700 "/var/lib/postgresql/tablespaces/${tablespace}/data"
          done

          # Environment variables can be read by regular users of PostgreSQL. Especially in a Kubernetes
          # context it is likely that some secrets are part of those variables.
          # To ensure we expose as little as possible to the underlying PostgreSQL instance, we have a list
          # of allowed environment variable patterns to retain.
          #
          # We need the KUBERNETES_ environment variables for the native Kubernetes support of Patroni to work.
          #
          # NB: Patroni will remove all PATRONI_.* environment variables before starting PostgreSQL

          # We store the current environment, as initscripts, callbacks, archive_commands etc. may require
          # to have the environment available to them
          set -o posix
          export -p > "${HOME}/.pod_environment"
          export -p | grep PGBACKREST > "${HOME}/.pgbackrest_environment"

          for UNKNOWNVAR in $(env | awk -F '=' '!/^(PATRONI_.*|HOME|PGDATA|PGHOST|LC_.*|LANG|PATH|KUBERNETES_SERVICE_.*|AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE)=/ {print $1}')
          do
            unset "${UNKNOWNVAR}"
          done

          touch /var/run/postgresql/timescaledb.conf
          touch /var/run/postgresql/wal_status

          echo "*:*:*:postgres:${PATRONI_SUPERUSER_PASSWORD}" >> ${HOME}/.pgpass
          chmod 0600 ${HOME}/.pgpass

          export PATRONI_POSTGRESQL_PGPASS="${HOME}/.pgpass.patroni"

          exec patroni /etc/timescaledb/patroni.yaml
        env:
        # We use mixed case environment variables for Patroni User management,
        # as the variable themselves are documented to be PATRONI_<username>_OPTIONS.
        # Where possible, we want to have lowercase usernames in PostgreSQL as more complex postgres usernames
        # requiring quoting to be done in certain contexts, which many tools do not do correctly, or even at all.
        # https://patroni.readthedocs.io/en/latest/ENVIRONMENT.html#bootstrap-configuration
        - name: PATRONI_admin_OPTIONS
          value: createrole,createdb
        - name: PATRONI_REPLICATION_USERNAME
          value: standby
        # To specify the PostgreSQL and Rest API connect addresses we need
        # the PATRONI_KUBERNETES_POD_IP to be available as a bash variable, so we can compose an
        # IP:PORT address later on
        - name: PATRONI_KUBERNETES_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: PATRONI_POSTGRESQL_CONNECT_ADDRESS
          value: "$(PATRONI_KUBERNETES_POD_IP):5432"
        - name: PATRONI_RESTAPI_CONNECT_ADDRESS
          value: "$(PATRONI_KUBERNETES_POD_IP):8008"
        - name: PATRONI_KUBERNETES_PORTS
          value: '[{"name": "postgresql", "port": 5432}]'
        - name: PATRONI_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: PATRONI_POSTGRESQL_DATA_DIR
          value: "/var/lib/postgresql/data"
        - name: PATRONI_KUBERNETES_NAMESPACE
          value: ingress-nginx
        - name: PATRONI_KUBERNETES_LABELS
          value: "{app: test, cluster-name: test, release: test}"
        - name: PATRONI_SCOPE
          value: test
        - name: PGBACKREST_CONFIG
          value: /etc/pgbackrest/pgbackrest.conf
        # PGDATA and PGHOST are not required to let Patroni/PostgreSQL run correctly,
        # but for interactive sessions, callbacks and PostgreSQL tools they should be correct.
        - name: PGDATA
          value: "$(PATRONI_POSTGRESQL_DATA_DIR)"
        - name: PGHOST
          value: "/var/run/postgresql"
        - name: WALDIR
          value: "/var/lib/postgresql/wal/pg_wal"
        - name: BOOTSTRAP_FROM_BACKUP
          value: "0"
        - name: PGBACKREST_BACKUP_ENABLED
          value: "false"
        - name: TSTUNE_FILE
          value: /var/run/postgresql/timescaledb.conf
        # pgBackRest is also called using the archive_command if the backup is enabled.
        # this script will also need access to the environment variables specified for
        # the backup. This can be removed once we do not directly invoke pgBackRest
        # from inside the TimescaleDB container anymore
        envFrom:
        - secretRef:
            name: "test-credentials"
            optional: false
        - secretRef:
            name: "test-pgbackrest"
            optional: true
        ports:
        - containerPort: 8008
          name: patroni
        - containerPort: 5432
          name: postgresql
        readinessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - /var/run/postgresql
          initialDelaySeconds: 5
          periodSeconds: 30
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 6
        volumeMounts:
        - name: storage-volume
          mountPath: "/var/lib/postgresql"
          subPath: ""
        - name: wal-volume
          mountPath: "/var/lib/postgresql/wal"
          subPath: ""
        - mountPath: /etc/timescaledb/patroni.yaml
          subPath: patroni.yaml
          name: patroni-config
          readOnly: true
        - mountPath: /etc/timescaledb/scripts
          name: timescaledb-scripts
          readOnly: true
        - mountPath: "/etc/timescaledb/post_init.d"
          name: post-init
          readOnly: true
        - mountPath: /etc/certificate
          name: certificate
          readOnly: true
        - name: socket-directory
          mountPath: /var/run/postgresql
        - mountPath: /etc/pgbackrest
          name: pgbackrest
          readOnly: true
        - mountPath: /etc/pgbackrest/bootstrap
          name: pgbackrest-bootstrap
          readOnly: true
        resources:
          limits:
            cpu: 1500m
            memory: 3Gi
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              topologyKey: "kubernetes.io/hostname"
              labelSelector:
                matchLabels:
                  app: test
                  release: "test"
                  cluster-name: test
          - weight: 50
            podAffinityTerm:
              topologyKey: failure-domain.beta.kubernetes.io/zone
              labelSelector:
                matchLabels:
                  app: test
                  release: "test"
                  cluster-name: test
      volumes:
      - name: socket-directory
        emptyDir: {}
      - name: patroni-config
        configMap:
          name: test-patroni
      - name: timescaledb-scripts
        configMap:
          name: test-scripts
          defaultMode: 488 # 0750 permissions
      - name: post-init
        projected:
          defaultMode: 0750
          sources:
          - configMap:
              name: custom-init-scripts
              optional: true
          - secret:
              name: custom-secret-scripts
              optional: true
      - name: pgbouncer
        configMap:
          name: test-pgbouncer
          defaultMode: 416 # 0640 permissions
          optional: true
      - name: pgbackrest
        configMap:
          name: test-pgbackrest
          defaultMode: 416 # 0640 permissions
          optional: true
      - name: certificate
        secret:
          secretName: "test-certificate"
          defaultMode: 416 # 0640 permissions
      - name: pgbackrest-bootstrap
        secret:
          secretName: pgbackrest-bootstrap
          optional: True
  volumeClaimTemplates:
  - metadata:
      name: storage-volume
      annotations:
      labels:
        app: test
        release: test
        heritage: Helm
        cluster-name: test
        purpose: data-directory
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: "2Gi"
  - metadata:
      name: wal-volume
      annotations:
      labels:
        app: test
        release: test
        heritage: Helm
        cluster-name: test
        purpose: wal-directory
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: "1Gi"
---
# Source: timescaledb-single/templates/configmap-pgbackrest.yaml
# This file and its contents are licensed under the Apache License 2.0.
# Please see the included NOTICE for copyright information and LICENSE for a copy of the license.
---
# Source: timescaledb-single/templates/configmap-pgbouncer.yaml
# This file and its contents are licensed under the Apache License 2.0.
# Please see the included NOTICE for copyright information and LICENSE for a copy of the license.
---
# Source: timescaledb-single/templates/pgbackrest.yaml
# This file and its contents are licensed under the Apache License 2.0.
# Please see the included NOTICE for copyright information and LICENSE for a copy of the license.
---
# Source: timescaledb-single/templates/secret-certificate.yaml
# This file and its contents are licensed under the Apache License 2.0.
# Please see the included NOTICE for copyright information and LICENSE for a copy of the license.
apiVersion: v1
kind: Secret
metadata:
  name: "test-certificate"
  namespace: ingress-nginx
  labels:
    app: test
    chart: timescaledb-single-0.27.4
    release: test
    heritage: Helm
    cluster-name: test
    app.kubernetes.io/name: "test"
    app.kubernetes.io/version: 0.27.4
    app.kubernetes.io/component: certificates
  annotations:
    "helm.sh/hook": pre-install,post-delete
    "helm.sh/hook-weight": "0"
type: kubernetes.io/tls
stringData:
  tls.crt: "-----BEGIN CERTIFICATE-----\nMIIDCTCCAfGgAwIBAgIQEIpU+JPcONyMz3XUNlobwzANBgkqhkiG9w0BAQsFADAP\nMQ0wCwYDVQQDEwR0ZXN0MB4XDTIzMDEwNjA5MjY0OVoXDTI4MDEwNjA5MjY0OVow\nDzENMAsGA1UEAxMEdGVzdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB\nALsgzDGyfBGOF+HRpQZ9UDz1Bhh4PNgX39yt9140gJBSngahP91ryIDpoTK/Eqno\nYmlYpIDRSdlqBOXfBDMkSx9dmr9iGCab9FQCt+vKGh7VCVLnYMZ5d8UJKBF6G9Im\nTqlcjnhxty3rI0P9g9VCV8RPYwZD6Y0ZpkEn1PrCYr9h15NNn5EaZ0Fvl8kNB+AC\nkvmrnslGLMopO3p39uSK/doe79PybCjOshf7OySkHo2nzhZujz5A1vkfgGAmR0Lg\nghoy7y6PQMrIKqx1V+FHRer0hr9J1yfmdWH5Pkqm9buXHDfXo8vUmjOqF6lScGP5\n5wFeZfMhR0VnvfNLIkkkWZkCAwEAAaNhMF8wDgYDVR0PAQH/BAQDAgKkMB0GA1Ud\nJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud\nDgQWBBQmgu7WEVdalxPRhekI6iT/Vt58BDANBgkqhkiG9w0BAQsFAAOCAQEAhoEv\nWrK0kWxGrYY8fdHPmnPwl96g1p/+xEdlMQazmaIYsr4eT31zrqvUovC4c/sfyRvP\nTNmAttiVbFFJmhCCwD+YgT4EmMJ8tUW4RcgyCDN5gl1HLB9x4a46OMddQ3hEKDTm\nTFmRZYzs4d5I/q1849EUunanUXMDorQbiGzde179a7kAOhV+U3FoCA2oF5Zykrbv\nx24j+azPVJolCmBBJLOOl+4k5pPaqY4CGJU+jBxt8Dpuy9WxcC/z2RqqDRF4vnq1\neNHeJWeaRxz0vM1TTRkXAXBtzYLhhdCsq6q8DUjs1QtVoJxq0befpbKGD2pqNJoM\n7C+jrIU4EZD4+JeoVQ==\n-----END CERTIFICATE-----\n"
  tls.key: "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEAuyDMMbJ8EY4X4dGlBn1QPPUGGHg82Bff3K33XjSAkFKeBqE/\n3WvIgOmhMr8SqehiaVikgNFJ2WoE5d8EMyRLH12av2IYJpv0VAK368oaHtUJUudg\nxnl3xQkoEXob0iZOqVyOeHG3LesjQ/2D1UJXxE9jBkPpjRmmQSfU+sJiv2HXk02f\nkRpnQW+XyQ0H4AKS+aueyUYsyik7enf25Ir92h7v0/JsKM6yF/s7JKQejafOFm6P\nPkDW+R+AYCZHQuCCGjLvLo9AysgqrHVX4UdF6vSGv0nXJ+Z1Yfk+Sqb1u5ccN9ej\ny9SaM6oXqVJwY/nnAV5l8yFHRWe980siSSRZmQIDAQABAoIBACvTeZ9mEwK1ichc\npk7HyKQOKthOSMm/hbGUmOvaVgX3I4Wf/GoqVTJEBXnyIDfk8i+EEDsPSUF/QBhq\nS/yCUonNDXInUkqwmd+XJ2Y01jtEX8On5xV022UtSNIXDC8Cw8eMot14nJNHj+Hb\nnSW0PQQAJ8wO2cMvL63w20PDhQcXR0MXf+ySWxP7Wf5mDMhzSfgk551s3fVpLlMb\nCZahrGI4el9Pgg1+Y0hH7+uv3a94deLj2n+rPdhV0xCHC44no/FEDdXuExVTOn5y\nZL9r0anUfoTux5iWpG2+wuKmaN4VDrYkZK1Og0qHF+q2ej6cLTDniccDJiTkr94z\nQJ40UVECgYEAx2RYBhWLGBohRvaXm3IB7q88cOyJPHxFMrh0eiYdN8AvCJtxacBH\nMrStMYpIA/Bt3K6be0onRh5JgNqq9JZlxd0l+0J1aczCl8X9SW29q4Avs3a2xZso\nnaN3MxpDeBWTtzRBLcbW15N1RYMS8n9NWl/XUHyZfrEllE22gbxinqUCgYEA8EEg\nWLfqPy+89kcxCci3XZl+NembG6n82zy2TcQHSJRxnpsGyFESYlbb4r+BsPuf9clD\n6z+93/w3ILB17zL9OiWZvDuPMLduxNMj7SFrVIGVIqZeuOWVpaKZ+9I3XgSD+TM8\nnclLmyWYb1CwQa6YTgjn/euZSx819TpRg/R/sOUCgYEAmZs8FLPUDCVVLY4bDa2u\nv2pQbc5Li0VRKdngIZnrOF/d3AukO4vdTbrTEi8te5tlh3UcYsalqub6SUIsIXEb\nxmqwL/jq6y7LWpE0p7TbQZvnI6J4+5Kkn4ym779z6rb0rVacP9/G8xyuY3auyhI4\nTT84aNEUjv15rd6QkzHF5+ECgYEAkYXhIdvEdyFjQ4k7msGIz5j5aY5l9QuxrNnJ\nUrE5+Cxx5a/hG9R/XjFeXqnA1IKVETsneIbTa6hJe/Nme8xWtbGwvOMWiFuTLIT3\nbdqgOD+FJce/+B6X1gv3WSCriLcTeQ4f2TLkKMVM35/wItiuSBX870CVSXtOI4t/\ndH3UGkECgYAeHbIkz/vWVNeK8E8ed+UEP5g03cpdiSq3REo/2snzCKMqWbGjLleh\n4YGTE6APGPDG5rBnw4fHOpgh+5TnKpNvRKUgRvFRIhK3tJMc/FWbbSMOT8nlZYO6\nR24wNUisdt9JygTA3gGTg8XNkQXwOv27IF1c8W7/34CRAl0lfZHdjg==\n-----END RSA PRIVATE KEY-----\n"
...
---
# Source: timescaledb-single/templates/secret-patroni.yaml
# This file and its contents are licensed under the Apache License 2.0.
# Please see the included NOTICE for copyright information and LICENSE for a copy of the license.
apiVersion: v1
kind: Secret
metadata:
  name: "test-credentials"
  namespace: ingress-nginx
  labels:
    app: test
    chart: timescaledb-single-0.27.4
    release: test
    heritage: Helm
    cluster-name: test
    app.kubernetes.io/name: "test"
    app.kubernetes.io/version: 0.27.4
    app.kubernetes.io/component: patroni
  annotations:
    "helm.sh/hook": pre-install,post-delete
    "helm.sh/hook-weight": "0"
    "helm.sh/resource-policy": keep
type: Opaque
stringData:
  PATRONI_SUPERUSER_PASSWORD: "nnj6z2vDU4gDxAn7"
  PATRONI_REPLICATION_PASSWORD: "VrT5cAjfmRTh5SLs"
  PATRONI_admin_PASSWORD: "by3lF1f0zFAvYHuE"
...
---
# Source: timescaledb-single/templates/secret-pgbackrest.yaml
# This file and its contents are licensed under the Apache License 2.0.
# Please see the included NOTICE for copyright information and LICENSE for a copy of the license.
apiVersion: v1
kind: Secret
metadata:
  name: "test-pgbackrest"
  namespace: ingress-nginx
  labels:
    app: test
    chart: timescaledb-single-0.27.4
    release: test
    heritage: Helm
    cluster-name: test
    app.kubernetes.io/name: "test"
    app.kubernetes.io/version: 0.27.4
    app.kubernetes.io/component: pgbackrest
  annotations:
    "helm.sh/hook": pre-install,post-delete
    "helm.sh/hook-weight": "0"
    "helm.sh/resource-policy": keep
type: Opaque
stringData:
  PGBACKREST_REPO1_S3_BUCKET: ""
  PGBACKREST_REPO1_S3_ENDPOINT: s3.amazonaws.com
  PGBACKREST_REPO1_S3_KEY: ""
  PGBACKREST_REPO1_S3_KEY_SECRET: ""
  PGBACKREST_REPO1_S3_REGION: ""
...
---
# Source: timescaledb-single/templates/job-update-patroni.yaml
# This file and its contents are licensed under the Apache License 2.0.
# Please see the included NOTICE for copyright information and LICENSE for a copy of the license.
apiVersion: batch/v1
kind: Job
metadata:
  name: "test-patroni-6c"
  namespace: ingress-nginx
  labels:
    app: test
    chart: timescaledb-single-0.27.4
    release: test
    heritage: Helm
    cluster-name: test
    app.kubernetes.io/name: "test"
    app.kubernetes.io/version: 0.27.4
    app.kubernetes.io/component: patroni
  annotations:
    "helm.sh/hook": post-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  activeDeadlineSeconds: 120
  template:
    metadata:
      labels:
        app: test
        chart: timescaledb-single-0.27.4
        release: test
        heritage: Helm
        cluster-name: test
        app.kubernetes.io/name: "test"
        app.kubernetes.io/version: 0.27.4
    spec:
      restartPolicy: OnFailure
      containers:
      - name: test-patch-patroni-config
        image: curlimages/curl
        command: ["/bin/sh"]
        # Patching the Patroni configuration is good, however it should not block an upgrade from going through
        # Therefore we ensure we always exit with an exitcode 0, so that Helm is satisfied with this upgrade job
        args:
        - '-c'
        - |
          /usr/bin/curl --connect-timeout 30 --include --request PATCH --data \
            "{\"loop_wait\":10,\"maximum_lag_on_failover\":33554432,\"postgresql\":{\"parameters\":{\"archive_command\":\"/etc/timescaledb/scripts/pgbackrest_archive.sh %p\",\"archive_mode\":\"on\",\"archive_timeout\":\"1800s\",\"autovacuum_analyze_scale_factor\":0.02,\"autovacuum_max_workers\":10,\"autovacuum_naptime\":\"5s\",\"autovacuum_vacuum_cost_limit\":500,\"autovacuum_vacuum_scale_factor\":0.05,\"hot_standby\":\"on\",\"log_autovacuum_min_duration\":\"1min\",\"log_checkpoints\":\"on\",\"log_connections\":\"on\",\"log_disconnections\":\"on\",\"log_line_prefix\":\"%t [%p]: [%c-%l] %u@%d,app=%a [%e] \",\"log_lock_waits\":\"on\",\"log_min_duration_statement\":\"1s\",\"log_statement\":\"ddl\",\"max_connections\":100,\"max_prepared_transactions\":150,\"shared_preload_libraries\":\"timescaledb,pg_stat_statements\",\"ssl\":\"on\",\"ssl_cert_file\":\"/etc/certificate/tls.crt\",\"ssl_key_file\":\"/etc/certificate/tls.key\",\"tcp_keepalives_idle\":900,\"tcp_keepalives_interval\":100,\"temp_file_limit\":\"1GB\",\"timescaledb.passfile\":\"../.pgpass\",\"unix_socket_directories\":\"/var/run/postgresql\",\"unix_socket_permissions\":\"0750\",\"wal_level\":\"hot_standby\",\"wal_log_hints\":\"on\"},\"use_pg_rewind\":true,\"use_slots\":true},\"retry_timeout\":10,\"ttl\":30}" \
            "http://test-config:8008/config"
          exit 0
```

Relevant part of STS copied from above:

```yaml
        - mountPath: /etc/pgbackrest/bootstrap
          name: pgbackrest-bootstrap
          readOnly: true
        resources:
          limits:
            cpu: 1500m
            memory: 3Gi
      affinity:
```

Irrelevant to the issue, but may be generally relevant to you:

> If resources.requests IS NOT set and resources.limits IS set, the pod has its requests set to the value of limits

Keep in mind that not setting `resources.requests` while setting `resources.limits` is treated by vanilla Kubernetes as if `requests == limits`, as per the documentation: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits
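To illustrate that defaulting, a minimal sketch with a hypothetical pod (name and image are made up, unrelated to this chart):

```yaml
# Submitted manifest: only limits are specified.
apiVersion: v1
kind: Pod
metadata:
  name: limits-only          # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      limits:
        cpu: 1500m
        memory: 3Gi
      # requests intentionally omitted

# After admission, `kubectl get pod limits-only -o yaml` shows requests
# copied from limits by the API server itself:
#
#   resources:
#     limits:   {cpu: 1500m, memory: 3Gi}
#     requests: {cpu: 1500m, memory: 3Gi}
```

So case (2) from the report is expected vanilla behavior, even without any admission controller involved.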


> we would expect that the value of requests would be the default from the cluster.

Since Kubernetes by default doesn't set any resource requests or limits, this statement gives me a hint that you may be using some sort of admission controller to set resource requests/limits (most likely part of GKE Autopilot). Judging by the correct output of `helm template`, it looks to me that this controller (if used) might be misbehaving.
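A quick way to narrow this down is to compare what the chart renders against what the cluster actually admitted (release, namespace, and pod names below are placeholders for your own):

```console
# 1. What the chart renders, before anything touches the cluster:
$ helm template test charts/timescaledb-single -f values-override.yaml \
    | grep -A 6 'resources:'

# 2. What was actually admitted to the cluster:
$ kubectl get pod test-timescaledb-0 -n my-namespace \
    -o jsonpath='{.spec.containers[?(@.name=="timescaledb")].resources}'

# If (1) contains the requests/limits exactly as set in values.yaml while
# (2) shows requests == limits, the mutation happened at admission time
# (e.g. a mutating webhook such as the one GKE Autopilot uses), not in the chart.
```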