timescale / helm-charts

Configuration and Documentation to run TimescaleDB in your Kubernetes cluster
Apache License 2.0

Replication issue on standby nodes #589

Open · ehteshaamkazi opened 1 year ago

ehteshaamkazi commented 1 year ago

We deploy a 3-node TimescaleDB cluster on AKS (kubernetesVersion 1.25.5) using the timescale/timescaledb-ha:pg14.4-ts2.7.2-latest image with Patroni 2.1.4, helm chart v0.11.0, and an istio-proxy sidecar (docker.io/istio/proxyv2:1.15.0). After deployment, the standby nodes do not replicate WAL; replication-user authentication fails with the errors below. /home/postgres/.pgpass.patroni contains different passwords on different nodes, although the replication user is the same across environments. Running patronictl reinit on the affected node does not help.
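
A rough way to confirm the mismatch is to compare the standby entry of .pgpass.patroni on each member against the credentials secret. The sketch below uses placeholder pod names (the real names are redacted above) and assumes the chart's credentials secret stores the replication password under PATRONI_REPLICATION_PASSWORD:

# Standby entry Patroni wrote on each member (pod names are placeholders)
for i in 0 1 2; do
  kubectl exec timescale-timescaledb-$i -c timescaledb -- \
    grep ':standby:' /home/postgres/.pgpass.patroni
done

# Password the chart's credentials secret provides to Patroni
kubectl get secret timescale-credentials \
  -o jsonpath='{.data.PATRONI_REPLICATION_PASSWORD}' | base64 -d; echo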

POD LOG

2023-02-24 10:29:31 UTC [90806]: [63f8918b.162b6-1] [unknown]@[unknown],app=[unknown] [00000] LOG:  connection received: host=127.0.0.1 port=60606
2023-02-24 10:29:31 UTC [90806]: [63f8918b.162b6-2] standby@[unknown],app=[unknown] [28P01] FATAL:  password authentication failed for user "standby"
2023-02-24 10:29:31 UTC [90806]: [63f8918b.162b6-3] standby@[unknown],app=[unknown] [28P01] DETAIL:  Password does not match for user "standby".
    Connection matched pg_hba.conf line 8: "hostssl   replication     standby            all                md5"
2023-02-24 10:29:31 UTC [90807]: [63f8918b.162b7-1] [unknown]@[unknown],app=[unknown] [00000] LOG:  connection received: host=127.0.0.1 port=60614
2023-02-24 10:29:31 UTC [90807]: [63f8918b.162b7-2] standby@[unknown],app=[unknown] [28000] FATAL:  pg_hba.conf rejects replication connection for host "127.0.0.1", user "standby", no encryption
2023-02-24 10:29:31,652 ERROR: Can not fetch local timeline and lsn from replication connection
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/patroni/postgresql/__init__.py", line 850, in get_replica_timeline
    with self.get_replication_connection_cursor(**self.config.local_replication_address) as cur:
  File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/usr/lib/python3/dist-packages/patroni/postgresql/__init__.py", line 845, in get_replication_connection_cursor
    with get_connection_cursor(**conn_kwargs) as cur:
  File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/usr/lib/python3/dist-packages/patroni/postgresql/connection.py", line 44, in get_connection_cursor
    conn = psycopg.connect(**kwargs)
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: Connection refused
    Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (127.0.0.1), port 5432 failed: FATAL:  password authentication failed for user "standby"
connection to server at "localhost" (127.0.0.1), port 5432 failed: FATAL:  pg_hba.conf rejects replication connection for host "127.0.0.1", user "standby", no encryption

NODE INFORMATION


CxxXN1xxxx5M:~ exxxi$ kubectl describe node aks-workers-34xxxxx1-vmss00005u 
Name:               aks-workers-34xxxxx1-vmss00005u
Roles:              agent
Labels:             agentpool=workers
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=Standard_D8ds_v5
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=eastus
                    failure-domain.beta.kubernetes.io/zone=0
                    kubernetes.azure.com/agentpool=workers
                    kubernetes.azure.com/cluster=rg-sxxxxxxg-01-aks-nodes
                    kubernetes.azure.com/kubelet-identity-client-id=672xxx0a-8xx6-4xx3-axx4-a8c5xxxx5294
                    kubernetes.azure.com/mode=system
                    kubernetes.azure.com/node-image-version=AKSUbuntu-2204gen2containerd-2023.02.15
                    kubernetes.azure.com/os-sku=Ubuntu
                    kubernetes.azure.com/role=agent
                    kubernetes.azure.com/storageprofile=managed
                    kubernetes.azure.com/storagetier=Premium_LRS
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=aks-workers-34xxxxx1-vmss00005u
                    kubernetes.io/os=linux
                    kubernetes.io/role=agent
                    node-role.kubernetes.io/agent=
                    node.kubernetes.io/instance-type=Standard_D8ds_v5
                    storageprofile=managed
                    storagetier=Premium_LRS
                    topology.disk.csi.azure.com/zone=
                    topology.kubernetes.io/region=eastus
                    topology.kubernetes.io/zone=0
Annotations:        csi.volume.kubernetes.io/nodeid:
                      {"disk.csi.azure.com":"aks-workers-34xxxxx1-vmss00005u","file.csi.azure.com":"aks-workers-34xxxxx1-vmss00005u"}
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 24 Feb 2023 00:30:12 +0530
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  aks-workers-34xxxxx1-vmss00005u
  AcquireTime:     <unset>
  RenewTime:       Fri, 10 Mar 2023 20:27:49 +0530
Conditions:
  Type                          Status  LastHeartbeatTime                 LastTransitionTime                Reason                          Message
  ----                          ------  -----------------                 ------------------                ------                          -------
  ReadonlyFilesystem            False   Fri, 10 Mar 2023 20:27:13 +0530   Fri, 24 Feb 2023 00:30:52 +0530   FilesystemIsNotReadOnly         Filesystem is not read-only
  FrequentDockerRestart         False   Fri, 10 Mar 2023 20:27:13 +0530   Fri, 24 Feb 2023 00:30:52 +0530   NoFrequentDockerRestart         docker is functioning properly
  KubeletProblem                False   Fri, 10 Mar 2023 20:27:13 +0530   Fri, 24 Feb 2023 00:30:52 +0530   KubeletIsUp                     kubelet service is up
  KernelDeadlock                False   Fri, 10 Mar 2023 20:27:13 +0530   Fri, 24 Feb 2023 00:30:52 +0530   KernelHasNoDeadlock             kernel has no deadlock
  FrequentKubeletRestart        False   Fri, 10 Mar 2023 20:27:13 +0530   Fri, 24 Feb 2023 00:30:52 +0530   NoFrequentKubeletRestart        kubelet is functioning properly
  FrequentContainerdRestart     False   Fri, 10 Mar 2023 20:27:13 +0530   Fri, 24 Feb 2023 00:30:52 +0530   NoFrequentContainerdRestart     containerd is functioning properly
  ContainerRuntimeProblem       False   Fri, 10 Mar 2023 20:27:13 +0530   Fri, 24 Feb 2023 00:30:52 +0530   ContainerRuntimeIsUp            container runtime service is up
  FrequentUnregisterNetDevice   False   Fri, 10 Mar 2023 20:27:13 +0530   Fri, 24 Feb 2023 00:30:52 +0530   NoFrequentUnregisterNetDevice   node is functioning properly
  VMEventScheduled              False   Fri, 10 Mar 2023 20:27:13 +0530   Mon, 06 Mar 2023 15:25:56 +0530   NoVMEventScheduled              VM has no scheduled event
  FilesystemCorruptionProblem   False   Fri, 10 Mar 2023 20:27:13 +0530   Fri, 24 Feb 2023 00:30:52 +0530   FilesystemIsOK                  Filesystem is healthy
  NetworkUnavailable            False   Fri, 24 Feb 2023 00:32:05 +0530   Fri, 24 Feb 2023 00:32:05 +0530   RouteCreated                    RouteController created a route
  MemoryPressure                False   Fri, 10 Mar 2023 20:27:43 +0530   Fri, 24 Feb 2023 00:30:12 +0530   KubeletHasSufficientMemory      kubelet has sufficient memory available
  DiskPressure                  False   Fri, 10 Mar 2023 20:27:43 +0530   Fri, 24 Feb 2023 00:30:12 +0530   KubeletHasNoDiskPressure        kubelet has no disk pressure
  PIDPressure                   False   Fri, 10 Mar 2023 20:27:43 +0530   Fri, 24 Feb 2023 00:30:12 +0530   KubeletHasSufficientPID         kubelet has sufficient PID available
  Ready                         True    Fri, 10 Mar 2023 20:27:43 +0530   Fri, 24 Feb 2023 00:30:22 +0530   KubeletReady                    kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.xx3.0.x
  Hostname:    aks-workers-34xxxxx1-vmss00005u
Capacity:
  cpu:                8
  ephemeral-storage:  129886128Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32875412Ki
  pods:               110
Allocatable:
  cpu:                7820m
  ephemeral-storage:  119703055367
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             28374932Ki
  pods:               110
System Info:
  Machine ID:                                     13x6d1xxxxd84c9d99cxxxxxbcxx127e
  System UUID:                                    ebxxxxb5-exxa-4xx9-a1xx-5488xxxx51a5
  Boot ID:                                        27xxxxx4-cxx1-4xx3-85x6-dexxxxx6c35e
  Kernel Version:                                 5.15.0-1033-azure
  OS Image:                                       Ubuntu 22.04.1 LTS
  Operating System:                               linux
  Architecture:                                   amd64
  Container Runtime Version:                      containerd://1.6.17+azure-1
  Kubelet Version:                                v1.25.5
  Kube-Proxy Version:                             v1.25.5
PodCIDR:                                          10.2xx.x.0/24
PodCIDRs:                                         10.2xx.x.0/24
ProviderID:                                       azure:///subscriptions/970xxxx0-4xxa-4xxc-9x1e-e20xxxxx166c/resourceGroups/rg-sxxxxxxg-01-aks-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-workers-34xxxxx1-vmss/virtualMachines/210
Non-terminated Pods:                              (38 in total)
  Namespace                                       Name                                                               CPU Requests  CPU Limits   Memory Requests  Memory Limits  Age
  ---------                                       ----                                                               ------------  ----------   ---------------  -------------  ---
  argo                                            argo-server-5f8c7f9d46-647rk                                       50m (0%)      100m (1%)    50Mi (0%)        100Mi (0%)     29h
  cost-service                                    azure-provider-service-6dfbd4f8dc-x69tj                            0 (0%)        0 (0%)       0 (0%)           0 (0%)         14d
  cost-service                                    kube-cost-service-7d756bf89b-ppt42                                 0 (0%)        0 (0%)       0 (0%)           0 (0%)         14d
  gatekeeper-system                               gatekeeper-controller-6df4b76cbc-gnmcv                             100m (1%)     2 (25%)      256Mi (0%)       2Gi (7%)       29h
 .
 .
 .

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests       Limits
  --------           --------       ------
  cpu                5995m (76%)    70420m (900%)
  memory             12692Mi (45%)  50172Mi (181%)
  ephemeral-storage  0 (0%)         0 (0%)
  hugepages-1Gi      0 (0%)         0 (0%)
  hugepages-2Mi      0 (0%)         0 (0%)
Events:              <none>

POD DESCRIBE


Name:             txxxxxxle-tixxxxxxb-1
Namespace:        oxxxxxabixxxy-sxxxxr
Priority:         0
Service Account:  txxxxxale-txxxxxxxb
Node:             aks-wxxxxrs-3xxxxx41-vmsxxxxxxu/10.xxx.0.x
Start Time:       Fri, 24 Feb 2023 00:32:11 +0530
Labels:           app=txxxxxale-txxxxxxxb
                  cluster-name=timescale
                  controller-revision-hash=txxxxxale-txxxxxxxb-574b74ff68
                  release=timescale
                  role=replica
                  security.istio.io/tlsMode=istio
                  service.istio.io/canonical-name=txxxxxale-txxxxxxxb
                  service.istio.io/canonical-revision=latest
                  statefulset.kubernetes.io/pod-name=txxxxxxle-tixxxxxxb-1
Annotations:      kubectl.kubernetes.io/default-container: timescaledb
                  kubectl.kubernetes.io/default-logs-container: timescaledb
                  prometheus.io/path: /stats/prometheus
                  prometheus.io/port: 15020
                  prometheus.io/scrape: true
                  sidecar.istio.io/status:
                    {"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["workload-socket","credential-socket","workload-certs","istio-env...
                  status:
                    {"conn_url":"postgres://10.xxx.8.xx:5432/postgres","api_url":"http://10.xxx.8.xx:8008/patroni","state":"running","role":"replica","version...
Status:           Running
IP:               10.xxx.8.xx
IPs:
  IP:           10.xxx.8.xx
Controlled By:  StatefulSet/txxxxxale-txxxxxxxb
Init Containers:
  tstune:
    Container ID:  containerd://ca30a373230cab77653b635f2afc39854fc6e8e9f88d8bac6386ca36a1da55db
    Image:         timescale/timescaledb-ha:pg14.4-ts2.7.2-latest
    Image ID:      docker.io/timescale/timescaledb-ha@sha256:20fc8832891933a9ebacbc0de34c03cf82d65511fe197163d0b8d741261837ca
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      set -e
      [ $CPUS -eq 0 ]   && CPUS="${RESOURCES_CPU_LIMIT}"
      [ $MEMORY -eq 0 ] && MEMORY="${RESOURCES_MEMORY_LIMIT}"

      if [ -f "${PGDATA}/postgresql.base.conf" ] && ! grep "${INCLUDE_DIRECTIVE}" postgresql.base.conf -qxF; then
        echo "${INCLUDE_DIRECTIVE}" >> "${PGDATA}/postgresql.base.conf"
      fi

      touch "${TSTUNE_FILE}"
      timescaledb-tune -quiet -pg-version 11 -conf-path "${TSTUNE_FILE}" -cpus "${CPUS}" -memory "${MEMORY}MB" \
         -yes

      # If there is a dedicated WAL Volume, we want to set max_wal_size to 60% of that volume
      # If there isn't a dedicated WAL Volume, we set it to 20% of the data volume
      if [ "${RESOURCES_WAL_VOLUME}" = "0" ]; then
        WALMAX="${RESOURCES_DATA_VOLUME}"
        WALPERCENT=20
      else
        WALMAX="${RESOURCES_WAL_VOLUME}"
        WALPERCENT=60
      fi

      WALMAX=$(numfmt --from=auto ${WALMAX})

      # Wal segments are 16MB in size, in this way we get a "nice" number of the nearest
      # 16MB
      WALMAX=$(( $WALMAX / 100 * $WALPERCENT / 16777216 * 16 ))
      WALMIN=$(( $WALMAX / 2 ))

      echo "max_wal_size=${WALMAX}MB" >> "${TSTUNE_FILE}"
      echo "min_wal_size=${WALMIN}MB" >> "${TSTUNE_FILE}"

    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 24 Feb 2023 00:36:37 +0530
      Finished:     Fri, 24 Feb 2023 00:36:37 +0530
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     900m
      memory:  1536Mi
    Requests:
      cpu:     100m
      memory:  128Mi
    Environment:
      TSTUNE_FILE:             /var/run/postgresql/timescaledb.conf
      RESOURCES_WAL_VOLUME:    200Gi
      RESOURCES_DATA_VOLUME:   200Gi
      INCLUDE_DIRECTIVE:       include_if_exists = '/var/run/postgresql/timescaledb.conf'
      CPUS:                    1 (requests.cpu)
      MEMORY:                  128 (requests.memory)
      RESOURCES_CPU_LIMIT:     1 (limits.cpu)
      RESOURCES_MEMORY_LIMIT:  1536 (limits.memory)
    Mounts:
      /etc/timescaledb/post_init.d from dbscript (rw)
      /var/run/postgresql from socket-directory (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sws9z (ro)
  istio-init:
    Container ID:  containerd://4b1eeee6ba917a85c43fee1236c88b77255182d321e246e7d34f8b2a17179e09
    Image:         docker.io/istio/proxyv2:1.15.0
    Image ID:      docker.io/istio/proxyv2@sha256:0201788b1550dd95cbf7d7075c939dd581169e715699d8e8f85ed2a5f6b35cd2
    Port:          <none>
    Host Port:     <none>
    Args:
      istio-iptables
      -p
      15001
      -z
      15006
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x

      -b
      *
      -d
      15090,15021,15020
      --log_output_level=default:info
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 24 Feb 2023 00:36:50 +0530
      Finished:     Fri, 24 Feb 2023 00:36:50 +0530
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:        100m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sws9z (ro)
Containers:
  timescaledb:
    Container ID:  containerd://2bc817f33e9a4e9b96efea2a814a10d394375b990ce934cb642a56bfd440798a
    Image:         timescale/timescaledb-ha:pg14.4-ts2.7.2-latest
    Image ID:      docker.io/timescale/timescaledb-ha@sha256:20fc8832891933a9ebacbc0de34c03cf82d65511fe197163d0b8d741261837ca
    Ports:         8008/TCP, 5432/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /bin/bash
      -c

      install -o postgres -g postgres -d -m 0700 "/var/lib/postgresql/data" "/var/lib/postgresql/wal/pg_wal" || exit 1
      TABLESPACES=""
      for tablespace in ; do
        install -o postgres -g postgres -d -m 0700 "/var/lib/postgresql/tablespaces/${tablespace}/data"
      done

      # Environment variables can be read by regular users of PostgreSQL. Especially in a Kubernetes
      # context it is likely that some secrets are part of those variables.
      # To ensure we expose as little as possible to the underlying PostgreSQL instance, we have a list
      # of allowed environment variable patterns to retain.
      #
      # We need the KUBERNETES_ environment variables for the native Kubernetes support of Patroni to work.
      #
      # NB: Patroni will remove all PATRONI_.* environment variables before starting PostgreSQL

      # We store the current environment, as initscripts, callbacks, archive_commands etc. may require
      # to have the environment available to them
      set -o posix
      export -p > "${HOME}/.pod_environment"
      export -p | grep PGBACKREST > "${HOME}/.pgbackrest_environment"

      for UNKNOWNVAR in $(env | awk -F '=' '!/^(PATRONI_.*|HOME|PGDATA|PGHOST|LC_.*|LANG|PATH|KUBERNETES_SERVICE_.*)=/ {print $1}')
      do
          unset "${UNKNOWNVAR}"
      done

      touch /var/run/postgresql/timescaledb.conf
      touch /var/run/postgresql/wal_status

      echo "*:*:*:postgres:${PATRONI_SUPERUSER_PASSWORD}" >> ${HOME}/.pgpass
      chmod 0600 ${HOME}/.pgpass

      export PATRONI_POSTGRESQL_PGPASS="${HOME}/.pgpass.patroni"
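      # Patroni itself (re)creates the file referenced by PATRONI_POSTGRESQL_PGPASS
      # (e.g. before pg_basebackup/pg_rewind) using the replication credentials from
      # its configuration, so its standby entry should normally match
      # PATRONI_REPLICATION_PASSWORD from the credentials secret on every member.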

      exec patroni /etc/timescaledb/patroni.yaml

    State:          Running
      Started:      Fri, 24 Feb 2023 00:37:06 +0530
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     900m
      memory:  1536Mi
    Requests:
      cpu:      100m
      memory:   128Mi
    Readiness:  exec [pg_isready -h /var/run/postgresql] delay=5s timeout=5s period=30s #success=1 #failure=6
    Environment Variables from:
      timescale-credentials  Secret  Optional: false
      timescale-pgbackrest   Secret  Optional: true
    Environment:
      PATRONI_admin_OPTIONS:               createrole,createdb
      PATRONI_REPLICATION_USERNAME:        standby
      PATRONI_KUBERNETES_POD_IP:            (v1:status.podIP)
      PATRONI_POSTGRESQL_CONNECT_ADDRESS:  $(PATRONI_KUBERNETES_POD_IP):5432
      PATRONI_RESTAPI_CONNECT_ADDRESS:     $(PATRONI_KUBERNETES_POD_IP):8008
      PATRONI_KUBERNETES_PORTS:            [{"name": "postgresql", "port": 5432}]
      PATRONI_NAME:                        txxxxxxle-tixxxxxxb-1 (v1:metadata.name)
      PATRONI_POSTGRESQL_DATA_DIR:         /var/lib/postgresql/data
      PATRONI_KUBERNETES_NAMESPACE:        oxxxxxabixxxy-sxxxxr
      PATRONI_KUBERNETES_LABELS:           {app: txxxxxale-txxxxxxxb, cluster-name: timescale, release: timescale}
      PATRONI_SCOPE:                       timescale
      PGBACKREST_CONFIG:                   /etc/pgbackrest/pgbackrest.conf
      PGDATA:                              $(PATRONI_POSTGRESQL_DATA_DIR)
      PGHOST:                              /var/run/postgresql
      BOOTSTRAP_FROM_BACKUP:               0
    Mounts:
      /etc/certificate from certificate (ro)
      /etc/pgbackrest from pgbackrest (ro)
      /etc/pgbackrest/bootstrap from pgbackrest-bootstrap (ro)
      /etc/timescaledb/patroni.yaml from patroni-config (ro,path="patroni.yaml")
      /etc/timescaledb/post_init.d from dbscript (ro)
      /etc/timescaledb/scripts from timescaledb-scripts (ro)
      /var/lib/postgresql from storage-volume (rw)
      /var/lib/postgresql/wal from wal-volume (rw)
      /var/run/postgresql from socket-directory (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sws9z (ro)
  istio-proxy:
    Container ID:  containerd://eb5d2e38ea640fbd178de8e20a410ef2e4b5b6d1232cc340990dfc943b10cbbc
    Image:         docker.io/istio/proxyv2:1.15.0
    Image ID:      docker.io/istio/proxyv2@sha256:0201788b1550dd95cbf7d7075c939dd581169e715699d8e8f85ed2a5f6b35cd2
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
      --concurrency
      2
    State:          Running
      Started:      Fri, 24 Feb 2023 00:37:06 +0530
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      100m
      memory:   128Mi
    Readiness:  http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                    third-party-jwt
      PILOT_CERT_PROVIDER:           istiod
      CA_ADDR:                       cert-manager-istio-csr.cert-manager.svc:443
      POD_NAME:                      txxxxxxle-tixxxxxxb-1 (v1:metadata.name)
      POD_NAMESPACE:                 oxxxxxabixxxy-sxxxxr (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      SERVICE_ACCOUNT:                (v1:spec.serviceAccountName)
      HOST_IP:                        (v1:status.hostIP)
      PROXY_CONFIG:                  {}

      ISTIO_META_POD_PORTS:          [
                                         {"name":"patroni","containerPort":8008,"protocol":"TCP"}
                                         ,{"name":"postgresql","containerPort":5432,"protocol":"TCP"}
                                     ]
      ISTIO_META_APP_CONTAINERS:     timescaledb
      ISTIO_META_CLUSTER_ID:         Kubernetes
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_META_WORKLOAD_NAME:      txxxxxale-txxxxxxxb
      ISTIO_META_OWNER:              kubernetes://apis/apps/v1/namespaces/oxxxxxabixxxy-sxxxxr/statefulsets/txxxxxale-txxxxxxxb
      ISTIO_META_MESH_ID:            cluster.local
      TRUST_DOMAIN:                  cluster.local
    Mounts:
      /etc/istio/pod from istio-podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /var/lib/istio/data from istio-data (rw)
      /var/run/secrets/credential-uds from credential-socket (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sws9z (ro)
      /var/run/secrets/tokens from istio-token (rw)
      /var/run/secrets/workload-spiffe-credentials from workload-certs (rw)
      /var/run/secrets/workload-spiffe-uds from workload-socket (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  workload-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  credential-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  workload-certs:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  istio-envoy:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  istio-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  istio-podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
  istio-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  43200
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  storage-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  storage-volume-txxxxxxle-tixxxxxxb-1
    ReadOnly:   false
  wal-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  wal-volume-txxxxxxle-tixxxxxxb-1
    ReadOnly:   false
  socket-directory:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  patroni-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      txxxxxale-txxxxxxxb-patroni
    Optional:  false
  timescaledb-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      txxxxxale-txxxxxxxb-scripts
    Optional:  false
  dbscript:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      txxxxxale-txxxxxxxb-dbscripts
    Optional:  false
  post-init:
    Type:                Projected (a volume that contains injected data from multiple sources)
    ConfigMapName:       custom-init-scripts
    ConfigMapOptional:   0xc000cf12b9
    SecretName:          custom-secret-scripts
    SecretOptionalName:  0xc000cf12ba
  pgbouncer:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      txxxxxale-txxxxxxxb-pgbouncer
    Optional:  true
  pgbackrest:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      txxxxxale-txxxxxxxb-pgbackrest
    Optional:  true
  certificate:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  timescale-certificate
    Optional:    false
  pgbackrest-bootstrap:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  pgbackrest-bootstrap
    Optional:    true
  kube-api-access-sws9z:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
nikolic-milan commented 1 year ago

This is also an issue on chart version 0.33.1.

mshivanna commented 1 year ago

Has anybody found a solution to this issue?

ehteshaamkazi commented 1 year ago

I no longer see this issue in our deployments. We are currently on timescale chart version 0.33.2 with timescaledb-ha:pg14.6-ts2.9.1-p1.