apecloud / kubeblocks

KubeBlocks is an open-source control plane software that runs and manages databases, message queues and other stateful applications on K8s.
https://kubeblocks.io
GNU Affero General Public License v3.0

[BUG] pg cluster create failed: Kubernetes RBAC doesn't allow GET access to the 'kubernetes' endpoint in the 'default' namespace #3046

Closed · JashBook closed this issue 1 year ago

JashBook commented 1 year ago

Describe the bug
PostgreSQL cluster creation failed: Kubernetes RBAC doesn't allow GET access to the 'kubernetes' endpoint in the 'default' namespace.

Warning Unhealthy 14m kubelet Readiness probe failed: {"event":"Failed","message":"error executing select pg_is_in_recovery();: failed to connect to host=localhost user=postgres database=postgres: dial error (dial tcp [::1]:5432: connect: connection refused)","originalRole":""}

➜  ~ kbcli version
Kubernetes: v1.24.6-aliyun.1
KubeBlocks: 0.5.0-beta.15
kbcli: 0.5.0-beta.15

To Reproduce
Steps to reproduce the behavior:

  1. Create the cluster
    ➜  ~ kubectl apply -f - << EOF
    apiVersion: apps.kubeblocks.io/v1alpha1
    kind: Cluster
    metadata:
      name: postgresql-cluster
    spec:
      clusterDefinitionRef: postgresql
      clusterVersionRef: postgresql-12.14.0
      terminationPolicy: WipeOut
      componentSpecs:
        - name: postgresql
          componentDefRef: postgresql
          replicas: 2
          volumeClaimTemplates:
            - name: data
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 300Gi
    EOF
  2. See cluster and pod status
    
    ➜  ~ kubectl get cluster
    NAME                 CLUSTER-DEFINITION   VERSION              TERMINATION-POLICY   STATUS   AGE
    postgresql-cluster   postgresql           postgresql-12.14.0   WipeOut              Failed   15m
    ➜  ~ kubectl get pod,pvc,cm
    NAME                                  READY   STATUS    RESTARTS   AGE
    pod/postgresql-cluster-postgresql-0   3/4     Running   0          13m
    pod/postgresql-cluster-postgresql-1   3/4     Running   0          13m

    NAME                                                        STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS         AGE
    persistentvolumeclaim/data-postgresql-cluster-postgresql-0   Bound    d-bp1cs832jejcrmglo7io   300Gi      RWO            alicloud-disk-essd   13m
    persistentvolumeclaim/data-postgresql-cluster-postgresql-1   Bound    d-bp1bldgclx2els837vmv   300Gi      RWO            alicloud-disk-essd   13m

    NAME                                                                DATA   AGE
    configmap/kube-root-ca.crt                                          1      26m
    configmap/patroni-reload-script-postgresql-cluster                  3      15m
    configmap/postgresql-cluster-postgresql-env                         5      15m
    configmap/postgresql-cluster-postgresql-postgresql-configuration    4      15m
    configmap/postgresql-cluster-postgresql-postgresql-custom-metrics   1      15m
    configmap/postgresql-cluster-postgresql-postgresql-scripts          4      15m

  3. See the error

describe pod

➜  ~ kubectl describe pod postgresql-cluster-postgresql-0
Name:             postgresql-cluster-postgresql-0
Namespace:        default
Priority:         0
Service Account:  default
Node:             cn-hangzhou.192.168.0.112/192.168.0.112
Start Time:       Thu, 04 May 2023 10:30:44 +0800
Labels:           app.kubernetes.io/component=postgresql
                  app.kubernetes.io/instance=postgresql-cluster
                  app.kubernetes.io/managed-by=kubeblocks
                  app.kubernetes.io/name=postgresql
                  app.kubernetes.io/version=postgresql-12.14.0
                  apps.kubeblocks.io/component-name=postgresql
                  apps.kubeblocks.io/workload-type=Replication
                  apps.kubeblocks.postgres.patroni/scope=postgresql-cluster-postgresql-patroni
                  controller-revision-hash=postgresql-cluster-postgresql-54469c8864
                  kubeblocks.io/role=primary
                  statefulset.kubernetes.io/pod-name=postgresql-cluster-postgresql-0
Annotations:      k8s.aliyun.com/pod-ips: 192.168.0.132
                  kubernetes.io/psp: ack.privileged
Status:           Running
IP:               192.168.0.132
IPs:
  IP:           192.168.0.132
Controlled By:  StatefulSet/postgresql-cluster-postgresql
Init Containers:
  pg-init-container:
    Container ID:  containerd://1d5e183eb490a25189321e4aff4b820bb3e781c6ccf8cf4f436730711f6ce9bc
    Image:         registry.cn-hangzhou.aliyuncs.com/apecloud/spilo:12.14.0
    Image ID:      registry.cn-hangzhou.aliyuncs.com/apecloud/spilo@sha256:5e0b1211207b158ed43c109e5ff1be830e1bf5e7aff1f0dd3c90966804c5a143
    Port:          <none>
    Host Port:     <none>
    Command:
      /kb-scripts/init_container.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 04 May 2023 10:31:32 +0800
      Finished:     Thu, 04 May 2023 10:31:32 +0800
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      postgresql-cluster-postgresql-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:           postgresql-cluster-postgresql-0 (v1:metadata.name)
      KB_NAMESPACE:          default (v1:metadata.namespace)
      KB_SA_NAME:             (v1:spec.serviceAccountName)
      KB_NODENAME:            (v1:spec.nodeName)
      KB_HOST_IP:             (v1:status.hostIP)
      KB_POD_IP:              (v1:status.podIP)
      KB_POD_IPS:             (v1:status.podIPs)
      KB_HOSTIP:              (v1:status.hostIP)
      KB_PODIP:               (v1:status.podIP)
      KB_PODIPS:              (v1:status.podIPs)
      KB_CLUSTER_NAME:       postgresql-cluster
      KB_COMP_NAME:          postgresql
      KB_CLUSTER_COMP_NAME:  postgresql-cluster-postgresql
      KB_POD_FQDN:           $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
    Mounts:
      /home/postgres/conf from postgresql-config (rw)
      /home/postgres/pgdata from data (rw)
      /kb-scripts from scripts (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cfzn9 (ro)
Containers:
  postgresql:
    Container ID:  containerd://9947bd034761a1f5409bcd3d7448cb77bc05704bb9ed375547a596618a37ec1f
    Image:         registry.cn-hangzhou.aliyuncs.com/apecloud/spilo:12.14.0
    Image ID:      registry.cn-hangzhou.aliyuncs.com/apecloud/spilo@sha256:5e0b1211207b158ed43c109e5ff1be830e1bf5e7aff1f0dd3c90966804c5a143
    Ports:         5432/TCP, 8008/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /kb-scripts/setup.sh
    State:          Running
      Started:      Thu, 04 May 2023 10:31:36 +0800
    Ready:          False
    Restart Count:  0
    Readiness:      exec [/bin/sh -c -ee exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
[ -f /postgresql/tmp/.initialized ] || [ -f /postgresql/.initialized ]
] delay=25s timeout=5s period=30s #success=1 #failure=3
    Environment Variables from:
      postgresql-cluster-postgresql-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:                postgresql-cluster-postgresql-0 (v1:metadata.name)
      KB_NAMESPACE:               default (v1:metadata.namespace)
      KB_SA_NAME:                  (v1:spec.serviceAccountName)
      KB_NODENAME:                 (v1:spec.nodeName)
      KB_HOST_IP:                  (v1:status.hostIP)
      KB_POD_IP:                   (v1:status.podIP)
      KB_POD_IPS:                  (v1:status.podIPs)
      KB_HOSTIP:                   (v1:status.hostIP)
      KB_PODIP:                    (v1:status.podIP)
      KB_PODIPS:                   (v1:status.podIPs)
      KB_CLUSTER_NAME:            postgresql-cluster
      KB_COMP_NAME:               postgresql
      KB_CLUSTER_COMP_NAME:       postgresql-cluster-postgresql
      KB_POD_FQDN:                $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
      DCS_ENABLE_KUBERNETES_API:  true
      KUBERNETES_USE_CONFIGMAPS:  true
      SCOPE:                      $(KB_CLUSTER_NAME)-$(KB_COMP_NAME)-patroni
      KUBERNETES_SCOPE_LABEL:     apps.kubeblocks.postgres.patroni/scope
      KUBERNETES_ROLE_LABEL:      apps.kubeblocks.postgres.patroni/role
      KUBERNETES_LABELS:          {"app.kubernetes.io/instance":"$(KB_CLUSTER_NAME)","apps.kubeblocks.io/component-name":"$(KB_COMP_NAME)"}
      RESTORE_DATA_DIR:           /home/postgres/pgdata/kb_restore
      KB_PG_CONFIG_PATH:          /home/postgres/conf/postgresql.conf
      SPILO_CONFIGURATION:        bootstrap:
                                    initdb:

➜  ~ kubectl describe pod postgresql-cluster-postgresql-1
Name:             postgresql-cluster-postgresql-1
Namespace:        default
Priority:         0
Service Account:  default
Node:             cn-hangzhou.192.168.0.111/192.168.0.111
Start Time:       Thu, 04 May 2023 10:30:44 +0800
Labels:           app.kubernetes.io/component=postgresql
                  app.kubernetes.io/instance=postgresql-cluster
                  app.kubernetes.io/managed-by=kubeblocks
                  app.kubernetes.io/name=postgresql
                  app.kubernetes.io/version=postgresql-12.14.0
                  apps.kubeblocks.io/component-name=postgresql
                  apps.kubeblocks.io/workload-type=Replication
                  apps.kubeblocks.postgres.patroni/scope=postgresql-cluster-postgresql-patroni
                  controller-revision-hash=postgresql-cluster-postgresql-54469c8864
                  kubeblocks.io/role=secondary
                  statefulset.kubernetes.io/pod-name=postgresql-cluster-postgresql-1
Annotations:      k8s.aliyun.com/pod-ips: 192.168.0.141
                  kubernetes.io/psp: ack.privileged
Status:           Running
IP:               192.168.0.141
IPs:
  IP:           192.168.0.141
Controlled By:  StatefulSet/postgresql-cluster-postgresql
Init Containers:
  pg-init-container:
    Container ID:  containerd://46f2ca4bfff42f205cb9b2eee6f9819d6237acb3bcf0c158230a162cb5491adc
    Image:         registry.cn-hangzhou.aliyuncs.com/apecloud/spilo:12.14.0
    Image ID:      registry.cn-hangzhou.aliyuncs.com/apecloud/spilo@sha256:5e0b1211207b158ed43c109e5ff1be830e1bf5e7aff1f0dd3c90966804c5a143
    Port:          <none>
    Host Port:     <none>
    Command:
      /kb-scripts/init_container.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 04 May 2023 10:31:24 +0800
      Finished:     Thu, 04 May 2023 10:31:24 +0800
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      postgresql-cluster-postgresql-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:           postgresql-cluster-postgresql-1 (v1:metadata.name)
      KB_NAMESPACE:          default (v1:metadata.namespace)
      KB_SA_NAME:             (v1:spec.serviceAccountName)
      KB_NODENAME:            (v1:spec.nodeName)
      KB_HOST_IP:             (v1:status.hostIP)
      KB_POD_IP:              (v1:status.podIP)
      KB_POD_IPS:             (v1:status.podIPs)
      KB_HOSTIP:              (v1:status.hostIP)
      KB_PODIP:               (v1:status.podIP)
      KB_PODIPS:              (v1:status.podIPs)
      KB_CLUSTER_NAME:       postgresql-cluster
      KB_COMP_NAME:          postgresql
      KB_CLUSTER_COMP_NAME:  postgresql-cluster-postgresql
      KB_POD_FQDN:           $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
    Mounts:
      /home/postgres/conf from postgresql-config (rw)
      /home/postgres/pgdata from data (rw)
      /kb-scripts from scripts (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vhd7z (ro)
Containers:
  postgresql:
    Container ID:  containerd://a3c0003fbd7c14bbbc1459c02b004cd566fd9bb604aebf56572c424fd3ff7275
    Image:         registry.cn-hangzhou.aliyuncs.com/apecloud/spilo:12.14.0
    Image ID:      registry.cn-hangzhou.aliyuncs.com/apecloud/spilo@sha256:5e0b1211207b158ed43c109e5ff1be830e1bf5e7aff1f0dd3c90966804c5a143
    Ports:         5432/TCP, 8008/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /kb-scripts/setup.sh
    State:          Running
      Started:      Thu, 04 May 2023 10:31:28 +0800
    Ready:          False
    Restart Count:  0
    Readiness:      exec [/bin/sh -c -ee exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
[ -f /postgresql/tmp/.initialized ] || [ -f /postgresql/.initialized ]
] delay=25s timeout=5s period=30s #success=1 #failure=3
    Environment Variables from:
      postgresql-cluster-postgresql-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:                postgresql-cluster-postgresql-1 (v1:metadata.name)
      KB_NAMESPACE:               default (v1:metadata.namespace)
      KB_SA_NAME:                  (v1:spec.serviceAccountName)
      KB_NODENAME:                 (v1:spec.nodeName)
      KB_HOST_IP:                  (v1:status.hostIP)
      KB_POD_IP:                   (v1:status.podIP)
      KB_POD_IPS:                  (v1:status.podIPs)
      KB_HOSTIP:                   (v1:status.hostIP)
      KB_PODIP:                    (v1:status.podIP)
      KB_PODIPS:                   (v1:status.podIPs)
      KB_CLUSTER_NAME:            postgresql-cluster
      KB_COMP_NAME:               postgresql
      KB_CLUSTER_COMP_NAME:       postgresql-cluster-postgresql
      KB_POD_FQDN:                $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
      DCS_ENABLE_KUBERNETES_API:  true
      KUBERNETES_USE_CONFIGMAPS:  true
      SCOPE:                      $(KB_CLUSTER_NAME)-$(KB_COMP_NAME)-patroni
      KUBERNETES_SCOPE_LABEL:     apps.kubeblocks.postgres.patroni/scope
      KUBERNETES_ROLE_LABEL:      apps.kubeblocks.postgres.patroni/role
      KUBERNETES_LABELS:          {"app.kubernetes.io/instance":"$(KB_CLUSTER_NAME)","apps.kubeblocks.io/component-name":"$(KB_COMP_NAME)"}
      RESTORE_DATA_DIR:           /home/postgres/pgdata/kb_restore
      KB_PG_CONFIG_PATH:          /home/postgres/conf/postgresql.conf
      SPILO_CONFIGURATION:        bootstrap:
                                    initdb:
                                      - auth-host: md5
                                      - auth-local: trust

      ALLOW_NOSSL:                true
      PGROOT:                     /home/postgres/pgdata/pgroot
      POD_IP:                      (v1:status.podIP)
      POD_NAMESPACE:              default (v1:metadata.namespace)
      PGUSER_SUPERUSER:           <set to the key 'username' in secret 'postgresql-cluster-conn-credential'>  Optional: false
      PGPASSWORD_SUPERUSER:       <set to the key 'password' in secret 'postgresql-cluster-conn-credential'>  Optional: false
      PGUSER_ADMIN:               superadmin
      PGPASSWORD_ADMIN:           <set to the key 'password' in secret 'postgresql-cluster-conn-credential'>  Optional: false
      PGUSER_STANDBY:             standby
      PGPASSWORD_STANDBY:         <set to the key 'password' in secret 'postgresql-cluster-conn-credential'>  Optional: false
      PGUSER:                     <set to the key 'username' in secret 'postgresql-cluster-conn-credential'>  Optional: false
      PGPASSWORD:                 <set to the key 'password' in secret 'postgresql-cluster-conn-credential'>  Optional: false
    Mounts:
      /dev/shm from dshm (rw)
      /home/postgres/conf from postgresql-config (rw)
      /home/postgres/pgdata from data (rw)
      /kb-scripts from scripts (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vhd7z (ro)
  metrics:
    Container ID:  containerd://1fafd0b07f57b4006e2291238192a9801b8f9daf37da817959cae35086383c6b
    Image:         registry.cn-hangzhou.aliyuncs.com/apecloud/postgres-exporter:0.11.1-debian-11-r66
    Image ID:      registry.cn-hangzhou.aliyuncs.com/apecloud/postgres-exporter@sha256:17c0bf751b9db5476a83a252caab6f26109a786b93fd83d4a73a2ea9c33e1e69
    Port:          9187/TCP
    Host Port:     0/TCP
    Command:
      /opt/bitnami/postgres-exporter/bin/postgres_exporter
      --auto-discover-databases
      --extend.query-path=/opt/conf/custom-metrics.yaml
      --exclude-databases=template0,template1
      --log.level=info
    State:          Running
      Started:      Thu, 04 May 2023 10:31:38 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:http-metrics/ delay=5s timeout=5s period=10s #success=1 #failure=6
    Readiness:      http-get http://:http-metrics/ delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment Variables from:
      postgresql-cluster-postgresql-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:           postgresql-cluster-postgresql-1 (v1:metadata.name)
      KB_NAMESPACE:          default (v1:metadata.namespace)
      KB_SA_NAME:             (v1:spec.serviceAccountName)
      KB_NODENAME:            (v1:spec.nodeName)
      KB_HOST_IP:             (v1:status.hostIP)
      KB_POD_IP:              (v1:status.podIP)
      KB_POD_IPS:             (v1:status.podIPs)
      KB_HOSTIP:              (v1:status.hostIP)
      KB_PODIP:               (v1:status.podIP)
      KB_PODIPS:              (v1:status.podIPs)
      KB_CLUSTER_NAME:       postgresql-cluster
      KB_COMP_NAME:          postgresql
      KB_CLUSTER_COMP_NAME:  postgresql-cluster-postgresql
      KB_POD_FQDN:           $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
      DATA_SOURCE_URI:       127.0.0.1:5432/postgres?sslmode=disable
      DATA_SOURCE_PASS:      <set to the key 'password' in secret 'postgresql-cluster-conn-credential'>  Optional: false
      DATA_SOURCE_USER:      <set to the key 'username' in secret 'postgresql-cluster-conn-credential'>  Optional: false
    Mounts:
      /opt/conf from postgresql-custom-metrics (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vhd7z (ro)
  kb-checkrole:
    Container ID:  containerd://1bd6b58d22a336a74fb1a7b699a820f41e4503eed60e96499b0a8ae633a79293
    Image:         registry.cn-hangzhou.aliyuncs.com/apecloud/kubeblocks-tools:0.5.0-beta.15
    Image ID:      registry.cn-hangzhou.aliyuncs.com/apecloud/kubeblocks-tools@sha256:c983538b5cf64e1ca5a55382067bee3bf2f275f6afe9c5c3eefd3caa141820a4
    Ports:         3501/TCP, 50001/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      probe
      --app-id
      batch-sdk
      --dapr-http-port
      3501
      --dapr-grpc-port
      50001
      --app-protocol
      http
      --log-level
      info
      --config
      /config/probe/config.yaml
      --components-path
      /config/probe/components
    State:          Running
      Started:      Thu, 04 May 2023 10:31:38 +0800
    Ready:          True
    Restart Count:  0
    Readiness:      exec [curl -X POST --max-time 1 --fail-with-body --silent -H Content-ComponentDefRef: application/json http://localhost:3501/v1.0/bindings/postgresql -d {"operation": "checkRole", "metadata":{"sql":""}}] delay=0s timeout=1s period=1s #success=1 #failure=2
    Startup:        tcp-socket :3501 delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
      postgresql-cluster-postgresql-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:                postgresql-cluster-postgresql-1 (v1:metadata.name)
      KB_NAMESPACE:               default (v1:metadata.namespace)
      KB_SA_NAME:                  (v1:spec.serviceAccountName)
      KB_NODENAME:                 (v1:spec.nodeName)
      KB_HOST_IP:                  (v1:status.hostIP)
      KB_POD_IP:                   (v1:status.podIP)
      KB_POD_IPS:                  (v1:status.podIPs)
      KB_HOSTIP:                   (v1:status.hostIP)
      KB_PODIP:                    (v1:status.podIP)
      KB_PODIPS:                   (v1:status.podIPs)
      KB_CLUSTER_NAME:            postgresql-cluster
      KB_COMP_NAME:               postgresql
      KB_CLUSTER_COMP_NAME:       postgresql-cluster-postgresql
      KB_POD_FQDN:                $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
      KB_SERVICE_USER:            <set to the key 'username' in secret 'postgresql-cluster-conn-credential'>  Optional: false
      KB_SERVICE_PASSWORD:        <set to the key 'password' in secret 'postgresql-cluster-conn-credential'>  Optional: false
      KB_SERVICE_PORT:            5432
      KB_SERVICE_ROLES:           {}
      KB_SERVICE_CHARACTER_TYPE:  postgresql
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vhd7z (ro)
  config-manager:
    Container ID:  containerd://a3a2311c3ca973957c6e8e01545989e2519606790e2d7fbe3c8773b15518881a
    Image:         registry.cn-hangzhou.aliyuncs.com/apecloud/kubeblocks-tools:0.5.0-beta.15
    Image ID:      registry.cn-hangzhou.aliyuncs.com/apecloud/kubeblocks-tools@sha256:c983538b5cf64e1ca5a55382067bee3bf2f275f6afe9c5c3eefd3caa141820a4
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/reloader
    Args:
      --operator-update-enable
      --log-level
      info
      --tcp
      9901
      --notify-type
      tpl
      --tpl-config
      /opt/config/reload/reload.yaml
    State:          Running
      Started:      Thu, 04 May 2023 10:31:38 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      CONFIG_MANAGER_POD_IP:   (v1:status.podIP)
      DB_TYPE:                postgresql
    Mounts:
      /home/postgres/conf from postgresql-config (rw)
      /opt/config/reload from reload-manager-reload (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vhd7z (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-postgresql-cluster-postgresql-1
    ReadOnly:   false
  dshm:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  postgresql-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      postgresql-cluster-postgresql-postgresql-configuration
    Optional:  false
  postgresql-custom-metrics:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      postgresql-cluster-postgresql-postgresql-custom-metrics
    Optional:  false
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      postgresql-cluster-postgresql-postgresql-scripts
    Optional:  false
  reload-manager-reload:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      patroni-reload-script-postgresql-cluster
    Optional:  false
  kube-api-access-vhd7z:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 kb-data=true:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                     From                     Message
  ----     ------                  ----                    ----                     -------
  Warning  FailedScheduling        7m28s                   default-scheduler        0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
  Normal   Scheduled               7m26s                   default-scheduler        Successfully assigned default/postgresql-cluster-postgresql-1 to cn-hangzhou.192.168.0.111
  Normal   SuccessfulAttachVolume  7m26s                   attachdetach-controller  AttachVolume.Attach succeeded for volume "d-bp1bldgclx2els837vmv"
  Normal   AllocIPSucceed          7m19s                   terway-daemon            Alloc IP 192.168.0.141/24
  Normal   Pulling                 7m19s                   kubelet                  Pulling image "registry.cn-hangzhou.aliyuncs.com/apecloud/spilo:12.14.0"
  Normal   Pulled                  6m47s                   kubelet                  Successfully pulled image "registry.cn-hangzhou.aliyuncs.com/apecloud/spilo:12.14.0" in 31.749739821s (31.749749288s including waiting)
  Normal   Created                 6m47s                   kubelet                  Created container pg-init-container
  Normal   Started                 6m47s                   kubelet                  Started container pg-init-container
  Normal   Pulled                  6m43s                   kubelet                  Container image "registry.cn-hangzhou.aliyuncs.com/apecloud/spilo:12.14.0" already present on machine
  Normal   Created                 6m43s                   kubelet                  Created container postgresql
  Normal   Started                 6m43s                   kubelet                  Started container postgresql
  Normal   Pulling                 6m43s                   kubelet                  Pulling image "registry.cn-hangzhou.aliyuncs.com/apecloud/postgres-exporter:0.11.1-debian-11-r66"
  Normal   Pulled                  6m34s                   kubelet                  Successfully pulled image "registry.cn-hangzhou.aliyuncs.com/apecloud/postgres-exporter:0.11.1-debian-11-r66" in 9.289644786s (9.289653531s including waiting)
  Normal   Created                 6m33s                   kubelet                  Created container metrics
  Normal   Started                 6m33s                   kubelet                  Started container metrics
  Normal   Pulled                  6m33s                   kubelet                  Container image "registry.cn-hangzhou.aliyuncs.com/apecloud/kubeblocks-tools:0.5.0-beta.15" already present on machine
  Normal   Created                 6m33s                   kubelet                  Created container kb-checkrole
  Normal   Started                 6m33s                   kubelet                  Started container kb-checkrole
  Normal   Pulled                  6m33s                   kubelet                  Container image "registry.cn-hangzhou.aliyuncs.com/apecloud/kubeblocks-tools:0.5.0-beta.15" already present on machine
  Normal   Created                 6m33s                   kubelet                  Created container config-manager
  Normal   Started                 6m33s                   kubelet                  Started container config-manager
  Warning  Unhealthy               6m28s                   kubelet                  Readiness probe failed: {"event":"Failed","message":"error executing select pg_is_in_recovery();: failed to connect to `host=localhost user=postgres database=postgres`: dial error (dial tcp [::1]:5432: connect: connection refused)","originalRole":""}
  Warning  Unhealthy               2m19s (x11 over 5m49s)  kubelet                  Readiness probe failed: 127.0.0.1:5432 - no response

pod logs

➜  ~ kubectl logs postgresql-cluster-postgresql-0
Defaulted container "postgresql" out of: postgresql, metrics, kb-checkrole, config-manager, pg-init-container (init)
+ KB_PRIMARY_POD_NAME_PREFIX=postgresql-cluster-postgresql-0
+ '[' postgresql-cluster-postgresql-0 '!=' postgresql-cluster-postgresql-0 ']'
+ '[' -f /home/postgres/pgdata/kb_restore/kb_restore.signal ']'
+ python3 /kb-scripts/generate_patroni_yaml.py tmp_patroni.yaml
++ cat tmp_patroni.yaml
+ export 'SPILO_CONFIGURATION=bootstrap:
  initdb:
  - auth-host: md5
  - auth-local: trust
postgresql:
  config_dir: /home/postgres/pgdata/conf
  custom_conf: /home/postgres/conf/postgresql.conf
  parameters:
    archive_command: wal_dir=/home/postgres/pgdata/pgroot/arcwal; wal_dir_today=${wal_dir}/$(date
      +%Y%m%d); [[ $(date +%H%M) == 1200 ]] && rm -rf ${wal_dir}/$(date -d"yesterday"
      +%Y%m%d); mkdir -p ${wal_dir_today} && gzip -kqc %p > ${wal_dir_today}/%f.gz
  pg_hba:
  - host     all             all             0.0.0.0/0               trust
  - host     all             all             ::/0                    trust
  - local    all             all                                     trust
  - host     all             all             127.0.0.1/32            trust
  - host     all             all             ::1/128                 trust
  - local     replication     all                                    trust
  - host      replication     all             0.0.0.0/0               md5
  - host      replication     all             ::/0                    md5'
+ SPILO_CONFIGURATION='bootstrap:
  initdb:
  - auth-host: md5
  - auth-local: trust
postgresql:
  config_dir: /home/postgres/pgdata/conf
  custom_conf: /home/postgres/conf/postgresql.conf
  parameters:
    archive_command: wal_dir=/home/postgres/pgdata/pgroot/arcwal; wal_dir_today=${wal_dir}/$(date
      +%Y%m%d); [[ $(date +%H%M) == 1200 ]] && rm -rf ${wal_dir}/$(date -d"yesterday"
      +%Y%m%d); mkdir -p ${wal_dir_today} && gzip -kqc %p > ${wal_dir_today}/%f.gz
  pg_hba:
  - host     all             all             0.0.0.0/0               trust
  - host     all             all             ::/0                    trust
  - local    all             all                                     trust
  - host     all             all             127.0.0.1/32            trust
  - host     all             all             ::1/128                 trust
  - local     replication     all                                    trust
  - host      replication     all             0.0.0.0/0               md5
  - host      replication     all             ::/0                    md5'
+ exec /launch.sh init
2023-05-04 02:31:37,208 - bootstrapping - INFO - Figuring out my environment (Google? AWS? Openstack? Local?)
2023-05-04 02:31:39,212 - bootstrapping - INFO - Could not connect to 169.254.169.254, assuming local Docker setup
2023-05-04 02:31:39,213 - bootstrapping - INFO - No meta-data available for this provider
2023-05-04 02:31:39,213 - bootstrapping - INFO - Looks like you are running local
2023-05-04 02:31:39,250 - bootstrapping - INFO - Configuring wal-e
2023-05-04 02:31:39,250 - bootstrapping - INFO - Configuring pgqd
2023-05-04 02:31:39,250 - bootstrapping - INFO - Configuring pam-oauth2
2023-05-04 02:31:39,250 - bootstrapping - INFO - No PAM_OAUTH2 configuration was specified, skipping
2023-05-04 02:31:39,250 - bootstrapping - INFO - Configuring log
2023-05-04 02:31:39,250 - bootstrapping - INFO - Configuring certificate
2023-05-04 02:31:39,250 - bootstrapping - INFO - Generating ssl self-signed certificate
2023-05-04 02:31:39,614 - bootstrapping - INFO - Configuring bootstrap
2023-05-04 02:31:39,615 - bootstrapping - INFO - Configuring standby-cluster
2023-05-04 02:31:39,615 - bootstrapping - INFO - Configuring patroni
2023-05-04 02:31:39,622 - bootstrapping - INFO - Writing to file /run/postgres.yml
2023-05-04 02:31:39,623 - bootstrapping - INFO - Configuring crontab
2023-05-04 02:31:39,623 - bootstrapping - INFO - Skipping creation of renice cron job due to lack of SYS_NICE capability
2023-05-04 02:31:39,623 - bootstrapping - INFO - Configuring pgbouncer
2023-05-04 02:31:39,623 - bootstrapping - INFO - No PGBOUNCER_CONFIGURATION was specified, skipping
bootstrap:
  dcs:
    postgresql:
      parameters:
        archive_mode: 'on'
        autovacuum_analyze_scale_factor: '0.05'
        autovacuum_max_workers: '1'
        autovacuum_vacuum_scale_factor: '0.1'
        checkpoint_completion_target: '0.95'
        hot_standby: 'on'
        log_autovacuum_min_duration: 1s
        log_checkpoints: 'True'
        log_lock_waits: 'True'
        log_min_duration_statement: '100'
        log_statement: ddl
        max_connections: '10000'
        max_replication_slots: '16'
        max_wal_senders: '24'
        track_functions: all
        wal_level: replica
        wal_log_hints: 'on'
  initdb:
  - auth-host: md5
  - auth-local: trust
postgresql:
  config_dir: /home/postgres/pgdata/conf
  custom_conf: /home/postgres/conf/postgresql.conf
  parameters:
    archive_command: wal_dir=/home/postgres/pgdata/pgroot/arcwal; wal_dir_today=${wal_dir}/$(date
      +%Y%m%d); [[ $(date +%H%M) == 1200 ]] && rm -rf ${wal_dir}/$(date -d"yesterday"
      +%Y%m%d); mkdir -p ${wal_dir_today} && gzip -kqc %p > ${wal_dir_today}/%f.gz
    pg_stat_statements.track_utility: 'False'
    shared_buffers: 1GB
  pg_hba:
  - host     all             all             0.0.0.0/0               trust
  - host     all             all             ::/0                    trust
  - local    all             all                                     trust
  - host     all             all             127.0.0.1/32            trust
  - host     all             all             ::1/128                 trust
  - local     replication     all                                    trust
  - host      replication     all             0.0.0.0/0               md5
  - host      replication     all             ::/0                    md5

2023-05-04 02:31:40,841 WARNING: Kubernetes RBAC doesn't allow GET access to the 'kubernetes' endpoint in the 'default' namespace. Disabling 'bypass_api_service'.
2023-05-04 02:31:41,859 ERROR: ObjectCache.run ApiException()
2023-05-04 02:31:41,861 ERROR: ObjectCache.run ApiException()
...
2023-05-04 02:31:50,845 ERROR: get_cluster
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/patroni/dcs/kubernetes.py", line 935, in __load_cluster
    self._wait_caches(stop_time)
  File "/usr/local/lib/python3.10/dist-packages/patroni/dcs/kubernetes.py", line 821, in _wait_caches
    raise RetryFailedError('Exceeded retry deadline')
patroni.utils.RetryFailedError: 'Exceeded retry deadline'
2023-05-04 02:31:50,846 WARNING: Can not get cluster from dcs
...
2023-05-04 02:32:05,851 ERROR: get_cluster
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/patroni/dcs/kubernetes.py", line 935, in __load_cluster
    self._wait_caches(stop_time)
  File "/usr/local/lib/python3.10/dist-packages/patroni/dcs/kubernetes.py", line 821, in _wait_caches
    raise RetryFailedError('Exceeded retry deadline')
patroni.utils.RetryFailedError: 'Exceeded retry deadline'
2023-05-04 02:32:05,851 WARNING: Can not get cluster from dcs
...
2023-05-04 02:32:20,856 ERROR: get_cluster
...
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/patroni/dcs/kubernetes.py", line 935, in __load_cluster
    self._wait_caches(stop_time)
  File "/usr/local/lib/python3.10/dist-packages/patroni/dcs/kubernetes.py", line 821, in _wait_caches
    raise RetryFailedError('Exceeded retry deadline')
patroni.utils.RetryFailedError: 'Exceeded retry deadline'
2023-05-04 02:34:05,891 WARNING: Can not get cluster from dcs
...
➜  ~ kubectl logs postgresql-cluster-postgresql-1
Defaulted container "postgresql" out of: postgresql, metrics, kb-checkrole, config-manager, pg-init-container (init)
+ KB_PRIMARY_POD_NAME_PREFIX=postgresql-cluster-postgresql-0
+ '[' postgresql-cluster-postgresql-0 '!=' postgresql-cluster-postgresql-1 ']'
+ pg_isready -U postgres -h postgresql-cluster-postgresql-0.postgresql-cluster-postgresql-headless -p 5432
postgresql-cluster-postgresql-0.postgresql-cluster-postgresql-headless:5432 - no response
+ sleep 5
+ pg_isready -U postgres -h postgresql-cluster-postgresql-0.postgresql-cluster-postgresql-headless -p 5432
+ sleep 5
postgresql-cluster-postgresql-0.postgresql-cluster-postgresql-headless:5432 - no response
+ pg_isready -U postgres -h postgresql-cluster-postgresql-0.postgresql-cluster-postgresql-headless -p 5432
...
+ sleep 5
+ pg_isready -U postgres -h postgresql-cluster-postgresql-0.postgresql-cluster-postgresql-headless -p 5432
postgresql-cluster-postgresql-0.postgresql-cluster-postgresql-headless:5432 - no response
...

Expected behavior pg cluster create succeed.


Y-Rookie commented 1 year ago

There are currently three ways to create a pg cluster (helm, kbcli, or kubectl):

  1. When the cluster is created through helm or kbcli, the required RBAC resources (ServiceAccount, Role, RoleBinding) are created automatically; each cluster gets its own set of RBAC resources.
  2. When the cluster is created with `kubectl apply`, you need to create the RBAC resources (ServiceAccount, Role, RoleBinding) manually and set the corresponding `serviceAccountName` in the cluster YAML.
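When the cluster was created with `kubectl apply` and RBAC is in doubt, one quick sanity check is to impersonate the ServiceAccount with `kubectl auth can-i`. This is a sketch, not from the issue itself; the namespace `default` and ServiceAccount name `kb-sa-postgres` are assumptions matching the example manifests in this thread.

```shell
# Impersonate the cluster's ServiceAccount (name and namespace are assumptions
# taken from the example manifests) and check the verbs Patroni needs.
ns=default
sa=kb-sa-postgres
subject="system:serviceaccount:${ns}:${sa}"

kubectl auth can-i get endpoints -n "$ns" --as "$subject"
kubectl auth can-i list pods -n "$ns" --as "$subject"
kubectl auth can-i patch configmaps -n "$ns" --as "$subject"
```

If any of these prints `no`, Patroni's Kubernetes DCS cannot read or update the leader endpoint, which is consistent with the RBAC warning and the `ObjectCache.run ApiException()` loop in the logs above.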
Y-Rookie commented 1 year ago
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: postgres
  namespace: default
spec:
  clusterDefinitionRef: postgresql
  clusterVersionRef: postgresql-14.7.1
  componentSpecs:
  - componentDefRef: postgresql
    enabledLogs:
    - running
    monitor: false
    name: postgresql
    primaryIndex: 0
    replicas: 2
    serviceAccountName: kb-sa-postgres
    switchPolicy:
      type: Noop
    volumeClaimTemplates:
    - name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  terminationPolicy: Delete
Y-Rookie commented 1 year ago
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    meta.helm.sh/release-name: postgres
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2023-05-01T14:27:21Z"
  labels:
    app.kubernetes.io/instance: postgres
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: pgcluster
    app.kubernetes.io/version: 14.7.1
    helm.sh/chart: pgcluster-0.5.0-alpha.8
  name: kb-sa-postgres
  namespace: default
  resourceVersion: "248822"
  uid: 52ebd87d-4c07-40ad-8b33-e441d3c5349c
secrets:
- name: kb-sa-postgres-token-j7442
Y-Rookie commented 1 year ago
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    meta.helm.sh/release-name: postgres
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2023-05-01T14:27:21Z"
  labels:
    app.kubernetes.io/instance: postgres
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: pgcluster
    app.kubernetes.io/version: 14.7.1
    helm.sh/chart: pgcluster-0.5.0-alpha.8
  name: kb-role-default-postgres
  namespace: default
  resourceVersion: "248818"
  uid: 776ecc77-e31b-41b0-a857-306d3e9abe13
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
  - get
  - list
  - patch
  - update
  - watch
  - delete
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - patch
  - update
  - create
  - list
  - watch
  - delete
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - patch
  - update
  - watch
Y-Rookie commented 1 year ago
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    meta.helm.sh/release-name: postgres
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2023-05-01T14:27:21Z"
  labels:
    app.kubernetes.io/instance: postgres
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: pgcluster
    app.kubernetes.io/version: 14.7.1
    helm.sh/chart: pgcluster-0.5.0-alpha.8
  name: kb-rolebinding-default-postgres
  namespace: default
  resourceVersion: "248819"
  uid: 43790de9-8bd1-4bd8-b627-0d7446783f31
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kb-role-default-postgres
subjects:
- kind: ServiceAccount
  name: kb-sa-postgres
  namespace: default
Y-Rookie commented 1 year ago

No longer reproducible; closing for now.