➜ ~ kbcli version
Kubernetes: v1.25.7-gke.1000
KubeBlocks: 0.5.0-beta.15
kbcli: 0.5.0-beta.15
Create a cluster with 2 replicas
➜ kbcli git:(main) ✗ kbcli cluster create yjtest1 --termination-policy=WipeOut --node-labels '"dev=true"' --tolerations '"key=dev,value=true,operator=Equal,effect=NoSchedule","key=large,value=true,operator=Equal,effect=NoSchedule"' --monitor=false --enable-all-logs=false --cluster-definition=postgresql --set cpu=100m,memory=0.5Gi,replicas=2,storage=1Gi --namespace kubeblocks
Info: --cluster-version is not specified, ClusterVersion postgresql-12.14.0 is applied by default
Cluster yjtest1 created
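For completeness, the created objects can be sanity-checked right after creation; both commands below are stock kbcli/kubectl calls (the instance label is the one KubeBlocks stamps on the pods, visible in the `k describe pod` output later in this report):

# Inspect the cluster topology and per-component status
kbcli cluster describe yjtest1 -n kubeblocks
# List only this cluster's pods
kubectl get pods -n kubeblocks -l app.kubernetes.io/instance=yjtest1 -o wide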
Configure the cluster (set shared_buffers=512MB)
➜ ~ kbcli cluster configure yjtest1 --component postgresql --config-spec postgresql-configuration --config-file postgresql.conf --set shared_buffers=512MB --namespace kubeblocks
Warning: The parameter change you modified needs to be restarted, which may cause the cluster to be unavailable for a period of time. Do you need to continue...
Please type "yes" to confirm: yes
Will updated configure file meta:
ConfigSpec: postgresql-configuration ConfigFile: postgresql.conf ComponentName: postgresql ClusterName: yjtest1
OpsRequest yjtest1-reconfiguring-4nf9k created successfully, you can view the progress:
kbcli cluster describe-ops yjtest1-reconfiguring-4nf9k -n kubeblocks
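For reference, a sketch of what the equivalent declarative OpsRequest might look like, assuming the apps.kubeblocks.io/v1alpha1 schema as I understand it for 0.5 (the metadata.name here is made up; kbcli generates one with a random suffix):

kubectl apply -n kubeblocks -f - <<'EOF'
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: yjtest1-reconfiguring-manual   # hypothetical name
spec:
  clusterRef: yjtest1
  type: Reconfiguring
  reconfigure:
    componentName: postgresql
    configurations:
    - name: postgresql-configuration
      keys:
      - key: postgresql.conf
        parameters:
        - key: shared_buffers
          value: 512MB
EOF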
Horizontally scale to 4 replicas
➜ ~ kbcli cluster hscale yjtest1 --components postgresql --replicas 4 --namespace kubeblocks
Please type the name again(separate with white space when more than one): yjtest1
OpsRequest yjtest1-horizontalscaling-xn7bq created successfully, you can view the progress:
kbcli cluster describe-ops yjtest1-horizontalscaling-xn7bq -n kubeblocks
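Again for reference, a sketch of the equivalent declarative form under the same v1alpha1 schema assumption (hypothetical name):

kubectl apply -n kubeblocks -f - <<'EOF'
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: yjtest1-hscale-manual   # hypothetical name
spec:
  clusterRef: yjtest1
  type: HorizontalScaling
  horizontalScaling:
  - componentName: postgresql
    replicas: 4
EOF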
The cluster becomes Abnormal and the newly scaled pods never become ready and cannot be connected to
➜ ~ k get pod -n kubeblocks |grep yjtest1
yjtest1-postgresql-0   4/4   Running   0   17m
yjtest1-postgresql-1   4/4   Running   0   17m
yjtest1-postgresql-2   3/4   Running   0   2m6s
yjtest1-postgresql-3   3/4   Running   0   2m5s
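The two new pods sit at 3/4. To see which container is failing its readiness check (plain kubectl jsonpath, nothing KubeBlocks-specific):

# Print every container that is not Ready in the stuck pod
kubectl get pod yjtest1-postgresql-2 -n kubeblocks \
  -o jsonpath='{range .status.containerStatuses[?(@.ready==false)]}{.name}{"\n"}{end}'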
➜ ~ kbcli cluster list -n kubeblocks
NAME      NAMESPACE    CLUSTER-DEFINITION   VERSION              TERMINATION-POLICY   STATUS     CREATED-TIME
yjtest    kubeblocks   postgresql           postgresql-12.14.0   WipeOut              Abnormal   May 03,2023 12:07 UTC+0800
yjtest1   kubeblocks   postgresql           postgresql-12.14.0   WipeOut              Abnormal   May 03,2023 12:31 UTC+0800
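The in-flight and past operations can be cross-checked as well (list-ops is a stock kbcli subcommand, and opsrequests is the CRD plural, so either view should work):

kbcli cluster list-ops yjtest1 -n kubeblocks
kubectl get opsrequests -n kubeblocks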
➜ ~ k describe cluster yjtest1 -n kubeblocks
Name: yjtest1
Namespace: kubeblocks
Labels: clusterdefinition.kubeblocks.io/name=postgresql
clusterversion.kubeblocks.io/name=postgresql-12.14.0
Annotations:
API Version: apps.kubeblocks.io/v1alpha1
Kind: Cluster
Metadata:
Creation Timestamp: 2023-05-03T04:31:06Z
Finalizers:
cluster.kubeblocks.io/finalizer
Generation: 3
Managed Fields:
API Version: apps.kubeblocks.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:spec:
.:
f:affinity:
.:
f:nodeLabels:
.:
f:dev:
f:podAntiAffinity:
f:tenancy:
f:clusterDefinitionRef:
f:clusterVersionRef:
f:componentSpecs:
.:
k:{"name":"postgresql"}:
.:
f:componentDefRef:
f:monitor:
f:name:
f:resources:
.:
f:limits:
.:
f:cpu:
f:memory:
f:requests:
.:
f:cpu:
f:memory:
f:serviceAccountName:
f:switchPolicy:
.:
f:type:
f:volumeClaimTemplates:
f:terminationPolicy:
f:tolerations:
Manager: kbcli
Operation: Update
Time: 2023-05-03T04:31:06Z
API Version: apps.kubeblocks.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.:
v:"cluster.kubeblocks.io/finalizer":
f:labels:
.:
f:clusterdefinition.kubeblocks.io/name:
f:clusterversion.kubeblocks.io/name:
f:spec:
f:componentSpecs:
k:{"name":"postgresql"}:
f:replicas:
Manager: manager
Operation: Update
Time: 2023-05-03T04:49:17Z
API Version: apps.kubeblocks.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:clusterDefGeneration:
f:components:
.:
f:postgresql:
.:
f:message:
.:
f:Pod/yjtest1-postgresql-2:
f:Pod/yjtest1-postgresql-3:
f:phase:
f:podsReady:
f:replicationSetStatus:
.:
f:primary:
.:
f:pod:
f:secondaries:
f:conditions:
f:observedGeneration:
f:phase:
Manager: manager
Operation: Update
Subresource: status
Time: 2023-05-03T04:50:58Z
Resource Version: 1015119
UID: 8ad81b45-e297-460e-9a14-741b6e80ab17
Spec:
Affinity:
Node Labels:
Dev: true
Pod Anti Affinity: Preferred
Tenancy: SharedNode
Cluster Definition Ref: postgresql
Cluster Version Ref: postgresql-12.14.0
Component Specs:
Component Def Ref: postgresql
Monitor: false
Name: postgresql
Replicas: 4
Resources:
Limits:
Cpu: 100m
Memory: 512Mi
Requests:
Cpu: 100m
Memory: 512Mi
Service Account Name: kb-sa-yjtest1
Switch Policy:
Type: Noop
Volume Claim Templates:
Name: data
Spec:
Access Modes:
ReadWriteOnce
Resources:
Requests:
Storage: 1Gi
Termination Policy: WipeOut
Tolerations:
Effect: NoSchedule
Key: dev
Operator: Equal
Value: true
Effect: NoSchedule
Key: large
Operator: Equal
Value: true
Status:
Cluster Def Generation: 2
Components:
Postgresql:
Message:
Pod/yjtest1-postgresql-2: Readiness probe failed: 127.0.0.1:5432 - no response
;
Pod/yjtest1-postgresql-3: Readiness probe failed: 127.0.0.1:5432 - no response
;
Phase: Abnormal
Pods Ready: false
Replication Set Status:
Primary:
Pod: yjtest1-postgresql-0
Secondaries:
Pod: yjtest1-postgresql-1
Pod: yjtest1-postgresql-2
Pod: yjtest1-postgresql-3
Conditions:
Last Transition Time: 2023-05-03T04:50:57Z
Message: HorizontalScaling opsRequest: yjtest1-horizontalscaling-xn7bq has been processed
Reason: Processed
Status: True
Type: LatestOpsRequestProcessed
Last Transition Time: 2023-05-03T04:31:06Z
Message: The operator has started the provisioning of Cluster: yjtest1
Observed Generation: 3
Reason: PreCheckSucceed
Status: True
Type: ProvisioningStarted
Last Transition Time: 2023-05-03T04:33:36Z
Message: Successfully applied for resources
Observed Generation: 3
Reason: ApplyResourcesSucceed
Status: True
Type: ApplyResources
Last Transition Time: 2023-05-03T04:49:18Z
Message: pods are not ready in Components: [postgresql], refer to related component message in Cluster.status.components
Reason: ReplicasNotReady
Status: False
Type: ReplicasReady
Last Transition Time: 2023-05-03T04:49:18Z
Message: pods are unavailable in Components: [postgresql], refer to related component message in Cluster.status.components
Reason: ComponentsNotReady
Status: False
Type: Ready
Observed Generation: 3
Phase: Abnormal
Events:
Type Reason Age From Message
Warning Unhealthy 18m event-controller Pod yjtest1-postgresql-1: Readiness probe failed: 127.0.0.1:5432 - no response
Normal SysAcctCreate 18m system-account-controller Created Accounts for cluster: yjtest1, component: postgresql, accounts: kbdataprotection
Normal SysAcctCreate 18m system-account-controller Created Accounts for cluster: yjtest1, component: postgresql, accounts: kbreplicator
Normal SysAcctCreate 18m system-account-controller Created Accounts for cluster: yjtest1, component: postgresql, accounts: kbadmin
Normal SysAcctCreate 18m system-account-controller Created Accounts for cluster: yjtest1, component: postgresql, accounts: kbmonitoring
Normal SysAcctCreate 18m system-account-controller Created Accounts for cluster: yjtest1, component: postgresql, accounts: kbprobe
Normal Reconfiguring 18m ops-request-controller Start to process the Reconfiguring opsRequest "yjtest1-reconfiguring-4nf9k" in Cluster: yjtest1
Warning ApplyResourcesFailed 18m cluster-controller Operation cannot be fulfilled on statefulsets.apps "yjtest1-postgresql": the object has been modified; please apply your changes to the latest version and try again
Warning ComponentsNotReady 18m cluster-controller pods are unavailable in Components: [postgresql], refer to related component message in Cluster.status.components
Warning ReplicasNotReady 18m (x2 over 18m) cluster-controller pods are not ready in Components: [postgresql], refer to related component message in Cluster.status.components
Warning ApplyResourcesFailed 18m (x7 over 20m) cluster-controller the number of current replicationSet primary obj is not 1, pls check
Normal ApplyResourcesSucceed 18m (x4 over 20m) cluster-controller Successfully applied for resources
Normal Processed 17m cluster-controller Reconfiguring opsRequest: yjtest1-reconfiguring-4nf9k has been processed
Normal ClusterReady 16m (x2 over 18m) cluster-controller Cluster: yjtest1 is ready, current phase is Running
Normal AllReplicasReady 16m (x3 over 18m) cluster-controller all pods of components are ready, waiting for the probe detection successful
Normal Running 16m (x2 over 18m) cluster-controller Cluster: yjtest1 is ready, current phase is Running
Normal PreCheckSucceed 2m30s (x2 over 20m) cluster-controller The operator has started the provisioning of Cluster: yjtest1
Normal HorizontalScaling 2m30s ops-request-controller Start to process the HorizontalScaling opsRequest "yjtest1-horizontalscaling-xn7bq" in Cluster: yjtest1
Warning HorizontalScaling 2m30s cluster-controller HorizontalScaling opsRequest: yjtest1-horizontalscaling-xn7bq is processing
Normal HorizontalScale 2m30s (x2 over 2m30s) cluster-controller Start horizontal scale component postgresql from 2 to 4
Warning Unhealthy 50s event-controller Pod yjtest1-postgresql-2: Readiness probe failed: 127.0.0.1:5432 - no response
Warning Unhealthy 49s event-controller Pod yjtest1-postgresql-3: Readiness probe failed: 127.0.0.1:5432 - no response
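Given the "replicationSet primary obj is not 1" warnings above, it is worth dumping the role labels that KubeBlocks and Patroni maintain on the pods; both label keys appear verbatim in the pod spec below:

# Show the KubeBlocks role label and the Patroni role label side by side
kubectl get pods -n kubeblocks -l app.kubernetes.io/instance=yjtest1 \
  -L kubeblocks.io/role,apps.kubeblocks.postgres.patroni/role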
➜ ~ k logs yjtest1-postgresql-2 -n kubeblocks
Defaulted container "postgresql" out of: postgresql, metrics, kb-checkrole, config-manager, pg-init-container (init)
2023-05-03 04:49:40,652 WARNING: Kubernetes RBAC doesn't allow GET access to the 'kubernetes' endpoint in the 'default' namespace. Disabling 'bypass_api_service'.
2023-05-03 04:49:41,248 INFO: No PostgreSQL configuration items changed, nothing to reload.
2023-05-03 04:49:41,343 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:49:41,543 INFO: trying to bootstrap from leader 'yjtest1-postgresql-0'
1024+0 records in
1024+0 records out
16777216 bytes (17 MB, 16 MiB) copied, 0.690255 s, 24.3 MB/s
2023-05-03 04:49:49,089 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:49:49,145 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
NOTICE: base backup done, waiting for required WAL segments to be archived
2023-05-03 04:49:59,092 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:49:59,092 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:50:09,090 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:50:09,090 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:50:19,091 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:50:19,091 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:50:29,089 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:50:29,089 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:50:39,089 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:50:39,090 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
WARNING: still waiting for all required WAL segments to be archived (60 seconds elapsed)
HINT: Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.
2023-05-03 04:50:49,091 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:50:49,091 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:50:59,091 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:50:59,091 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:51:09,090 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:51:09,090 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:51:19,089 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:51:19,089 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:51:29,091 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:51:29,091 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:51:39,089 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:51:39,089 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
WARNING: still waiting for all required WAL segments to be archived (120 seconds elapsed)
HINT: Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.
2023-05-03 04:51:49,092 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:51:49,092 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:51:59,092 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:51:59,092 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:52:09,091 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:52:09,092 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:52:19,091 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:52:19,091 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:52:29,090 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:52:29,090 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
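The replica log shows the base backup finishing and then Patroni waiting indefinitely for WAL archiving, so the archiver on the primary looks like the thing to check. A sketch, assuming the primary's postgresql container is healthy and local trust auth (as configured in the pg_hba.conf dumped at the end of this report):

# Archiver statistics: failed_count / last_failed_wal point at a broken archive_command
kubectl exec -n kubeblocks yjtest1-postgresql-0 -c postgresql -- \
  psql -U postgres -c "SELECT * FROM pg_stat_archiver;"
# The effective archive_command rendered into postgresql.conf
kubectl exec -n kubeblocks yjtest1-postgresql-0 -c postgresql -- \
  psql -U postgres -Atc "SHOW archive_command;"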
➜ ~ k describe pod yjtest1-postgresql-2 -n kubeblocks
Name: yjtest1-postgresql-2
Namespace: kubeblocks
Priority: 0
Node: gke-yjtest-default-pool-ee024711-n5m4/10.128.0.20
Start Time: Wed, 03 May 2023 12:49:22 +0800
Labels: app.kubernetes.io/component=postgresql
app.kubernetes.io/instance=yjtest1
app.kubernetes.io/managed-by=kubeblocks
app.kubernetes.io/name=postgresql
app.kubernetes.io/version=postgresql-12.14.0
apps.kubeblocks.io/component-name=postgresql
apps.kubeblocks.io/workload-type=Replication
apps.kubeblocks.postgres.patroni/scope=yjtest1-postgresql-patroni
controller-revision-hash=yjtest1-postgresql-65798d4f94
kubeblocks.io/role=secondary
statefulset.kubernetes.io/pod-name=yjtest1-postgresql-2
Annotations: config.kubeblocks.io/restart-postgresql-configuration: 854db4457c
status: {"conn_url":"postgres://10.104.2.136:5432/postgres","api_url":"http://10.104.2.136:8008/patroni","state":"creating replica","role":"uninit...
Status: Running
IP: 10.104.2.136
IPs:
IP: 10.104.2.136
Controlled By: StatefulSet/yjtest1-postgresql
Init Containers:
pg-init-container:
Container ID: containerd://1528107e7e291e54b17d208adb66cda697f17e81848c8ee9aa8d4fcec384e74a
Image: registry.cn-hangzhou.aliyuncs.com/apecloud/spilo:12.14.0
Image ID: registry.cn-hangzhou.aliyuncs.com/apecloud/spilo@sha256:5e0b1211207b158ed43c109e5ff1be830e1bf5e7aff1f0dd3c90966804c5a143
Port:
Host Port:
Command:
/kb-scripts/init_container.sh
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 03 May 2023 12:49:28 +0800
Finished: Wed, 03 May 2023 12:49:28 +0800
Ready: True
Restart Count: 0
Environment Variables from:
yjtest1-postgresql-env ConfigMap Optional: false
Environment:
KB_POD_NAME: yjtest1-postgresql-2 (v1:metadata.name)
KB_NAMESPACE: kubeblocks (v1:metadata.namespace)
KB_SA_NAME: (v1:spec.serviceAccountName)
KB_NODENAME: (v1:spec.nodeName)
KB_HOST_IP: (v1:status.hostIP)
KB_POD_IP: (v1:status.podIP)
KB_POD_IPS: (v1:status.podIPs)
KB_HOSTIP: (v1:status.hostIP)
KB_PODIP: (v1:status.podIP)
KB_PODIPS: (v1:status.podIPs)
KB_CLUSTER_NAME: yjtest1
KB_COMP_NAME: postgresql
KB_CLUSTER_COMP_NAME: yjtest1-postgresql
KB_POD_FQDN: $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
Mounts:
/home/postgres/conf from postgresql-config (rw)
/home/postgres/pgdata from data (rw)
/kb-scripts from scripts (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96wtr (ro)
Containers:
postgresql:
Container ID: containerd://2e1cd0345f17a387c97e1ddef159959c991dd515207d3bbb2624c5b89ab822df
Image: registry.cn-hangzhou.aliyuncs.com/apecloud/spilo:12.14.0
Image ID: registry.cn-hangzhou.aliyuncs.com/apecloud/spilo@sha256:5e0b1211207b158ed43c109e5ff1be830e1bf5e7aff1f0dd3c90966804c5a143
Ports: 5432/TCP, 8008/TCP
Host Ports: 0/TCP, 0/TCP
Command:
/kb-scripts/setup.sh
State: Running
Started: Wed, 03 May 2023 12:49:29 +0800
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 512Mi
Requests:
cpu: 100m
memory: 512Mi
Readiness: exec [/bin/sh -c -ee exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
[ -f /postgresql/tmp/.initialized ] || [ -f /postgresql/.initialized ]
] delay=25s timeout=5s period=30s #success=1 #failure=3
Environment Variables from:
yjtest1-postgresql-env ConfigMap Optional: false
Environment:
KB_POD_NAME: yjtest1-postgresql-2 (v1:metadata.name)
KB_NAMESPACE: kubeblocks (v1:metadata.namespace)
KB_SA_NAME: (v1:spec.serviceAccountName)
KB_NODENAME: (v1:spec.nodeName)
KB_HOST_IP: (v1:status.hostIP)
KB_POD_IP: (v1:status.podIP)
KB_POD_IPS: (v1:status.podIPs)
KB_HOSTIP: (v1:status.hostIP)
KB_PODIP: (v1:status.podIP)
KB_PODIPS: (v1:status.podIPs)
KB_CLUSTER_NAME: yjtest1
KB_COMP_NAME: postgresql
KB_CLUSTER_COMP_NAME: yjtest1-postgresql
KB_POD_FQDN: $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
DCS_ENABLE_KUBERNETES_API: true
KUBERNETES_USE_CONFIGMAPS: true
SCOPE: $(KB_CLUSTER_NAME)-$(KB_COMP_NAME)-patroni
KUBERNETES_SCOPE_LABEL: apps.kubeblocks.postgres.patroni/scope
KUBERNETES_ROLE_LABEL: apps.kubeblocks.postgres.patroni/role
KUBERNETES_LABELS: {"app.kubernetes.io/instance":"$(KB_CLUSTER_NAME)","apps.kubeblocks.io/component-name":"$(KB_COMP_NAME)"}
RESTORE_DATA_DIR: /home/postgres/pgdata/kb_restore
KB_PG_CONFIG_PATH: /home/postgres/conf/postgresql.conf
SPILO_CONFIGURATION: bootstrap:
initdb:
auth-local: trust
ALLOW_NOSSL: true
PGROOT: /home/postgres/pgdata/pgroot
POD_IP: (v1:status.podIP)
POD_NAMESPACE: kubeblocks (v1:metadata.namespace)
PGUSER_SUPERUSER: <set to the key 'username' in secret 'yjtest1-conn-credential'> Optional: false
PGPASSWORD_SUPERUSER: <set to the key 'password' in secret 'yjtest1-conn-credential'> Optional: false
PGUSER_ADMIN: superadmin
PGPASSWORD_ADMIN: <set to the key 'password' in secret 'yjtest1-conn-credential'> Optional: false
PGUSER_STANDBY: standby
PGPASSWORD_STANDBY: <set to the key 'password' in secret 'yjtest1-conn-credential'> Optional: false
PGUSER: <set to the key 'username' in secret 'yjtest1-conn-credential'> Optional: false
PGPASSWORD: <set to the key 'password' in secret 'yjtest1-conn-credential'> Optional: false
Mounts:
/dev/shm from dshm (rw)
/home/postgres/conf from postgresql-config (rw)
/home/postgres/pgdata from data (rw)
/kb-scripts from scripts (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96wtr (ro)
metrics:
Container ID: containerd://d80231015c6ec23b3e70e569dc7c0a57a71f554131a528cedfd7120b2bf79d2f
Image: registry.cn-hangzhou.aliyuncs.com/apecloud/postgres-exporter:0.11.1-debian-11-r66
Image ID: registry.cn-hangzhou.aliyuncs.com/apecloud/postgres-exporter@sha256:17c0bf751b9db5476a83a252caab6f26109a786b93fd83d4a73a2ea9c33e1e69
Port: 9187/TCP
Host Port: 0/TCP
Command:
/opt/bitnami/postgres-exporter/bin/postgres_exporter
--auto-discover-databases
--extend.query-path=/opt/conf/custom-metrics.yaml
--exclude-databases=template0,template1
--log.level=info
State: Running
Started: Wed, 03 May 2023 12:49:29 +0800
Ready: True
Restart Count: 0
Liveness: http-get http://:http-metrics/ delay=5s timeout=5s period=10s #success=1 #failure=6
Readiness: http-get http://:http-metrics/ delay=5s timeout=5s period=10s #success=1 #failure=6
Environment Variables from:
yjtest1-postgresql-env ConfigMap Optional: false
Environment:
KB_POD_NAME: yjtest1-postgresql-2 (v1:metadata.name)
KB_NAMESPACE: kubeblocks (v1:metadata.namespace)
KB_SA_NAME: (v1:spec.serviceAccountName)
KB_NODENAME: (v1:spec.nodeName)
KB_HOST_IP: (v1:status.hostIP)
KB_POD_IP: (v1:status.podIP)
KB_POD_IPS: (v1:status.podIPs)
KB_HOSTIP: (v1:status.hostIP)
KB_PODIP: (v1:status.podIP)
KB_PODIPS: (v1:status.podIPs)
KB_CLUSTER_NAME: yjtest1
KB_COMP_NAME: postgresql
KB_CLUSTER_COMP_NAME: yjtest1-postgresql
KB_POD_FQDN: $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
DATA_SOURCE_URI: 127.0.0.1:5432/postgres?sslmode=disable
DATA_SOURCE_PASS: <set to the key 'password' in secret 'yjtest1-conn-credential'> Optional: false
DATA_SOURCE_USER: <set to the key 'username' in secret 'yjtest1-conn-credential'> Optional: false
Mounts:
/opt/conf from postgresql-custom-metrics (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96wtr (ro)
kb-checkrole:
Container ID: containerd://d14c922a7a2661ac79cc4c8d42dfa545717d53622c6f5ed1e607620ed7befc1c
Image: registry.cn-hangzhou.aliyuncs.com/apecloud/kubeblocks-tools:0.5.0-beta.15
Image ID: registry.cn-hangzhou.aliyuncs.com/apecloud/kubeblocks-tools@sha256:c983538b5cf64e1ca5a55382067bee3bf2f275f6afe9c5c3eefd3caa141820a4
Ports: 3501/TCP, 50001/TCP
Host Ports: 0/TCP, 0/TCP
Command:
probe
--app-id
batch-sdk
--dapr-http-port
3501
--dapr-grpc-port
50001
--app-protocol
http
--log-level
info
--config
/config/probe/config.yaml
--components-path
/config/probe/components
State: Running
Started: Wed, 03 May 2023 12:49:29 +0800
Ready: True
Restart Count: 0
Readiness: exec [curl -X POST --max-time 1 --fail-with-body --silent -H Content-ComponentDefRef: application/json http://localhost:3501/v1.0/bindings/postgresql -d {"operation": "checkRole", "metadata":{"sql":""}}] delay=0s timeout=1s period=1s #success=1 #failure=2
Startup: tcp-socket :3501 delay=0s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
yjtest1-postgresql-env ConfigMap Optional: false
Environment:
KB_POD_NAME: yjtest1-postgresql-2 (v1:metadata.name)
KB_NAMESPACE: kubeblocks (v1:metadata.namespace)
KB_SA_NAME: (v1:spec.serviceAccountName)
KB_NODENAME: (v1:spec.nodeName)
KB_HOST_IP: (v1:status.hostIP)
KB_POD_IP: (v1:status.podIP)
KB_POD_IPS: (v1:status.podIPs)
KB_HOSTIP: (v1:status.hostIP)
KB_PODIP: (v1:status.podIP)
KB_PODIPS: (v1:status.podIPs)
KB_CLUSTER_NAME: yjtest1
KB_COMP_NAME: postgresql
KB_CLUSTER_COMP_NAME: yjtest1-postgresql
KB_POD_FQDN: $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
KB_SERVICE_USER: <set to the key 'username' in secret 'yjtest1-conn-credential'> Optional: false
KB_SERVICE_PASSWORD: <set to the key 'password' in secret 'yjtest1-conn-credential'> Optional: false
KB_SERVICE_PORT: 5432
KB_SERVICE_ROLES: {}
KB_SERVICE_CHARACTER_TYPE: postgresql
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96wtr (ro)
config-manager:
Container ID: containerd://91d2b821ac8706265b5ed155dbe7529ad96e0e5fa22691fccb9021a134474975
Image: registry.cn-hangzhou.aliyuncs.com/apecloud/kubeblocks-tools:0.5.0-beta.15
Image ID: registry.cn-hangzhou.aliyuncs.com/apecloud/kubeblocks-tools@sha256:c983538b5cf64e1ca5a55382067bee3bf2f275f6afe9c5c3eefd3caa141820a4
Port:
Host Port:
Command:
/bin/reloader
Args:
--operator-update-enable
--log-level
info
--tcp
9901
--notify-type
tpl
--tpl-config
/opt/config/reload/reload.yaml
State: Running
Started: Wed, 03 May 2023 12:49:29 +0800
Ready: True
Restart Count: 0
Environment:
CONFIG_MANAGER_POD_IP: (v1:status.podIP)
DB_TYPE: postgresql
Mounts:
/home/postgres/conf from postgresql-config (rw)
/opt/config/reload from reload-manager-reload (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96wtr (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-yjtest1-postgresql-2
ReadOnly: false
dshm:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit:
postgresql-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: yjtest1-postgresql-postgresql-configuration
Optional: false
postgresql-custom-metrics:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: yjtest1-postgresql-postgresql-custom-metrics
Optional: false
scripts:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: yjtest1-postgresql-postgresql-scripts
Optional: false
reload-manager-reload:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: patroni-reload-script-yjtest1
Optional: false
kube-api-access-96wtr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors:
Tolerations: dev=true:NoSchedule
kb-data=true:NoSchedule
large=true:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal Scheduled 3m54s default-scheduler Successfully assigned kubeblocks/yjtest1-postgresql-2 to gke-yjtest-default-pool-ee024711-n5m4
Normal SuccessfulAttachVolume 3m50s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-b2a8d70f-c41f-4314-b363-ece94cdd4e69"
Normal Pulled 3m48s kubelet Container image "registry.cn-hangzhou.aliyuncs.com/apecloud/spilo:12.14.0" already present on machine
Normal Created 3m48s kubelet Created container pg-init-container
Normal Started 3m48s kubelet Started container pg-init-container
Normal Pulled 3m48s kubelet Container image "registry.cn-hangzhou.aliyuncs.com/apecloud/spilo:12.14.0" already present on machine
Normal Created 3m48s kubelet Created container postgresql
Normal Created 3m47s kubelet Created container metrics
Normal Pulled 3m47s kubelet Container image "registry.cn-hangzhou.aliyuncs.com/apecloud/postgres-exporter:0.11.1-debian-11-r66" already present on machine
Normal Started 3m47s kubelet Started container postgresql
Normal Started 3m47s kubelet Started container metrics
Normal Pulled 3m47s kubelet Container image "registry.cn-hangzhou.aliyuncs.com/apecloud/kubeblocks-tools:0.5.0-beta.15" already present on machine
Normal Created 3m47s kubelet Created container kb-checkrole
Normal Started 3m47s kubelet Started container kb-checkrole
Normal Pulled 3m47s kubelet Container image "registry.cn-hangzhou.aliyuncs.com/apecloud/kubeblocks-tools:0.5.0-beta.15" already present on machine
Normal Created 3m47s kubelet Created container config-manager
Normal Started 3m47s kubelet Started container config-manager
Warning Unhealthy 3m38s kubelet Readiness probe failed: {"event":"Failed","message":"error executing select pg_is_in_recovery();: failed to connect to host=localhost user=postgres database=postgres: dial error (dial tcp [::1]:5432: connect: cannot assign requested address)","originalRole":""}
Warning Unhealthy 18s (x9 over 3m18s) kubelet Readiness probe failed: 127.0.0.1:5432 - no response
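The failing probe can be reproduced by hand with the same pg_isready invocation the pod spec defines above:

kubectl exec -n kubeblocks yjtest1-postgresql-2 -c postgresql -- \
  pg_isready -U postgres -h 127.0.0.1 -p 5432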
➜ ~ k exec -it yjtest1-postgresql-2 -n kubeblocks sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulted container "postgresql" out of: postgresql, metrics, kb-checkrole, config-manager, pg-init-container (init)
psql
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
exit
command terminated with exit code 2
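Since psql over the unix socket fails, Patroni's REST API on port 8008 (exposed per the container ports above) is another way to see what state the member is in; this assumes curl and patronictl ship in the spilo image:

# Member state as Patroni reports it ("creating replica" per the pod annotation)
kubectl exec -n kubeblocks yjtest1-postgresql-2 -c postgresql -- \
  curl -s http://localhost:8008/patroni
# Cluster-wide view from the primary
kubectl exec -n kubeblocks yjtest1-postgresql-0 -c postgresql -- patronictl list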
➜ ~ k get cm yjtest1-postgresql-postgresql-configuration -n kubeblocks -o yaml
apiVersion: v1
data:
kb_pitr.conf: |
method: kb_restore_from_time
kb_restore_from_time:
command: bash /home/postgres/pgdata/kb_restore/kb_restore.sh
keep_existing_recovery_conf: false
recovery_conf: {}
kb_restore.conf: |
method: kb_restore_from_backup
kb_restore_from_backup:
command: bash /home/postgres/pgdata/kb_restore/kb_restore.sh
keep_existing_recovery_conf: false
recovery_conf:
restore_command: cp /home/postgres/pgdata/pgroot/arch/%f %p
recovery_target_timeline: latest
pg_hba.conf: |
host all all 0.0.0.0/0 trust
host all all ::/0 trust
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
local replication all trust
host replication all 0.0.0.0/0 md5
host replication all ::/0 md5
postgresql.conf: |
#- Connection Settings -listen_addresses = '*'
kind: ConfigMap
metadata:
annotations:
config.kubeblocks.io/disable-reconfigure: "false"
config.kubeblocks.io/last-applied-configuration: '{"kb_pitr.conf":"method: kb_restore_from_time\nkb_restore_from_time:\n command: bash /home/postgres/pgdata/kb_restore/kb_restore.sh\n keep_existing_recovery_conf: false\n recovery_conf: {}\n","kb_restore.conf":"method: kb_restore_from_backup\nkb_restore_from_backup:\n command: bash /home/postgres/pgdata/kb_restore/kb_restore.sh\n keep_existing_recovery_conf: false\n recovery_conf:\n restore_command: cp /home/postgres/pgdata/pgroot/arch/%f %p\n recovery_target_timeline: latest\n","pg_hba.conf":"host all all 0.0.0.0/0 trust\nhost all all ::/0 trust\nlocal all all trust\nhost all all 127.0.0.1/32 trust\nhost all all ::1/128 trust\nlocal replication all trust\nhost replication all 0.0.0.0/0 md5\nhost replication all ::/0 md5\n","postgresql.conf":"#- Connection Settings -listen_addresses = ''*''\nport = ''5432''\narchive_command = ''wal_dir=/home/postgres/pgdata/pgroot/arcwal; wal_dir_today=/$(date +%Y%m%d); [[ $(date +%H%M) == 1200 ]] \u0026\u0026 rm -rf /$(date -d\"yesterday\" +%Y%m%d); mkdir -p \u0026\u0026 gzip -kqc %p \u003e /%f.gz''\narchive_mode = ''on''\nauto_explain.log_analyze = ''True''\nauto_explain.log_min_duration = ''1s''\nauto_explain.log_nested_statements = ''True''\nauto_explain.log_timing = ''True''\nauto_explain.log_verbose = ''True''\nautovacuum_analyze_scale_factor = ''0.05''\nautovacuum_freeze_max_age = ''100000000''\nautovacuum_max_workers = ''1''\nautovacuum_naptime = ''1min''\nautovacuum_vacuum_cost_delay = ''-1''\nautovacuum_vacuum_cost_limit = ''-1''\nautovacuum_vacuum_scale_factor = ''0.1''\nbgwriter_delay = ''10ms''\nbgwriter_lru_maxpages = ''800''\nbgwriter_lru_multiplier = ''5.0''\ncheckpoint_completion_target = ''0.95''\ncheckpoint_timeout = ''10min''\ncommit_delay = ''20''\ncommit_siblings = ''10''\ndeadlock_timeout = ''50ms''\ndefault_statistics_target = ''500''\neffective_cache_size = ''12GB''\nhot_standby = ''on''\nhot_standby_feedback = ''True''\nhuge_pages = ''try''\nidle_in_transaction_session_timeout = ''1h''\nlisten_addresses = ''0.0.0.0''\nlog_autovacuum_min_duration = ''1s''\nlog_checkpoints = ''True''\nlog_lock_waits = ''True''\nlog_min_duration_statement = ''100''\nlog_replication_commands = ''True''\nlog_statement = ''ddl''\n\n#maintenance_work_mem = ''3952MB''\nmax_connections = ''56''\nmax_locks_per_transaction = ''128''\nmax_logical_replication_workers = ''8''\nmax_parallel_maintenance_workers = ''2''\nmax_parallel_workers = ''8''\nmax_parallel_workers_per_gather = ''0''\nmax_prepared_transactions = ''0''\nmax_replication_slots = ''16''\nmax_standby_archive_delay = ''10min''\nmax_standby_streaming_delay = ''3min''\nmax_sync_workers_per_subscription = ''6''\nmax_wal_senders = ''24''\nmax_wal_size = ''100GB''\nmax_worker_processes = ''8''\nmin_wal_size = ''20GB''\npassword_encryption = ''md5''\npg_stat_statements.max = ''5000''\npg_stat_statements.track = ''all''\npg_stat_statements.track_planning = ''False''\npg_stat_statements.track_utility = ''False''\nrandom_page_cost = ''1.1''\n\n#auto generated\nshared_buffers = 512MB\n\n#shared_preload_libraries = ''pg_stat_statements,auto_explain,bg_mon,pgextwlist,pg_auth_mon,set_user,pg_cron,pg_stat_kcache''\nsuperuser_reserved_connections = ''10''\ntemp_file_limit = ''100GB''\n\n#timescaledb.max_background_workers = ''6''\n#timescaledb.telemetry_level = ''off''\ntrack_activity_query_size = ''8192''\ntrack_commit_timestamp = ''True''\ntrack_functions = ''all''\ntrack_io_timing = ''True''\nvacuum_cost_delay = ''2ms''\nvacuum_cost_limit = ''10000''\nvacuum_defer_cleanup_age = ''50000''\nwal_buffers = ''16MB''\nwal_level = ''replica''\nwal_log_hints = ''on''\nwal_receiver_status_interval = ''1s''\nwal_receiver_timeout = ''60s''\nwal_writer_delay = ''20ms''\nwal_writer_flush_after = ''1MB''\nwork_mem = ''32MB''\n"}'
config.kubeblocks.io/last-applied-ops-name: yjtest1-reconfiguring-4nf9k
config.kubeblocks.io/reconfigure-source: ops
creationTimestamp: "2023-05-03T04:31:06Z"
finalizers:
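To check that the reconfigured shared_buffers and the archive_command actually landed in the file mounted into the replica (mount path taken from the pod spec above):

kubectl exec -n kubeblocks yjtest1-postgresql-2 -c postgresql -- \
  grep -E 'shared_buffers|archive_command|listen_addresses' /home/postgres/conf/postgresql.conf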