apecloud / kubeblocks

KubeBlocks is an open-source control plane software that runs and manages databases, message queues and other stateful applications on K8s.
https://kubeblocks.io
GNU Affero General Public License v3.0

[BUG] Cluster is abnormal and pods cannot connect after configuring and horizontally scaling a PG cluster #3042

Closed: ahjing99 closed this issue 1 year ago

ahjing99 commented 1 year ago

➜ ~ kbcli version
Kubernetes: v1.25.7-gke.1000
KubeBlocks: 0.5.0-beta.15
kbcli: 0.5.0-beta.15

  1. Create cluster with 2 replicas
    ➜  kbcli git:(main) ✗ kbcli cluster create yjtest1            --termination-policy=WipeOut            --node-labels '"dev=true"' --tolerations '"key=dev,value=true,operator=Equal,effect=NoSchedule","key=large,value=true,operator=Equal,effect=NoSchedule"'            --monitor=false --enable-all-logs=false --cluster-definition=postgresql --set cpu=100m,memory=0.5Gi,replicas=2,storage=1Gi --namespace kubeblocks
    Info: --cluster-version is not specified, ClusterVersion postgresql-12.14.0 is applied by default
    Cluster yjtest1 created
  2. Configure the cluster (set shared_buffers=512MB)
    ➜  ~ kbcli cluster configure yjtest1                    --component postgresql                     --config-spec postgresql-configuration                     --config-file postgresql.conf                     --set shared_buffers=512MB --namespace kubeblocks
    Warning: The parameter change you modified needs to be restarted, which may cause the cluster to be unavailable for a period of time. Do you need to continue...
    Please type "yes" to confirm: yes
    Will updated configure file meta:
    ConfigSpec: postgresql-configuration    ConfigFile: postgresql.conf   ComponentName: postgresql   ClusterName: yjtest1
    OpsRequest yjtest1-reconfiguring-4nf9k created successfully, you can view the progress:
    kbcli cluster describe-ops yjtest1-reconfiguring-4nf9k -n kubeblocks
  3. Horizontally scale to 4 replicas
    ➜  ~ kbcli cluster hscale yjtest1                --components postgresql                 --replicas 4 --namespace kubeblocks
    Please type the name again(separate with white space when more than one): yjtest1
    OpsRequest yjtest1-horizontalscaling-xn7bq created successfully, you can view the progress:
    kbcli cluster describe-ops yjtest1-horizontalscaling-xn7bq -n kubeblocks
  4. The cluster is abnormal and the newly scaled pods cannot be connected to (a diagnostic sketch follows this list)
    
    ➜  ~ k get pod -n kubeblocks |grep yjtest1
    yjtest1-postgresql-0                                     4/4     Running   0             17m
    yjtest1-postgresql-1                                     4/4     Running   0             17m
    yjtest1-postgresql-2                                     3/4     Running   0             2m6s
    yjtest1-postgresql-3                                     3/4     Running   0             2m5s
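For reference, a minimal diagnostic sketch for this state, assuming the KubeBlocks OpsRequest CRD plural is opsrequests and using only the resource names shown above: it lists the two OpsRequests, shows which container in a new pod is not ready, and checks that the reconfigured value reached the rendered ConfigMap.

    kubectl -n kubeblocks get opsrequests yjtest1-reconfiguring-4nf9k yjtest1-horizontalscaling-xn7bq
    kubectl -n kubeblocks get pod yjtest1-postgresql-2 \
      -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.ready}{"\n"}{end}'
    kubectl -n kubeblocks get cm yjtest1-postgresql-postgresql-configuration -o yaml | grep shared_buffers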

➜ ~ kbcli cluster list -n kubeblocks
NAME      NAMESPACE    CLUSTER-DEFINITION   VERSION              TERMINATION-POLICY   STATUS     CREATED-TIME
yjtest    kubeblocks   postgresql           postgresql-12.14.0   WipeOut              Abnormal   May 03,2023 12:07 UTC+0800
yjtest1   kubeblocks   postgresql           postgresql-12.14.0   WipeOut              Abnormal   May 03,2023 12:31 UTC+0800

➜ ~ k describe cluster yjtest1 -n kubeblocks
Name:         yjtest1
Namespace:    kubeblocks
Labels:       clusterdefinition.kubeblocks.io/name=postgresql
              clusterversion.kubeblocks.io/name=postgresql-12.14.0
Annotations:
API Version:  apps.kubeblocks.io/v1alpha1
Kind:         Cluster
Metadata:
  Creation Timestamp:  2023-05-03T04:31:06Z
  Finalizers:
    cluster.kubeblocks.io/finalizer
  Generation:  3
  Managed Fields:
    API Version:  apps.kubeblocks.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:     f:spec (f:affinity: f:nodeLabels f:dev, f:podAntiAffinity, f:tenancy; f:clusterDefinitionRef; f:clusterVersionRef; f:componentSpecs k:{"name":"postgresql"}: f:componentDefRef, f:monitor, f:name, f:resources f:limits/f:requests f:cpu f:memory, f:serviceAccountName, f:switchPolicy f:type, f:volumeClaimTemplates; f:terminationPolicy; f:tolerations)
    Manager:      kbcli
    Operation:    Update
    Time:         2023-05-03T04:31:06Z
    API Version:  apps.kubeblocks.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:     f:metadata (f:finalizers v:"cluster.kubeblocks.io/finalizer"; f:labels f:clusterdefinition.kubeblocks.io/name, f:clusterversion.kubeblocks.io/name), f:spec (f:componentSpecs k:{"name":"postgresql"}: f:replicas)
    Manager:      manager
    Operation:    Update
    Time:         2023-05-03T04:49:17Z
    API Version:  apps.kubeblocks.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:     f:status (f:clusterDefGeneration; f:components f:postgresql: f:message f:Pod/yjtest1-postgresql-2 f:Pod/yjtest1-postgresql-3, f:phase, f:podsReady, f:replicationSetStatus f:primary f:pod, f:secondaries; f:conditions; f:observedGeneration; f:phase)
    Manager:      manager
    Operation:    Update
    Subresource:  status
    Time:         2023-05-03T04:50:58Z
  Resource Version:  1015119
  UID:               8ad81b45-e297-460e-9a14-741b6e80ab17
Spec:
  Affinity:
    Node Labels:
      Dev:              true
    Pod Anti Affinity:  Preferred
    Tenancy:            SharedNode
  Cluster Definition Ref:  postgresql
  Cluster Version Ref:     postgresql-12.14.0
  Component Specs:
    Component Def Ref:  postgresql
    Monitor:            false
    Name:               postgresql
    Replicas:           4
    Resources:
      Limits:
        Cpu:     100m
        Memory:  512Mi
      Requests:
        Cpu:     100m
        Memory:  512Mi
    Service Account Name:  kb-sa-yjtest1
    Switch Policy:
      Type:  Noop
    Volume Claim Templates:
      Name:  data
      Spec:
        Access Modes:  ReadWriteOnce
        Resources:
          Requests:
            Storage:  1Gi
  Termination Policy:  WipeOut
  Tolerations:
    Effect:    NoSchedule
    Key:       dev
    Operator:  Equal
    Value:     true
    Effect:    NoSchedule
    Key:       large
    Operator:  Equal
    Value:     true
Status:
  Cluster Def Generation:  2
  Components:
    Postgresql:
      Message:
        Pod/yjtest1-postgresql-2:  Readiness probe failed: 127.0.0.1:5432 - no response ;
        Pod/yjtest1-postgresql-3:  Readiness probe failed: 127.0.0.1:5432 - no response ;
      Phase:       Abnormal
      Pods Ready:  false
      Replication Set Status:
        Primary:
          Pod:  yjtest1-postgresql-0
        Secondaries:
          Pod:  yjtest1-postgresql-1
          Pod:  yjtest1-postgresql-2
          Pod:  yjtest1-postgresql-3
  Conditions:
    Last Transition Time:  2023-05-03T04:50:57Z
    Message:               HorizontalScaling opsRequest: yjtest1-horizontalscaling-xn7bq has been processed
    Reason:                Processed
    Status:                True
    Type:                  LatestOpsRequestProcessed
    Last Transition Time:  2023-05-03T04:31:06Z
    Message:               The operator has started the provisioning of Cluster: yjtest1
    Observed Generation:   3
    Reason:                PreCheckSucceed
    Status:                True
    Type:                  ProvisioningStarted
    Last Transition Time:  2023-05-03T04:33:36Z
    Message:               Successfully applied for resources
    Observed Generation:   3
    Reason:                ApplyResourcesSucceed
    Status:                True
    Type:                  ApplyResources
    Last Transition Time:  2023-05-03T04:49:18Z
    Message:               pods are not ready in Components: [postgresql], refer to related component message in Cluster.status.components
    Reason:                ReplicasNotReady
    Status:                False
    Type:                  ReplicasReady
    Last Transition Time:  2023-05-03T04:49:18Z
    Message:               pods are unavailable in Components: [postgresql], refer to related component message in Cluster.status.components
    Reason:                ComponentsNotReady
    Status:                False
    Type:                  Ready
  Observed Generation:     3
  Phase:                   Abnormal
Events:
  Type     Reason     Age   From     Message


  Warning  Unhealthy              18m                    event-controller           Pod yjtest1-postgresql-1: Readiness probe failed: 127.0.0.1:5432 - no response
  Normal   SysAcctCreate          18m                    system-account-controller  Created Accounts for cluster: yjtest1, component: postgresql, accounts: kbdataprotection
  Normal   SysAcctCreate          18m                    system-account-controller  Created Accounts for cluster: yjtest1, component: postgresql, accounts: kbreplicator
  Normal   SysAcctCreate          18m                    system-account-controller  Created Accounts for cluster: yjtest1, component: postgresql, accounts: kbadmin
  Normal   SysAcctCreate          18m                    system-account-controller  Created Accounts for cluster: yjtest1, component: postgresql, accounts: kbmonitoring
  Normal   SysAcctCreate          18m                    system-account-controller  Created Accounts for cluster: yjtest1, component: postgresql, accounts: kbprobe
  Normal   Reconfiguring          18m                    ops-request-controller     Start to process the Reconfiguring opsRequest "yjtest1-reconfiguring-4nf9k" in Cluster: yjtest1
  Warning  ApplyResourcesFailed   18m                    cluster-controller         Operation cannot be fulfilled on statefulsets.apps "yjtest1-postgresql": the object has been modified; please apply your changes to the latest version and try again
  Warning  ComponentsNotReady     18m                    cluster-controller         pods are unavailable in Components: [postgresql], refer to related component message in Cluster.status.components
  Warning  ReplicasNotReady       18m (x2 over 18m)      cluster-controller         pods are not ready in Components: [postgresql], refer to related component message in Cluster.status.components
  Warning  ApplyResourcesFailed   18m (x7 over 20m)      cluster-controller         the number of current replicationSet primary obj is not 1, pls check
  Normal   ApplyResourcesSucceed  18m (x4 over 20m)      cluster-controller         Successfully applied for resources
  Normal   Processed              17m                    cluster-controller         Reconfiguring opsRequest: yjtest1-reconfiguring-4nf9k has been processed
  Normal   ClusterReady           16m (x2 over 18m)      cluster-controller         Cluster: yjtest1 is ready, current phase is Running
  Normal   AllReplicasReady       16m (x3 over 18m)      cluster-controller         all pods of components are ready, waiting for the probe detection successful
  Normal   Running                16m (x2 over 18m)      cluster-controller         Cluster: yjtest1 is ready, current phase is Running
  Normal   PreCheckSucceed        2m30s (x2 over 20m)    cluster-controller         The operator has started the provisioning of Cluster: yjtest1
  Normal   HorizontalScaling      2m30s                  ops-request-controller     Start to process the HorizontalScaling opsRequest "yjtest1-horizontalscaling-xn7bq" in Cluster: yjtest1
  Warning  HorizontalScaling      2m30s                  cluster-controller         HorizontalScaling opsRequest: yjtest1-horizontalscaling-xn7bq is processing
  Normal   HorizontalScale        2m30s (x2 over 2m30s)  cluster-controller         Start horizontal scale component postgresql from 2 to 4
  Warning  Unhealthy              50s                    event-controller           Pod yjtest1-postgresql-2: Readiness probe failed: 127.0.0.1:5432 - no response
  Warning  Unhealthy              49s                    event-controller           Pod yjtest1-postgresql-3: Readiness probe failed: 127.0.0.1:5432 - no response
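Two quick follow-up checks suggested by these events, as a hedged sketch: the "replicationSet primary obj is not 1" warning points at the per-pod role labels, and the component phase can be read directly from Cluster.status (field paths taken from the describe output above).

    # Role label on each pod of the cluster
    kubectl -n kubeblocks get pods -l app.kubernetes.io/instance=yjtest1 -L kubeblocks.io/role
    # Component phase reported in Cluster.status
    kubectl -n kubeblocks get cluster yjtest1 -o jsonpath='{.status.components.postgresql.phase}'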

➜ ~ k logs yjtest1-postgresql-2 -n kubeblocks
Defaulted container "postgresql" out of: postgresql, metrics, kb-checkrole, config-manager, pg-init-container (init)

2023-05-03 04:49:40,652 WARNING: Kubernetes RBAC doesn't allow GET access to the 'kubernetes' endpoint in the 'default' namespace. Disabling 'bypass_api_service'.
2023-05-03 04:49:41,248 INFO: No PostgreSQL configuration items changed, nothing to reload.
2023-05-03 04:49:41,343 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:49:41,543 INFO: trying to bootstrap from leader 'yjtest1-postgresql-0'
1024+0 records in
1024+0 records out
16777216 bytes (17 MB, 16 MiB) copied, 0.690255 s, 24.3 MB/s
2023-05-03 04:49:49,089 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:49:49,145 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
NOTICE: base backup done, waiting for required WAL segments to be archived
2023-05-03 04:49:59,092 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:49:59,092 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:50:09,090 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:50:09,090 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:50:19,091 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:50:19,091 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:50:29,089 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:50:29,089 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:50:39,089 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:50:39,090 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
WARNING: still waiting for all required WAL segments to be archived (60 seconds elapsed)
HINT: Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.
2023-05-03 04:50:49,091 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:50:49,091 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:50:59,091 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:50:59,091 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:51:09,090 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:51:09,090 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:51:19,089 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:51:19,089 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:51:29,091 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:51:29,091 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:51:39,089 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:51:39,089 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
WARNING: still waiting for all required WAL segments to be archived (120 seconds elapsed)
HINT: Check that your archive_command is executing properly. You can safely cancel this backup, but the database backup will not be usable without all the WAL segments.
2023-05-03 04:51:49,092 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:51:49,092 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:51:59,092 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:51:59,092 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:52:09,091 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:52:09,092 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:52:19,091 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:52:19,091 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
2023-05-03 04:52:29,090 INFO: Lock owner: yjtest1-postgresql-0; I am yjtest1-postgresql-2
2023-05-03 04:52:29,090 INFO: bootstrap from leader 'yjtest1-postgresql-0' in progress
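The repeated "waiting for required WAL segments to be archived" messages indicate that the base backup for the new replica cannot finish because WAL archiving on the primary never completes. A minimal sketch for confirming that on the primary, assuming psql is usable inside the postgresql container (pg_stat_archiver is standard PostgreSQL):

    # Show the effective archive_command and the archiver success/failure counters on the primary
    kubectl -n kubeblocks exec yjtest1-postgresql-0 -c postgresql -- \
      psql -U postgres -c "SHOW archive_command;" -c "SELECT * FROM pg_stat_archiver;"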

➜ ~ k describe pod yjtest1-postgresql-2 -n kubeblocks
Name:         yjtest1-postgresql-2
Namespace:    kubeblocks
Priority:     0
Node:         gke-yjtest-default-pool-ee024711-n5m4/10.128.0.20
Start Time:   Wed, 03 May 2023 12:49:22 +0800
Labels:       app.kubernetes.io/component=postgresql
              app.kubernetes.io/instance=yjtest1
              app.kubernetes.io/managed-by=kubeblocks
              app.kubernetes.io/name=postgresql
              app.kubernetes.io/version=postgresql-12.14.0
              apps.kubeblocks.io/component-name=postgresql
              apps.kubeblocks.io/workload-type=Replication
              apps.kubeblocks.postgres.patroni/scope=yjtest1-postgresql-patroni
              controller-revision-hash=yjtest1-postgresql-65798d4f94
              kubeblocks.io/role=secondary
              statefulset.kubernetes.io/pod-name=yjtest1-postgresql-2
Annotations:  config.kubeblocks.io/restart-postgresql-configuration: 854db4457c
              status: {"conn_url":"postgres://10.104.2.136:5432/postgres","api_url":"http://10.104.2.136:8008/patroni","state":"creating replica","role":"uninit...
Status:       Running
IP:           10.104.2.136
IPs:
  IP:  10.104.2.136
Controlled By:  StatefulSet/yjtest1-postgresql
Init Containers:
  pg-init-container:
    Container ID:  containerd://1528107e7e291e54b17d208adb66cda697f17e81848c8ee9aa8d4fcec384e74a
    Image:         registry.cn-hangzhou.aliyuncs.com/apecloud/spilo:12.14.0
    Image ID:      registry.cn-hangzhou.aliyuncs.com/apecloud/spilo@sha256:5e0b1211207b158ed43c109e5ff1be830e1bf5e7aff1f0dd3c90966804c5a143
    Port:
    Host Port:
    Command:
      /kb-scripts/init_container.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 03 May 2023 12:49:28 +0800
      Finished:     Wed, 03 May 2023 12:49:28 +0800
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      yjtest1-postgresql-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:           yjtest1-postgresql-2 (v1:metadata.name)
      KB_NAMESPACE:          kubeblocks (v1:metadata.namespace)
      KB_SA_NAME:             (v1:spec.serviceAccountName)
      KB_NODENAME:            (v1:spec.nodeName)
      KB_HOST_IP:             (v1:status.hostIP)
      KB_POD_IP:              (v1:status.podIP)
      KB_POD_IPS:             (v1:status.podIPs)
      KB_HOSTIP:              (v1:status.hostIP)
      KB_PODIP:               (v1:status.podIP)
      KB_PODIPS:              (v1:status.podIPs)
      KB_CLUSTER_NAME:       yjtest1
      KB_COMP_NAME:          postgresql
      KB_CLUSTER_COMP_NAME:  yjtest1-postgresql
      KB_POD_FQDN:           $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
    Mounts:
      /home/postgres/conf from postgresql-config (rw)
      /home/postgres/pgdata from data (rw)
      /kb-scripts from scripts (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96wtr (ro)
Containers:
  postgresql:
    Container ID:  containerd://2e1cd0345f17a387c97e1ddef159959c991dd515207d3bbb2624c5b89ab822df
    Image:         registry.cn-hangzhou.aliyuncs.com/apecloud/spilo:12.14.0
    Image ID:      registry.cn-hangzhou.aliyuncs.com/apecloud/spilo@sha256:5e0b1211207b158ed43c109e5ff1be830e1bf5e7aff1f0dd3c90966804c5a143
    Ports:         5432/TCP, 8008/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /kb-scripts/setup.sh
    State:          Running
      Started:      Wed, 03 May 2023 12:49:29 +0800
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  512Mi
    Requests:
      cpu:     100m
      memory:  512Mi
    Readiness:  exec [/bin/sh -c -ee exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432 [ -f /postgresql/tmp/.initialized ] || [ -f /postgresql/.initialized ] ] delay=25s timeout=5s period=30s #success=1 #failure=3
    Environment Variables from:
      yjtest1-postgresql-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:                yjtest1-postgresql-2 (v1:metadata.name)
      KB_NAMESPACE:               kubeblocks (v1:metadata.namespace)
      KB_SA_NAME:                  (v1:spec.serviceAccountName)
      KB_NODENAME:                 (v1:spec.nodeName)
      KB_HOST_IP:                  (v1:status.hostIP)
      KB_POD_IP:                   (v1:status.podIP)
      KB_POD_IPS:                  (v1:status.podIPs)
      KB_HOSTIP:                   (v1:status.hostIP)
      KB_PODIP:                    (v1:status.podIP)
      KB_PODIPS:                   (v1:status.podIPs)
      KB_CLUSTER_NAME:            yjtest1
      KB_COMP_NAME:               postgresql
      KB_CLUSTER_COMP_NAME:       yjtest1-postgresql
      KB_POD_FQDN:                $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
      DCS_ENABLE_KUBERNETES_API:  true
      KUBERNETES_USE_CONFIGMAPS:  true
      SCOPE:                      $(KB_CLUSTER_NAME)-$(KB_COMP_NAME)-patroni
      KUBERNETES_SCOPE_LABEL:     apps.kubeblocks.postgres.patroni/scope
      KUBERNETES_ROLE_LABEL:      apps.kubeblocks.postgres.patroni/role
      KUBERNETES_LABELS:          {"app.kubernetes.io/instance":"$(KB_CLUSTER_NAME)","apps.kubeblocks.io/component-name":"$(KB_COMP_NAME)"}
      RESTORE_DATA_DIR:           /home/postgres/pgdata/kb_restore
      KB_PG_CONFIG_PATH:          /home/postgres/conf/postgresql.conf
      SPILO_CONFIGURATION:        bootstrap: initdb:
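The readiness probe shown above can also be replayed by hand; a minimal sketch using the probe command verbatim from the describe output:

    kubectl -n kubeblocks exec yjtest1-postgresql-2 -c postgresql -- \
      pg_isready -U postgres -h 127.0.0.1 -p 5432
    # While the replica is still bootstrapping this returns "127.0.0.1:5432 - no response"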

➜ ~ k exec -it yjtest1-postgresql-2 -n kubeblocks sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulted container "postgresql" out of: postgresql, metrics, kb-checkrole, config-manager, pg-init-container (init)

psql

psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
        Is the server running locally and accepting connections on that socket?

exit

command terminated with exit code 2
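The socket is missing because PostgreSQL has not actually been started on this replica yet. As a hedged check, assuming curl is present in the spilo image, the local Patroni REST API (its api_url appears in the pod annotations above) reports the member state, which here is still "creating replica":

    kubectl -n kubeblocks exec yjtest1-postgresql-2 -c postgresql -- curl -s http://127.0.0.1:8008/patroni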

➜ ~ k get cm yjtest1-postgresql-postgresql-configuration -n kubeblocks -o yaml
apiVersion: v1
data:
  kb_pitr.conf: |
    method: kb_restore_from_time
    kb_restore_from_time:
      command: bash /home/postgres/pgdata/kb_restore/kb_restore.sh
      keep_existing_recovery_conf: false
      recovery_conf: {}
  kb_restore.conf: |
    method: kb_restore_from_backup
    kb_restore_from_backup:
      command: bash /home/postgres/pgdata/kb_restore/kb_restore.sh
      keep_existing_recovery_conf: false
      recovery_conf:
        restore_command: cp /home/postgres/pgdata/pgroot/arch/%f %p
        recovery_target_timeline: latest
  pg_hba.conf: |
    host all all 0.0.0.0/0 trust
    host all all ::/0 trust
    local all all trust
    host all all 127.0.0.1/32 trust
    host all all ::1/128 trust
    local replication all trust
    host replication all 0.0.0.0/0 md5
    host replication all ::/0 md5
  postgresql.conf: |

#- Connection Settings -listen_addresses = '*'

port = '5432'
archive_command = 'wal_dir=/home/postgres/pgdata/pgroot/arcwal; wal_dir_today=/$(date +%Y%m%d); [[ $(date +%H%M) == 1200 ]] && rm -rf /$(date -d"yesterday" +%Y%m%d); mkdir -p  && gzip -kqc %p > /%f.gz'
archive_mode = 'on'
auto_explain.log_analyze = 'True'
auto_explain.log_min_duration = '1s'
auto_explain.log_nested_statements = 'True'
auto_explain.log_timing = 'True'
auto_explain.log_verbose = 'True'
autovacuum_analyze_scale_factor = '0.05'
autovacuum_freeze_max_age = '100000000'
autovacuum_max_workers = '1'
autovacuum_naptime = '1min'
autovacuum_vacuum_cost_delay = '-1'
autovacuum_vacuum_cost_limit = '-1'
autovacuum_vacuum_scale_factor = '0.1'
bgwriter_delay = '10ms'
bgwriter_lru_maxpages = '800'
bgwriter_lru_multiplier = '5.0'
checkpoint_completion_target = '0.95'
checkpoint_timeout = '10min'
commit_delay = '20'
commit_siblings = '10'
deadlock_timeout = '50ms'
default_statistics_target = '500'
effective_cache_size = '12GB'
hot_standby = 'on'
hot_standby_feedback = 'True'
huge_pages = 'try'
idle_in_transaction_session_timeout = '1h'
listen_addresses = '0.0.0.0'
log_autovacuum_min_duration = '1s'
log_checkpoints = 'True'
log_lock_waits = 'True'
log_min_duration_statement = '100'
log_replication_commands = 'True'
log_statement = 'ddl'

#maintenance_work_mem = '3952MB'
max_connections = '56'
max_locks_per_transaction = '128'
max_logical_replication_workers = '8'
max_parallel_maintenance_workers = '2'
max_parallel_workers = '8'
max_parallel_workers_per_gather = '0'
max_prepared_transactions = '0'
max_replication_slots = '16'
max_standby_archive_delay = '10min'
max_standby_streaming_delay = '3min'
max_sync_workers_per_subscription = '6'
max_wal_senders = '24'
max_wal_size = '100GB'
max_worker_processes = '8'
min_wal_size = '20GB'
password_encryption = 'md5'
pg_stat_statements.max = '5000'
pg_stat_statements.track = 'all'
pg_stat_statements.track_planning = 'False'
pg_stat_statements.track_utility = 'False'
random_page_cost = '1.1'

#auto generated
shared_buffers = 512MB

#shared_preload_libraries = 'pg_stat_statements,auto_explain,bg_mon,pgextwlist,pg_auth_mon,set_user,pg_cron,pg_stat_kcache'
superuser_reserved_connections = '10'
temp_file_limit = '100GB'

#timescaledb.max_background_workers = '6'
#timescaledb.telemetry_level = 'off'
track_activity_query_size = '8192'
track_commit_timestamp = 'True'
track_functions = 'all'
track_io_timing = 'True'
vacuum_cost_delay = '2ms'
vacuum_cost_limit = '10000'
vacuum_defer_cleanup_age = '50000'
wal_buffers = '16MB'
wal_level = 'replica'
wal_log_hints = 'on'
wal_receiver_status_interval = '1s'
wal_receiver_timeout = '60s'
wal_writer_delay = '20ms'
wal_writer_flush_after = '1MB'
work_mem = '32MB'

kind: ConfigMap metadata: annotations: config.kubeblocks.io/disable-reconfigure: "false" config.kubeblocks.io/last-applied-configuration: '{"kb_pitr.conf":"method: kb_restore_from_time\nkb_restore_from_time:\n command: bash /home/postgres/pgdata/kb_restore/kb_restore.sh\n keep_existing_recovery_conf: false\n recovery_conf: {}\n","kb_restore.conf":"method: kb_restore_from_backup\nkb_restore_from_backup:\n command: bash /home/postgres/pgdata/kb_restore/kb_restore.sh\n keep_existing_recovery_conf: false\n recovery_conf:\n restore_command: cp /home/postgres/pgdata/pgroot/arch/%f %p\n recovery_target_timeline: latest\n","pg_hba.conf":"host all all 0.0.0.0/0 trust\nhost all all ::/0 trust\nlocal all all trust\nhost all all 127.0.0.1/32 trust\nhost all all ::1/128 trust\nlocal replication all trust\nhost replication all 0.0.0.0/0 md5\nhost replication all ::/0 md5\n","postgresql.conf":"#- Connection Settings -listen_addresses = ''*''\nport = ''5432''\narchive_command = ''wal_dir=/home/postgres/pgdata/pgroot/arcwal; wal_dir_today=/$(date +%Y%m%d); [[ $(date +%H%M) == 1200 ]] \u0026\u0026 rm -rf /$(date -d\"yesterday\" +%Y%m%d); mkdir -p \u0026\u0026 gzip -kqc %p \u003e /%f.gz''\narchive_mode = ''on''\nauto_explain.log_analyze = ''True''\nauto_explain.log_min_duration = ''1s''\nauto_explain.log_nested_statements = ''True''\nauto_explain.log_timing = ''True''\nauto_explain.log_verbose = ''True''\nautovacuum_analyze_scale_factor = ''0.05''\nautovacuum_freeze_max_age = ''100000000''\nautovacuum_max_workers = ''1''\nautovacuum_naptime = ''1min''\nautovacuum_vacuum_cost_delay = ''-1''\nautovacuum_vacuum_cost_limit = ''-1''\nautovacuum_vacuum_scale_factor = ''0.1''\nbgwriter_delay = ''10ms''\nbgwriter_lru_maxpages = ''800''\nbgwriter_lru_multiplier = ''5.0''\ncheckpoint_completion_target = ''0.95''\ncheckpoint_timeout = ''10min''\ncommit_delay = ''20''\ncommit_siblings = ''10''\ndeadlock_timeout = ''50ms''\ndefault_statistics_target = ''500''\neffective_cache_size = ''12GB''\nhot_standby = ''on''\nhot_standby_feedback = ''True''\nhuge_pages = ''try''\nidle_in_transaction_session_timeout = ''1h''\nlisten_addresses = ''0.0.0.0''\nlog_autovacuum_min_duration = ''1s''\nlog_checkpoints = ''True''\nlog_lock_waits = ''True''\nlog_min_duration_statement = ''100''\nlog_replication_commands = ''True''\nlog_statement = ''ddl''\n\n#maintenance_work_mem = ''3952MB''\nmax_connections = ''56''\nmax_locks_per_transaction = ''128''\nmax_logical_replication_workers = ''8''\nmax_parallel_maintenance_workers = ''2''\nmax_parallel_workers = ''8''\nmax_parallel_workers_per_gather = ''0''\nmax_prepared_transactions = ''0''\nmax_replication_slots = ''16''\nmax_standby_archive_delay = ''10min''\nmax_standby_streaming_delay = ''3min''\nmax_sync_workers_per_subscription = ''6''\nmax_wal_senders = ''24''\nmax_wal_size = ''100GB''\nmax_worker_processes = ''8''\nmin_wal_size = ''20GB''\npassword_encryption = ''md5''\npg_stat_statements.max = ''5000''\npg_stat_statements.track = ''all''\npg_stat_statements.track_planning = ''False''\npg_stat_statements.track_utility = ''False''\nrandom_page_cost = ''1.1''\n\n#auto generated\nshared_buffers = 512MB\n\n#shared_preload_libraries = ''pg_stat_statements,auto_explain,bg_mon,pgextwlist,pg_auth_mon,set_user,pg_cron,pg_stat_kcache''\nsuperuser_reserved_connections = ''10''\ntemp_file_limit = ''100GB''\n\n#timescaledb.max_background_workers = ''6''\n#timescaledb.telemetry_level = ''off''\ntrack_activity_query_size = ''8192''\ntrack_commit_timestamp = ''True''\ntrack_functions = 
''all''\ntrack_io_timing = ''True''\nvacuum_cost_delay = ''2ms''\nvacuum_cost_limit = ''10000''\nvacuum_defer_cleanup_age = ''50000''\nwal_buffers = ''16MB''\nwal_level = ''replica''\nwal_log_hints = ''on''\nwal_receiver_status_interval = ''1s''\nwal_receiver_timeout = ''60s''\nwal_writer_delay = ''20ms''\nwal_writer_flush_after = ''1MB''\nwork_mem = ''32MB''\n"}' config.kubeblocks.io/last-applied-ops-name: yjtest1-reconfiguring-4nf9k config.kubeblocks.io/reconfigure-source: ops creationTimestamp: "2023-05-03T04:31:06Z" finalizers:

sophon-zt commented 1 year ago

The root cause is that the properties extension function replaces the variables in the configuration template, which corrupts the rendered values (see archive_command in the ConfigMap above).
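For illustration, a hedged reading of the rendered ConfigMap above: the variable substitution strips the $wal_dir/$wal_dir_today references out of archive_command, leaving "rm -rf /...", "mkdir -p " and "> /%f.gz" without a directory, so WAL archiving cannot work and new replicas stay stuck waiting for archived segments. The "presumed intent" line below is hypothetical, not the actual template.

    # Extract the rendered archive_command from the ConfigMap
    kubectl -n kubeblocks get cm yjtest1-postgresql-postgresql-configuration \
      -o jsonpath='{.data.postgresql\.conf}' | grep archive_command
    # Rendered (variables stripped):  ... rm -rf /$(date -d"yesterday" +%Y%m%d); mkdir -p  && gzip -kqc %p > /%f.gz
    # Presumed intent (hypothetical): ... rm -rf $wal_dir/$(date -d"yesterday" +%Y%m%d); mkdir -p $wal_dir_today && gzip -kqc %p > $wal_dir_today/%f.gz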