bitnami / charts

Bitnami Helm Charts
https://bitnami.com

Slave node doesn't synchronize properly #6019

Closed ceyhunn closed 3 years ago

ceyhunn commented 3 years ago

**Which chart**: postgresql-ha 6.8.1

**Describe the bug** Slave node doesn't synchronize properly. After restarting the slave node, it syncs data with the primary node. At the beginning the sync is fine, but day by day the sync falls further behind, and SELECT queries return stale data.
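One way to quantify how far the standby has fallen behind is to query `pg_stat_replication` on the primary. A minimal sketch, assuming a release named `my-release` and the `POSTGRES_PASSWORD` variable that Bitnami images typically expose inside the container:

```bash
# Sketch: measure standby lag from the primary (PostgreSQL 10+ column names).
# "my-release" and the pod name are assumptions; adjust to your deployment.
kubectl exec my-release-postgresql-ha-postgresql-0 -- bash -c \
  'PGPASSWORD="$POSTGRES_PASSWORD" psql -U postgres -c \
     "SELECT application_name, state, sync_state,
             pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
      FROM pg_stat_replication;"'
```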

**To Reproduce** Steps to reproduce the behavior:

  1. Deploy the helm chart with the following config:
    
```yaml
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName
#   storageClass: myStorageClass
#   postgresql:
#     username: customuser
#     password: custompassword
#     database: customdatabase
#     repmgrUsername: repmgruser
#     repmgrPassword: repmgrpassword
#     repmgrDatabase: repmgrdatabase
#     existingSecret: myExistingSecret
#   ldap:
#     bindpw: bindpassword
#     existingSecret: myExistingSecret
#   pgpool:
#     adminUsername: adminuser
#     adminPassword: adminpassword
#     existingSecret: myExistingSecret

## Bitnami PostgreSQL image
## ref: https://hub.docker.com/r/bitnami/postgresql/tags/
##
postgresqlImage:
  registry: docker.io
  repository: bitnami/postgresql-repmgr
  tag: 13.2.0-debian-10-r21
  ## Specify a imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName
  ## Set to true if you would like to see extra information on logs
  ##
  debug: false

## Bitnami Pgpool image
## ref: https://hub.docker.com/r/bitnami/pgpool/tags/
##
pgpoolImage:
  registry: docker.io
  repository: bitnami/pgpool
  tag: 4.2.2-debian-10-r18
  ## Specify a imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName
  ## Set to true if you would like to see extra information on logs
  ##
  debug: false

## Init containers parameters:
## volumePermissions: Change the owner and group of the persistent volume mountpoint
##
volumePermissionsImage:
  registry: docker.io
  repository: bitnami/bitnami-shell
  tag: "10"
  ## Specify a imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: Always
  ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

## Bitnami PostgreSQL Prometheus exporter image
## ref: https://hub.docker.com/r/bitnami/pgpool/tags/
##
metricsImage:
  registry: docker.io
  repository: bitnami/postgres-exporter
  tag: 0.8.0-debian-10-r375
  ## Specify a imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName
  ## Set to true if you would like to see extra information on logs
  ##
  debug: false

## String to partially override common.names.fullname template (will maintain the release name)
##
nameOverride:
## String to fully override common.names.fullname template
##
fullnameOverride:
## Kubernetes Cluster Domain
##
clusterDomain: cluster.local
## Pod Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
  enabled: false
  ## Name of an already existing service account. Setting this value disables the automatic service account creation.
  ##
  name:
## Common annotations to add to all resources (sub-charts are not considered). Evaluated as a template
##
commonAnnotations: {}
## Common labels to add to all resources (sub-charts are not considered). Evaluated as a template
##
commonLabels: {}

## PostgreSQL parameters
##
postgresql:
  ## Labels to add to the StatefulSet. Evaluated as template
  ##
  labels: {}
  ## Labels to add to the StatefulSet pods. Evaluated as template
  ##
  podLabels: {}
  ## Number of replicas to deploy
  ##
  replicaCount: 2
  ## Update strategy for PostgreSQL statefulset
  ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
  ##
  updateStrategyType: RollingUpdate
  ## Deployment pod host aliases
  ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
  ##
  hostAliases: []
  ## Additional pod annotations
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  podAnnotations: {}
  ## Pod priority class
  ## Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
  ##
  priorityClassName: ""
  ## PostgreSQL pod affinity preset
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
  ## Allowed values: soft, hard
  ##
  podAffinityPreset: ""
  ## PostgreSQL pod anti-affinity preset
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
  ## Allowed values: soft, hard
  ##
  podAntiAffinityPreset: soft
  ## PostgreSQL node affinity preset
  ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
  ## Allowed values: soft, hard
  ##
  nodeAffinityPreset:
    ## Node affinity type
    ## Allowed values: soft, hard
    ##
    type: ""
    ## Node label key to match
    ## E.g.
    ## key: "kubernetes.io/e2e-az-name"
    ##
    key: ""
    ## Node label values to match
    ## E.g.
    ## values:
    ##   - e2e-az1
    ##   - e2e-az2
    ##
    values: []
  ## Affinity for PostgreSQL pods assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ## Note: postgresql.podAffinityPreset, postgresql.podAntiAffinityPreset, and postgresql.nodeAffinityPreset will be ignored when it's set
  ##
  affinity: {}
  ## Node labels for PostgreSQL pods assignment
  ## ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector:
    node: example-postgresqldb
  ## Tolerations for PostgreSQL pods assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: []
  ## K8s Security Context
  ## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  ##
  securityContext:
    enabled: true
    fsGroup: 1001
  ## Container Security Context
  ## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  ##
  containerSecurityContext:
    enabled: true
    runAsUser: 1001
  ## Custom Liveness probe
  ##
  customLivenessProbe: {}
  ## Custom Readiness probe
  ##
  customReadinessProbe: {}
  ## Container command (using container default if not set)
  ##
  command:
  ## Container args (using container default if not set)
  ##
  args:
  ## lifecycleHooks for the container to automate configuration before or after startup.
  ##
  lifecycleHooks:
  ## An array to add extra env vars
  ## For example:
  ##
  extraEnvVars:

## Pgpool parameters
##
pgpool:
  ## Additional users that will be performing connections to the database using
  ## pgpool. Use this property in order to create new user/password entries that
  ## will be appended to the "pgpool_passwd" file
  ##
  customUsers:
    usernames: "example_user"
    passwords: "example123"
  ## Comma or semicolon separated list of postgres usernames
  ##
  # usernames: 'user01;user02'
  ## Comma or semicolon separated list of the associated passwords for the
  ## users above
  ##
  # passwords: 'pass01;pass02'
  ## Deployment pod host aliases
  ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
  ##
  hostAliases: []
  ## Alternatively, you can provide the name of a secret containing this information.
  ## The secret must contain the keys "usernames" and "passwords" respectively.
  ##
  customUsersSecret:
  ## Database to perform streaming replication checks
  ##
  srCheckDatabase: example_db
  ## Labels to add to the Deployment. Evaluated as template
  ##
  labels: {}
  ## Labels to add to the pods. Evaluated as template
  ##
  podLabels: {}
  ## Labels to add to the service. Evaluated as template
  ##
  serviceLabels: {}
  ## Custom Liveness probe
  ##
  customLivenessProbe: {}
  ## Custom Readiness probe
  ##
  customReadinessProbe: {}
  ## Container command (using container default if not set)
  ##
  command:
  ## Container args (using container default if not set)
  ##
  args:
  ## lifecycleHooks for the container to automate configuration before or after startup.
  ##
  lifecycleHooks:
  ## An array to add extra env vars
  ## For example:
  ##
  extraEnvVars:

## LDAP parameters
##
ldap:
  enabled: false
  ## Retrieve LDAP bindpw from existing secret
  ##
  # existingSecret: myExistingSecret
  uri:
  base:
  binddn:
  bindpw:
  bslookup:
  scope:
  tlsReqcert:
  nssInitgroupsIgnoreusers: root,nslcd

## Init Container parameters
##
volumePermissions:
  enabled: false
  ## K8s Security Context
  ## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  ##
  securityContext:
    runAsUser: 0
  ## Init container' resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    ## We usually recommend not to specify default resources and to leave this as a conscious
    ## choice for the user. This also increases chances charts run on environments with little
    ## resources, such as Minikube. If you do want to specify resources, uncomment the following
    ## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    limits: {}
    #   cpu: 100m
    #   memory: 128Mi
    requests: {}
    #   cpu: 100m
    #   memory: 128Mi

## PostgreSQL Prometheus exporter parameters
##
metrics:
  enabled: false
  ## K8s Security Context
  ## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  ##
  securityContext:
    enabled: true
    runAsUser: 1001
  ## Prometheus exporter containers' resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    ## We usually recommend not to specify default resources and to leave this as a conscious
    ## choice for the user. This also increases chances charts run on environments with little
    ## resources, such as Minikube. If you do want to specify resources, uncomment the following
    ## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    limits: {}
    #   cpu: 250m
    #   memory: 256Mi
    requests: {}
    #   cpu: 250m
    #   memory: 256Mi
  ## Prometheus exporter container's liveness and readiness probes
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ##
  livenessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 6
  readinessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 6
  ## Annotations for Prometheus exporter
  ##
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9187"
  ## Enable this if you're using Prometheus Operator
  ##
  serviceMonitor:
    enabled: false
    ## Specify a namespace if needed
    # namespace: monitoring
    # fallback to the prometheus default unless specified
    # interval: 10s
    # scrapeTimeout: 10s
    ## Defaults to what's used if you follow CoreOS [Prometheus Install Instructions](https://github.com/bitnami/charts/tree/master/bitnami/prometheus-operator#tldr)
    ## [Prometheus Selector Label](https://github.com/bitnami/charts/tree/master/bitnami/prometheus-operator#prometheus-operator-1)
    ## [Kube Prometheus Selector Label](https://github.com/bitnami/charts/tree/master/bitnami/prometheus-operator#exporters)
    ##
    selector:
      prometheus: kube-prometheus
    ## RelabelConfigs to apply to samples before scraping
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#relabelconfig
    ## Value is evaluated as a template
    ##
    relabelings: []
    ## MetricRelabelConfigs to apply to samples before ingestion
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#relabelconfig
    ## Value is evaluated as a template
    ##
    metricRelabelings: []

## Persistence parameters
##
persistence:
  enabled: true
  ## A manually managed Persistent Volume and Claim
  ## If defined, PVC must be created manually before volume will be bound.
  ## All replicas will share this PVC, using existingClaim with
  ## replicas > 1 is only useful in very special use cases.
  ## The value is evaluated as a template.
  ##
  existingClaim:
  ## Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner.
  ##
  storageClass: "-"
  ## The path the volume will be mounted at, useful when using different
  ## PostgreSQL images.
  ##
  mountPath: /bitnami/postgresql
  ## Persistent Volume Access Mode
  ##
  accessModes:

## PgPool service parameters
##
service:
  ## Service type
  ##
  type: NodePort
  ## Service Port
  ##
  port: 5432
  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  nodePort: 30202
  ## Set the LoadBalancer service type to internal only.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
  ##
  loadBalancerIP:
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  #   - 10.10.10.0/24
  ## Set the Cluster IP to use
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#choosing-your-own-ip-address
  ##
  # clusterIP: None
  ## Provide any additional annotations which may be required
  ##
  annotations: {}
  ## Labels to add to the service. Evaluated as template
  ##
  serviceLabels: {}

## NetworkPolicy parameters
##
networkPolicy:
  enabled: false
  ## The Policy model to apply. When set to false, only pods with the correct
  ## client labels will have network access to the port PostgreSQL is listening
  ## on. When true, PostgreSQL will accept connections from any source
  ## (with the correct destination port).
  ##
  allowExternal: true

## Array with extra yaml to deploy with the chart. Evaluated as a template
##
extraDeploy: []
```

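With the chart deployed, repmgr's own view of the cluster shows which node it considers primary and whether the standby is attached. A sketch, assuming a release named `my-release` and the repmgr.conf path used by the Bitnami postgresql-repmgr image:

```bash
# Ask repmgr for its view of the cluster from the first PostgreSQL pod.
# "my-release" and the config path are assumptions; adjust to your deployment.
kubectl exec my-release-postgresql-ha-postgresql-0 -- \
  repmgr -f /opt/bitnami/repmgr/conf/repmgr.conf cluster show
```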

I think the problem is related to the `/bitnami/postgresql/conf/postgresql.conf` settings, because on the primary node `synchronous_standby_names` is commented out:

```
wal_level = 'hot_standby'

# Set these on the master and on any standby that will send replication data.
# These settings are ignored on a standby server.

synchronous_standby_names = ''  # standby servers that provide sync rep
                                # method to choose sync standbys, number of sync standbys,
                                # from standby(s); '*' = all

hot_standby = 'on'
max_standby_archive_delay = 30s    # max delay before canceling queries
max_standby_streaming_delay = 30s  # max delay before canceling queries
hot_standby_feedback = off         # send info from standby to prevent
standard_conforming_strings = on
```
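The live values can be confirmed on the primary; a minimal check, assuming the same pod naming as above. An empty `synchronous_standby_names` means no standby is treated as synchronous, i.e. replication is asynchronous and some replay lag is expected:

```bash
# Show the replication-related settings the primary is actually running with.
kubectl exec my-release-postgresql-ha-postgresql-0 -- bash -c \
  'PGPASSWORD="$POSTGRES_PASSWORD" psql -U postgres \
     -c "SHOW synchronous_standby_names;" \
     -c "SHOW synchronous_commit;"'
```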



**Version of Helm and Kubernetes**:

- Output of `helm version`:

```
version.BuildInfo{Version:"v3.3.3-rancher3", GitCommit:"657df59bbba1d9e175cf5080d4885bd57d037906", GitTreeState:"clean", GoVersion:"go1.13.15"}
```


- Output of `kubectl version`:

```
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:03:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
```



miguelaeh commented 3 years ago

Hi @ceyhunn,

> I think the problem is related to the `/bitnami/postgresql/conf/postgresql.conf` settings, because on the primary node `synchronous_standby_names` is commented out.

Could you confirm if this solves the issue?

ceyhunn commented 3 years ago

Hi @miguelaeh, I don't know exactly, but I saw it suggested in this comment: https://github.com/bitnami/charts/issues/1414#issuecomment-532847857

miguelaeh commented 3 years ago

Hi @ceyhunn, could you give it a try? If that solves the issue, maybe we should configure it when setting `syncReplication`.
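For reference, trying that flag would look something like the following sketch. It assumes the chart's `postgresql.syncReplication` value is what ends up populating `synchronous_standby_names`, and `my-release` is a placeholder release name:

```bash
# Enable synchronous replication on an existing release, keeping other values.
helm upgrade my-release bitnami/postgresql-ha \
  --reuse-values \
  --set postgresql.syncReplication=true
```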

github-actions[bot] commented 3 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 3 years ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.

ceyhunn commented 3 years ago

Hi @miguelaeh, sorry for the late answer.

> Could you give it a try? If that solves the issue, maybe we should configure it when setting `syncReplication`.

No, it doesn't; I wrote detailed information here.

github-actions[bot] commented 3 years ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.

ceyhunn commented 3 years ago

Any news on this problem? The bot automatically closed the issue.

miguelaeh commented 3 years ago

Hi @ceyhunn, I see there is much more information in the other thread. Since @rafariossaa already created the internal task to revisit this, I think we can continue the thread there.

ceyhunn commented 3 years ago

Hi @miguelaeh, thank you for the answer. I will wait for it, because I need the native replication mode (`syncReplication=true`) and `loadBalancing=true` configs in order to use postgresql-ha.