Closed: ceyhunn closed this issue 3 years ago.
Hi @ceyhunn ,
I think the problem is related to the settings in the /bitnami/postgresql/conf/postgresql.conf file, because on the primary node synchronous_standby_names is commented out.
Could you confirm if this solves the issue?
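For reference, enabling synchronous replication on the primary typically means uncommenting and setting that parameter in postgresql.conf; the standby name below is illustrative, not taken from this deployment:

```
# postgresql.conf on the primary (illustrative values)
synchronous_commit = on
synchronous_standby_names = 'ANY 1 ("walreceiver")'
```

With synchronous_standby_names empty or commented out, replication falls back to asynchronous mode even if the chart's syncReplication flag is set.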
Hi @miguelaeh , I don't know exactly, but I saw it in this source: https://github.com/bitnami/charts/issues/1414#issuecomment-532847857
Hi @ceyhunn ,
Could you give it a try? If that solves the issue, maybe we should configure it when setting syncReplication.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Hi @miguelaeh , sorry for the late answer.
Could you give it a try? If that solves the issue, maybe we should configure it when setting syncReplication.
No, it doesn't. I wrote detailed information here.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Any news on this problem? The bot automatically closed the issue.
Hi @ceyhunn , I see there is much more information in the other thread. Since @rafariossaa already created the internal task to revisit this, I think we can continue the thread there.
Hi @miguelaeh , thank you for the answer. I will wait for you, because I need native replication mode (syncReplication=true) and loadBalancing=true in order to use postgresql-ha.
Which chart: postgresql-ha 6.8.1
Describe the bug: The slave node doesn't synchronize properly. After restarting the slave node, it syncs data with the primary node. At the beginning the sync is OK, but day by day the sync gets more delayed and SELECT queries return stale data.
To Reproduce Steps to reproduce the behavior:
Bitnami PostgreSQL image
ref: https://hub.docker.com/r/bitnami/postgresql/tags/
postgresqlImage:
  registry: docker.io
  repository: bitnami/postgresql-repmgr
  tag: 13.2.0-debian-10-r21
Specify an imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
pullPolicy: IfNotPresent
Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
pullSecrets:
- myRegistryKeySecretName
Set to true if you would like to see extra information on logs
debug: false
Bitnami Pgpool image
ref: https://hub.docker.com/r/bitnami/pgpool/tags/
pgpoolImage:
  registry: docker.io
  repository: bitnami/pgpool
  tag: 4.2.2-debian-10-r18
Specify an imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
pullPolicy: IfNotPresent
Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
pullSecrets:
- myRegistryKeySecretName
Set to true if you would like to see extra information on logs
debug: false
Init containers parameters:
volumePermissions: Change the owner and group of the persistent volume mountpoint
volumePermissionsImage:
  registry: docker.io
  repository: bitnami/bitnami-shell
  tag: "10"
Specify an imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
pullPolicy: Always
Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
pullSecrets:
- myRegistryKeySecretName
Bitnami PostgreSQL Prometheus exporter image
ref: https://hub.docker.com/r/bitnami/pgpool/tags/
metricsImage:
  registry: docker.io
  repository: bitnami/postgres-exporter
  tag: 0.8.0-debian-10-r375
Specify an imagePullPolicy. Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
pullPolicy: IfNotPresent
Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
pullSecrets:
- myRegistryKeySecretName
Set to true if you would like to see extra information on logs
debug: false
String to partially override common.names.fullname template (will maintain the release name)
nameOverride:
String to fully override common.names.fullname template
fullnameOverride:
Kubernetes Cluster Domain
clusterDomain: cluster.local
Pod Service Account
ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
  enabled: false
Name of an already existing service account. Setting this value disables the automatic service account creation.
name:
Common annotations to add to all resources (sub-charts are not considered). Evaluated as a template
commonAnnotations: {}
Common labels to add to all resources (sub-charts are not considered). Evaluated as a template
commonLabels: {}
PostgreSQL parameters
postgresql:
Labels to add to the StatefulSet. Evaluated as template
labels: {}
Labels to add to the StatefulSet pods. Evaluated as template
podLabels: {}
Number of replicas to deploy
replicaCount: 2
Update strategy for PostgreSQL statefulset
ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategyType: RollingUpdate
Deployment pod host aliases
https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
hostAliases: []
Additional pod annotations
ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
podAnnotations: {}
Pod priority class
Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
priorityClassName: ""
PostgreSQL pod affinity preset
ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
Allowed values: soft, hard
podAffinityPreset: ""
PostgreSQL pod anti-affinity preset
ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
Allowed values: soft, hard
podAntiAffinityPreset: soft
PostgreSQL node affinity preset
ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
Allowed values: soft, hard
nodeAffinityPreset:
Node affinity type
Allowed values: soft, hard
type: ""
Node label key to match
key: ""
Node label values to match
values: []
Affinity for PostgreSQL pods assignment
ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
Note: postgresql.podAffinityPreset, postgresql.podAntiAffinityPreset, and postgresql.nodeAffinityPreset will be ignored when it's set
affinity: {}
Node labels for PostgreSQL pods assignment
ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector:
  node: example-postgresqldb
Tolerations for PostgreSQL pods assignment
ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
K8s Security Context
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
  enabled: true
  fsGroup: 1001
Container Security Context
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
containerSecurityContext:
  enabled: true
  runAsUser: 1001
Custom Liveness probe
customLivenessProbe: {}
Custom Readiness probe
customReadinessProbe: {}
Container command (using container default if not set)
command:
Container args (using container default if not set)
args:
lifecycleHooks for the container to automate configuration before or after startup.
lifecycleHooks:
An array to add extra env vars
For example:
extraEnvVars:
  - name: POSTGRESQL_MAX_WAL_SIZE
    value: "8GB"
ConfigMap with extra environment variables
extraEnvVarsCM:
Secret with extra environment variables
extraEnvVarsSecret:
Extra volumes to add to the deployment
extraVolumes:
  - name: dshm
    emptyDir:
      medium: Memory
      sizeLimit: 8Gi
Extra volume mounts to add to the container
extraVolumeMounts:
  - name: dshm
    mountPath: /dev/shm
Extra init containers to add to the deployment
initContainers: []
Extra sidecar containers to add to the deployment
sidecars: []
PostgreSQL containers' resource requests and limits
ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources:
We usually recommend not to specify default resources and to leave this as a conscious
choice for the user. This also increases chances charts run on environments with little
resources, such as Minikube. If you do want to specify resources, uncomment the following
lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
cpu: 250m
memory: 256Mi
requests: {}
cpu: 250m
memory: 256Mi
PostgreSQL container's liveness and readiness probes
ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 6
readinessProbe:
  enabled: true
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 6
Pod disruption budget configuration
pdb:
Specifies whether a Pod disruption budget should be created
create: false
minAvailable: 1
maxUnavailable: 1
PostgreSQL configuration parameters
username: postgres
password: "example1"
database:
PostgreSQL password using existing secret
existingSecret: secret
PostgreSQL admin password (used when postgresql.username is not postgres)
ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#creating-a-database-user-on-first-run (see note!)
postgresPassword:
Mount PostgreSQL secret as a file instead of passing environment variable
usePasswordFile: false
Upgrade repmgr extension in the database
upgradeRepmgrExtension: false
Configures pg_hba.conf to trust every user
pgHbaTrustAll: false
Synchronous replication
syncReplication: true
Repmgr configuration parameters
repmgrUsername: repmgr
repmgrPassword: "example1"
repmgrDatabase: repmgr
repmgrLogLevel: NOTICE
repmgrConnectTimeout: 5
repmgrReconnectAttempts: 3
repmgrReconnectInterval: 5
Audit settings
https://github.com/bitnami/bitnami-docker-postgresql#auditing
audit:
Log client hostnames
logHostname: true
Log connections to the server
logConnections: false
Log disconnections
logDisconnections: false
Operation to audit using pgAudit (default if not set)
pgAuditLog: ""
Log catalog using pgAudit
pgAuditLogCatalog: "off"
Log level for clients
clientMinMessages: error
Template for log line prefix (default if not set)
logLinePrefix: ""
Log timezone
logTimezone: ""
Shared preload libraries
sharedPreloadLibraries: "pgaudit, repmgr"
Maximum total connections
maxConnections: 50000
Maximum connections for the postgres user
postgresConnectionLimit:
Maximum connections for the created user
dbUserConnectionLimit:
TCP keepalives interval
tcpKeepalivesInterval:
TCP keepalives idle
tcpKeepalivesIdle:
TCP keepalives count
tcpKeepalivesCount:
Statement timeout
statementTimeout:
Remove pg_hba.conf lines with the following comma-separated patterns
(cannot be used with custom pg_hba.conf)
pghbaRemoveFilters:
Extra init containers
Example
extraInitContainers:
- name: do-something
image: busybox
command: ['do', 'something']
extraInitContainers: []
Repmgr configuration
You can use this parameter to specify the content for repmgr.conf
Otherwise, a repmgr.conf will be generated based on the environment variables
ref: https://github.com/bitnami/bitnami-docker-postgresql-repmgr#configuration
ref: https://github.com/bitnami/bitnami-docker-postgresql-repmgr#configuration-file
Example:
repmgrConfiguration: |-
ssh_options='-o "StrictHostKeyChecking no" -v'
use_replication_slots='1'
...
repmgrConfiguration: ""
PostgreSQL configuration
You can use this parameter to specify the content for postgresql.conf
Otherwise, a postgresql.conf will be generated based on the environment variables
ref: https://github.com/bitnami/bitnami-docker-postgresql-repmgr#configuration
ref: https://github.com/bitnami/bitnami-docker-postgresql-repmgr#configuration-file
Example:
configuration: |-
listen_addresses = '*'
port = '5432'
...
configuration: ""
PostgreSQL client authentication configuration
You can use this parameter to specify the content for pg_hba.conf
Otherwise, a pg_hba.conf will be generated based on the environment variables
ref: https://github.com/bitnami/bitnami-docker-postgresql-repmgr#configuration
ref: https://github.com/bitnami/bitnami-docker-postgresql-repmgr#configuration-file
Example:
pgHbaConfiguration: |-
host all repmgr 0.0.0.0/0 md5
host repmgr repmgr 0.0.0.0/0 md5
...
pgHbaConfiguration: ""
Name of existing ConfigMap with configuration files
NOTE: This will override postgresql.repmgrConfiguration, postgresql.configuration and postgresql.pgHbaConfiguration
configurationCM:
PostgreSQL extended configuration
Similar to postgresql.configuration, but appended to the main configuration
ref: https://github.com/bitnami/bitnami-docker-postgresql-repmgr#allow-settings-to-be-loaded-from-files-other-than-the-default-postgresqlconf
Example:
extendedConf: |-
deadlock_timeout = 1s
max_locks_per_transaction = 64
...
extendedConf: ""
ConfigMap with PostgreSQL extended configuration
NOTE: This will override postgresql.extendedConf
extendedConfCM:
initdb scripts
Specify dictionary of scripts to be run at first boot
The allowed extensions are .sh, .sql and .sql.gz
ref: https://github.com/bitnami/charts/tree/master/bitnami/postgresql-ha#initialize-a-fresh-instance
initdbScripts:
my_init_script.sh: |
  #!/bin/sh
  echo "Do something."
ConfigMap with scripts to be run at first boot
NOTE: This will override initdbScripts
initdbScriptsCM:
Secret with scripts to be run at first boot
Note: can be used with initdbScriptsCM or initdbScripts
initdbScriptsSecret:
Pgpool parameters
pgpool:
Additional users that will be performing connections to the database using
pgpool. Use this property in order to create new user/password entries that
will be appended to the "pgpool_passwd" file
customUsers:
  usernames: "example_user"
  passwords: "example123"
Comma or semicolon separated list of postgres usernames
usernames: 'user01;user02'
Comma or semicolon separated list of the associated passwords for the
users above
passwords: 'pass01;pass02'
Deployment pod host aliases
https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
hostAliases: []
Alternatively, you can provide the name of a secret containing this information.
The secret must contain the keys "usernames" and "passwords" respectively.
customUsersSecret:
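As a sketch, such a secret would carry the same two keys described above; the secret name here is illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pgpool-custom-users   # illustrative name, referenced by customUsersSecret
type: Opaque
stringData:
  usernames: "user01;user02"
  passwords: "pass01;pass02"
```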
Database to perform streaming replication checks
srCheckDatabase: example_db
Labels to add to the Deployment. Evaluated as template
labels: {}
Labels to add to the pods. Evaluated as template
podLabels: {}
Labels to add to the service. Evaluated as template
serviceLabels: {}
Custom Liveness probe
customLivenessProbe: {}
Custom Readiness probe
customReadinessProbe: {}
Container command (using container default if not set)
command:
Container args (using container default if not set)
args:
lifecycleHooks for the container to automate configuration before or after startup.
lifecycleHooks:
An array to add extra env vars
For example:
extraEnvVars:
- name: PGPOOL_SR_CHECK_PERIOD
  value: "1"
- name: BEARER_AUTH
  value: true
ConfigMap with extra environment variables
extraEnvVarsCM:
Secret with extra environment variables
extraEnvVarsSecret:
Extra volumes to add to the deployment
extraVolumes: []
Extra volume mounts to add to the container
extraVolumeMounts: []
Extra init containers to add to the deployment
initContainers: []
Extra sidecar containers to add to the deployment
sidecars: []
Number of replicas to deploy
replicaCount: 1
Additional pod annotations
ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
podAnnotations: {}
Pod priority class
Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
priorityClassName: ""
Pgpool pod affinity preset
ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
Allowed values: soft, hard
podAffinityPreset: ""
Pgpool pod anti-affinity preset
ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
Allowed values: soft, hard
podAntiAffinityPreset: soft
Pgpool node affinity preset
ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
Allowed values: soft, hard
nodeAffinityPreset:
Node affinity type
Allowed values: soft, hard
type: ""
Node label key to match
E.g.
key: "kubernetes.io/e2e-az-name"
key: ""
Node label values to match
E.g.
values:
- e2e-az1
- e2e-az2
values: []
Affinity for Pgpool pods assignment
ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
Note: pgpool.podAffinityPreset, pgpool.podAntiAffinityPreset, and pgpool.nodeAffinityPreset will be ignored when it's set
affinity: {}
Node labels for Pgpool pods assignment
ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector:
  node: example-pgpool
Tolerations for Pgpool pods assignment
ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
K8s Security Context
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
  enabled: true
  fsGroup: 1001
Container Security Context
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
containerSecurityContext:
  enabled: true
  runAsUser: 1001
Pgpool containers' resource requests and limits
ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources:
We usually recommend not to specify default resources and to leave this as a conscious
choice for the user. This also increases chances charts run on environments with little
resources, such as Minikube. If you do want to specify resources, uncomment the following
lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
cpu: 250m
memory: 256Mi
requests: {}
cpu: 250m
memory: 256Mi
Pgpool container's liveness and readiness probes
ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 5
readinessProbe:
  enabled: true
  initialDelaySeconds: 5
  periodSeconds: 5
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 5
Pod disruption budget configuration
pdb:
Specifies whether a Pod disruption budget should be created
create: false
minAvailable: 1
maxUnavailable: 1
strategy used to replace old Pods by new ones
ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
updateStrategy: {}
minReadySeconds to avoid killing pods before we are ready
minReadySeconds: 0
Credentials for the pgpool administrator
adminUsername: admin
adminPassword: "example1"
Log all client connections (PGPOOL_ENABLE_LOG_CONNECTIONS)
ref: https://github.com/bitnami/bitnami-docker-pgpool#configuration
logConnections: false
Log the client hostname instead of IP address (PGPOOL_ENABLE_LOG_HOSTNAME)
ref: https://github.com/bitnami/bitnami-docker-pgpool#configuration
logHostname: true
Log every SQL statement for each DB node separately (PGPOOL_ENABLE_LOG_PER_NODE_STATEMENT)
ref: https://github.com/bitnami/bitnami-docker-pgpool#configuration
logPerNodeStatement: false
Format of the log entry lines (PGPOOL_LOG_LINE_PREFIX)
ref: https://github.com/bitnami/bitnami-docker-pgpool#configuration
ref: https://www.pgpool.net/docs/latest/en/html/runtime-config-logging.html
logLinePrefix:
Log level for clients
ref: https://github.com/bitnami/bitnami-docker-pgpool#configuration
clientMinMessages: error
The number of preforked Pgpool-II server processes. It is also the concurrent
connections limit to Pgpool-II from clients. Must be a positive integer. (PGPOOL_NUM_INIT_CHILDREN)
ref: https://github.com/bitnami/bitnami-docker-pgpool#configuration
numInitChildren: 100
The maximum number of cached connections in each child process (PGPOOL_MAX_POOL)
ref: https://github.com/bitnami/bitnami-docker-pgpool#configuration
maxPool: 300
The maximum number of client connections in each child process (PGPOOL_CHILD_MAX_CONNECTIONS)
ref: https://github.com/bitnami/bitnami-docker-pgpool#configuration
childMaxConnections:
The time in seconds to terminate a Pgpool-II child process if it remains idle (PGPOOL_CHILD_LIFE_TIME)
ref: https://github.com/bitnami/bitnami-docker-pgpool#configuration
childLifeTime:
The time in seconds to disconnect a client if it remains idle since the last query (PGPOOL_CLIENT_IDLE_LIMIT)
ref: https://github.com/bitnami/bitnami-docker-pgpool#configuration
clientIdleLimit:
The time in seconds to terminate the cached connections to the PostgreSQL backend (PGPOOL_CONNECTION_LIFE_TIME)
ref: https://github.com/bitnami/bitnami-docker-pgpool#configuration
connectionLifeTime:
Use Pgpool Load-Balancing
useLoadBalancing: true
Pgpool configuration
You can use this parameter to specify the content for pgpool.conf
Otherwise, a pgpool.conf will be generated based on the environment variables
ref: https://github.com/bitnami/bitnami-docker-pgpool#configuration
ref: https://github.com/bitnami/bitnami-docker-pgpool#configuration-file
Example:
configuration: |-
listen_addresses = '*'
port = '5432'
...
configuration: |-
sr_check_period = 1
ConfigMap with Pgpool configuration
NOTE: This will override pgpool.configuration parameter
configurationCM:
initdb scripts
Specify dictionary of scripts to be run every time Pgpool container is initialized
The allowed extension is .sh
ref: https://github.com/bitnami/charts/tree/master/bitnami/postgresql-ha#initialize-a-fresh-instance
initdbScripts:
my_init_script.sh: |
  #!/bin/sh
  echo "Do something."
ConfigMap with scripts to be run every time Pgpool container is initialized
NOTE: This will override pgpool.initdbScripts
initdbScriptsCM:
Secret with scripts to be run every time Pgpool container is initialized
Note: can be used with initdbScriptsCM or initdbScripts
initdbScriptsSecret:
TLS configuration
tls:
Enable TLS traffic
enabled: false
Whether to use the server's TLS cipher preferences rather than the client's.
preferServerCiphers: true
Name of the Secret that contains the certificates
certificatesSecret: ""
Certificate filename
certFilename: ""
Certificate Key filename
certKeyFilename: ""
CA Certificate filename
If provided, PgPool will authenticate TLS/SSL clients by requesting a certificate from them
ref: https://www.pgpool.net/docs/latest/en/html/runtime-ssl.html
certCAFilename:
LDAP parameters
ldap:
  enabled: false
Retrieve LDAP bindpw from existing secret
existingSecret: myExistingSecret
uri:
base:
binddn:
bindpw:
bslookup:
scope:
tlsReqcert:
nssInitgroupsIgnoreusers: root,nslcd
Init Container parameters
volumePermissions:
  enabled: false
K8s Security Context
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
  runAsUser: 0
Init container' resource requests and limits
ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources:
We usually recommend not to specify default resources and to leave this as a conscious choice for the user.
PostgreSQL Prometheus exporter parameters
metrics:
  enabled: false
K8s Security Context
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
  enabled: true
  runAsUser: 1001
Prometheus exporter containers' resource requests and limits
ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources:
We usually recommend not to specify default resources and to leave this as a conscious choice for the user.
Prometheus exporter container's liveness and readiness probes
ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 6
readinessProbe:
  enabled: true
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 6
Annotations for Prometheus exporter
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "9187"
Enable this if you're using Prometheus Operator
serviceMonitor:
  enabled: false
Persistence parameters
persistence:
  enabled: true
A manually managed Persistent Volume and Claim
If defined, PVC must be created manually before volume will be bound.
All replicas will share this PVC, using existingClaim with
replicas > 1 is only useful in very special use cases.
The value is evaluated as a template.
existingClaim:
Persistent Volume Storage Class
If defined, storageClassName: <storageClass>
If set to "-", storageClassName: "", which disables dynamic provisioning
If undefined (the default) or set to null, no storageClassName spec is
set, choosing the default provisioner.
storageClass: "-"
The path the volume will be mounted at, useful when using different
PostgreSQL images.
mountPath: /bitnami/postgresql
Persistent Volume Access Mode
accessModes:
  - ReadWriteOnce
Persistent Volume Claim size
size: 50Gi
Persistent Volume Claim annotations
annotations: {}
selector can be used to match an existing PersistentVolume
selector:
matchLabels:
app: my-app
selector: {}
PgPool service parameters
service:
Service type
type: NodePort
Service Port
port: 5432
Specify the nodePort value for the LoadBalancer and NodePort service types.
ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
nodePort: 30202
Set the LoadBalancer service type to internal only.
ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
loadBalancerIP:
Load Balancer sources
https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
loadBalancerSourceRanges:
- 10.10.10.0/24
Set the Cluster IP to use
ref: https://kubernetes.io/docs/concepts/services-networking/service/#choosing-your-own-ip-address
clusterIP: None
Provide any additional annotations which may be required
annotations: {}
Labels to add to the service. Evaluated as template
serviceLabels: {}
NetworkPolicy parameters
networkPolicy:
  enabled: false
The Policy model to apply. When set to false, only pods with the correct
client labels will have network access to the port PostgreSQL is listening
on. When true, PostgreSQL will accept connections from any source
(with the correct destination port).
allowExternal: true
Array with extra yaml to deploy with the chart. Evaluated as a template
extraDeploy: []
wal_level = 'hot_standby'
Set these on the master and on any standby that will send replication data.
These settings are ignored on a standby server.
synchronous_standby_names = '' # standby servers that provide sync rep
hot_standby = 'on'
max_standby_archive_delay = 30s # max delay before canceling queries
max_standby_streaming_delay = 30s # max delay before canceling queries
hot_standby_feedback = off # send info from standby to prevent
standard_conforming_strings = on
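Given those standby settings, one way to see whether the standby is actually falling behind is to compare WAL positions. These queries use standard PostgreSQL 13 catalog views and functions, nothing chart-specific:

```sql
-- On the primary: per-standby replication state and replay lag in bytes
SELECT application_name, state, sync_state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;

-- On the standby: time since the last replayed transaction
SELECT now() - pg_last_xact_replay_timestamp() AS replication_delay;
```

If sync_state shows "async" on the primary despite syncReplication=true, that would match the suspicion above that synchronous_standby_names was left empty.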
version.BuildInfo{Version:"v3.3.3-rancher3", GitCommit:"657df59bbba1d9e175cf5080d4885bd57d037906", GitTreeState:"clean", GoVersion:"go1.13.15"}
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:03:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}