Closed: scriptac closed this issue 1 year ago
> Added the annotation for IPv6 in my templates/postgres.yaml

The annotation solution is merged but hasn't been released yet.

> Created the following Config Map:

This is the workaround described in https://github.com/CrunchyData/postgres-operator/issues/3286#issuecomment-1261646438.
It works with the PGO version you have, but there's one more step to finish it: reference the new ConfigMap in your values.yaml:
```yaml
pgBackRestConfig:
  configuration:                          # 👈 These two lines
  - configMap: { name: my-pgbr-config }   # 👈
  global:
    # server:
    #   tls-server-address: "::"
    # server-ping:
    #   tls-server-address: localhost
    repo1-retention-full: "1"
    repo1-retention-full-type: count
```
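For reference, here is a minimal sketch of what the `my-pgbr-config` ConfigMap could contain, based on the commented-out settings above. The data key name (`ipv6.conf`) and the exact option values are assumptions; check the linked issue comment for the authoritative contents:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-pgbr-config   # must match the name referenced in values.yaml
data:
  # any key becomes a file projected into pgBackRest's config directory;
  # the file name here is a placeholder
  ipv6.conf: |
    [global]
    tls-server-address=::
```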
Let me know how it goes!
Hi @scriptac, did the above help to get your replicas working as expected in the IPv6 environment?
Overview
I'm unable to get the secondary replica up in my Postgres deployment. I performed the same deployment in a separate test lab with the same Kubernetes, Helm, Postgres, and PGO versions; however, the test lab uses IPv4, while the environment I'm deploying into now uses IPv6.
Environment
Platform: robin.io
Platform Version: 5.3.11-217
Kubernetes Version: v1.21.5
Helm Version: v3.5.3
PGO Image Tag: ubi8-5.1.1-0
Postgres Version: 14
It is a restricted-access environment where I am only the namespace admin, not the cluster admin.
Issue
The Helm chart creates 2 replicas, so 2 pods are created and a leader is elected successfully. The leader pod is up and running, but the replica is not.
The logs of the failed container:
My values.yaml for my PGO Chart:
My values.yaml file for the postgres cluster:
Solutions Tried So Far
Added the `listen_addresses` parameter under `patroni.dynamicConfiguration.postgresql.parameters` in the above YAML.
Created the following Config Map:
Added the annotation for IPv6 in my templates/postgres.yaml
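The first item in the list above can be sketched as the following values.yaml fragment. This is a minimal illustration, not the author's exact file; the value `"*"` (bind on all interfaces, IPv4 and IPv6) is an assumption:

```yaml
patroni:
  dynamicConfiguration:
    postgresql:
      parameters:
        # "*" tells PostgreSQL to listen on all available interfaces,
        # including IPv6 addresses where the OS supports them
        listen_addresses: "*"
```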
I'm not 100% sure that this is an IPv6 issue, but since that's the only difference from my lab environment, it's the first thing I tried to troubleshoot.