It seems the replica is still unavailable the first time the cluster reports Ready after increasing the number of replicas.
How to reproduce:
install the latest postgres operator (v1.6.3 at the time of writing)
use postgres-operator/manifests/minimal-postgres-manifest.yaml with numberOfInstances: 1
apply it and wait until the cluster reaches the Ready status
increase numberOfInstances -> 2
The quickest way to see this is to watch kubectl get pg -w and k get pod -l application=spilo --show-labels -w. I ran into this issue while writing a test that checks the replica after increasing numberOfInstances: I first wait for the cluster status to become Ready, then wait for the label spilo-role=replica, and only after that is the replica actually available (see the sketch below). Operator log: https://pastebin.com/aqu4b9M9
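A minimal shell sketch of the reproduction and wait sequence described above. It assumes the default cluster name acid-minimal-cluster from the minimal manifest, the default namespace, and an arbitrary timeout value:

```
# Deploy a one-instance cluster from the minimal manifest.
kubectl apply -f postgres-operator/manifests/minimal-postgres-manifest.yaml

# Once the cluster is up, scale from 1 to 2 instances.
kubectl patch pg acid-minimal-cluster --type=merge \
  -p '{"spec":{"numberOfInstances":2}}'

# Watch the cluster status reported by the operator ...
kubectl get pg -w

# ... and the pod role labels set by Patroni.
kubectl get pod -l application=spilo --show-labels -w

# In a test, the second wait would be on the replica pod becoming Ready.
# Note: kubectl wait errors out if no pod matches the selector yet, i.e. the
# spilo-role=replica label must already be present for this to succeed.
kubectl wait --for=condition=Ready pod \
  -l application=spilo,spilo-role=replica --timeout=300s
```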
Which image of the operator are you using? Using the latest master's configuration
Where do you run it: cloud/kvm - minikube, cloud/kvm - k3s
Are you running Postgres Operator in production? no, but I faced it on the k3s deployment
Type of issue? Bug report
This is expected behaviour. First Postgres starts up, then the pod receives the role label from Patroni. There have been questions about K8s probes before, but so far we neither have them (readiness) nor want them (liveness).
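Since the spilo-role label only appears after Patroni has assigned the role, one hedged way to check replica health directly is to ask Patroni's REST API inside the pod. This sketch assumes the standard Spilo setup where Patroni listens on port 8008, that curl is available in the image, and that the second statefulset pod is named acid-minimal-cluster-1:

```
# Hypothetical check: ask Patroni itself whether this pod is a healthy replica.
# Patroni's GET /replica endpoint returns 200 only for a running replica.
kubectl exec acid-minimal-cluster-1 -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8008/replica
# 200 -> running replica; 503 -> not (yet) a replica
```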