Is there an existing issue already for this bug?
[X] I have searched for an existing issue, and could not find anything. I believe this is a new bug.
I have read the troubleshooting guide
[X] I have read the troubleshooting guide and I think this is a new bug.
I am running a supported version of CloudNativePG
[X] I am running a supported version of CloudNativePG.
Contact Details
No response
Version
1.23.2
What version of Kubernetes are you using?
1.27
What is your Kubernetes environment?
Cloud: Other
How did you install the operator?
YAML manifest
What happened?
Hi! I am trying to check how well the operator is prepared for detecting different Postgres malfunctions. Recently I tried to simulate a scenario in which Postgres fails to start:
Despite the fact that I now have a fully broken cluster, the cluster phase does not reflect that:
➜ ~ kubectl cnpg status postgresql-cluster
Cluster Summary
Name: postgresql-cluster
Namespace: ***
PostgreSQL Image: ***
Primary instance: postgresql-cluster-1
Primary start time: 2024-07-23 15:32:04 +0000 UTC (uptime 21h58m59s)
Status: Cluster in healthy state
Instances: 1
Ready instances: 0
Certificates Status
Certificate Name Expiration Date Days Left Until Expiration
---------------- --------------- --------------------------
cafile 2030-12-31 09:37:37 +0000 UTC 2350.84
cert-postgres-secret 2025-07-23 10:00:00 +0000 UTC 363.85
postgresql-cluster-ca 2024-10-21 15:26:18 +0000 UTC 89.08
postgresql-cluster-replication 2024-10-21 15:26:18 +0000 UTC 89.08
Continuous Backup status
First Point of Recoverability: 2024-07-23T15:32:24Z
No Primary instance found
Physical backups
Primary instance not found
Streaming Replication status
Not configured
Unmanaged Replication Slot Status
No unmanaged replication slots found
Managed roles status
No roles managed
Tablespaces status
No managed tablespaces
Pod Disruption Budgets status
Name Role Expected Pods Current Healthy Minimum Desired Healthy Disruptions Allowed
---- ---- ------------- --------------- ----------------------- -------------------
postgresql-cluster-primary primary 1 0 1 0
Instances status
Name Database Size Current LSN Replication role Status QoS Manager Version Node
---- ------------- ----------- ---------------- ------ --- --------------- ----
postgresql-cluster-1 - - - pod not available Guaranteed - xxx
In the operator logs there are endless messages about pod connection errors.
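(For context, I tailed the operator logs with something like the command below; the namespace and deployment name assume a default YAML manifest installation and may differ in other setups.)
➜ ~ kubectl logs -n cnpg-system deployment/cnpg-controller-manager -f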
From my point of view, the expected behavior in such a case is to reflect the actual degraded state of the cluster in the cluster phase/conditions.
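(For reference, these are the fields where I would expect the degraded state to show up; the commands below are only an illustration, using the same cluster name as above. The first reads the phase that `kubectl cnpg status` prints as "Status", the second the status conditions.)
➜ ~ kubectl get clusters.postgresql.cnpg.io postgresql-cluster -o jsonpath='{.status.phase}'
➜ ~ kubectl get clusters.postgresql.cnpg.io postgresql-cluster -o jsonpath='{.status.conditions}'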
Cluster resource
No response
Relevant log output
No response
Code of Conduct
[X] I agree to follow this project's Code of Conduct