canonical / postgresql-k8s-operator

A Charmed Operator for running PostgreSQL on Kubernetes
https://charmhub.io/postgresql-k8s
Apache License 2.0

No backups are listed if the primary unit is not the leader unit in a cluster migration #713

Closed: marceloneppel closed this issue 3 weeks ago

marceloneppel commented 1 month ago

Steps to reproduce

# Deploy and relate the charms.
juju add-model stg-data-visualization
juju deploy postgresql-k8s --trust --channel 14/stable --revision 198
juju deploy s3-integrator
juju config s3-integrator bucket="canonical-postgres" path="/test-17089" region="us-east-2"
juju relate s3-integrator postgresql-k8s
juju deploy self-signed-certificates --channel latest/edge --config ca-common-name="Test CA" --base ubuntu@22.04
juju relate self-signed-certificates postgresql-k8s

# Configure the S3 credentials.
juju run s3-integrator/leader sync-s3-credentials access-key=XXX secret-key=XXX

# Create a backup.
juju run postgresql-k8s/leader create-backup --wait=1000s

# Check that the backup is listed.
juju run postgresql-k8s/leader list-backups

# Bootstrap a new controller and create a new model.
juju bootstrap microk8s micro1 && juju add-model stg-data-visualization-34

# Deploy and relate the charms in the new controller/model - use a different path in the S3 bucket.
juju deploy postgresql-k8s --trust --channel 14/stable --revision 281 -n 3
juju deploy s3-integrator
juju config s3-integrator bucket="canonical-postgres" path="/test-170891" region="us-east-2"
juju relate s3-integrator postgresql-k8s
juju deploy self-signed-certificates --channel latest/edge --config ca-common-name="Test CA" --base ubuntu@22.04
juju relate self-signed-certificates postgresql-k8s

# Configure the S3 credentials.
juju run s3-integrator/leader sync-s3-credentials access-key=XXX secret-key=XXX

# Wait for the units to settle down.
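A minimal helper for this step (hypothetical, not from the original report): it polls "juju status --format=json" until every postgresql-k8s unit reports an active workload status. Juju 3.x also provides "juju wait-for application" for the same purpose.

# Hypothetical helper, not from the original report: poll "juju status"
# until all postgresql-k8s units report an active workload status.
import json
import subprocess
import time

def wait_for_active(app="postgresql-k8s", timeout=600):
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = json.loads(subprocess.check_output(["juju", "status", "--format=json"]))
        units = status["applications"][app]["units"]
        if units and all(u["workload-status"]["current"] == "active" for u in units.values()):
            return
        time.sleep(5)
    raise TimeoutError(f"{app} units did not settle within {timeout}s")

wait_for_active()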

# Stop the database in the leader unit.
juju ssh --container postgresql postgresql-k8s/leader pebble stop postgresql

# Wait for the primary change, then start the database again in the leader unit.
juju ssh --container postgresql postgresql-k8s/leader pebble start postgresql
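One way to confirm the primary actually moved before restarting (a sketch under assumptions, not part of the original report): the charm runs Patroni, whose REST API on port 8008 reports each member's role. Note that Patroni labels its primary "leader", which is unrelated to Juju leadership. The unit IP and member name below are placeholders; with TLS enabled the endpoint may need https and certificate handling.

# Hypothetical check, not from the original report: poll Patroni's REST API
# (port 8008 on any unit) until the primary is a different member than the
# one that was stopped. Patroni's "leader" role means the Postgres primary,
# not the Juju leader. The IP and member name below are placeholders.
import time
import requests

PATRONI_CLUSTER_URL = "http://10.1.123.45:8008/cluster"  # placeholder pod IP
OLD_PRIMARY = "postgresql-k8s-0"  # placeholder member name

while True:
    members = requests.get(PATRONI_CLUSTER_URL, timeout=5).json()["members"]
    primary = next((m["name"] for m in members if m["role"] == "leader"), None)
    if primary and primary != OLD_PRIMARY:
        print(f"Primary moved to {primary}")
        break
    time.sleep(2)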

# Check that there are no backups.
juju run postgresql-k8s/leader list-backups

# Switch to the path containing the backup from the first cluster.
juju config s3-integrator path=/test-17089

# Check that no backups are listed, even though there is one in that bucket/path.
juju run postgresql-k8s/leader list-backups
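To rule out the storage side, a quick boto3 check (hypothetical, not part of the original report) confirms the first cluster's backup objects do exist under the old path even though list-backups reports nothing. Bucket, region, and path mirror the configuration above; the credentials are placeholders.

# Hypothetical diagnostic, not from the original report: list the objects
# under the first cluster's path to confirm the backup exists in S3.
import boto3

s3 = boto3.client(
    "s3",
    region_name="us-east-2",
    aws_access_key_id="XXX",  # placeholder
    aws_secret_access_key="XXX",  # placeholder
)
response = s3.list_objects_v2(Bucket="canonical-postgres", Prefix="test-17089/")
for obj in response.get("Contents", []):
    print(obj["Key"])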

Expected behaviour

One backup is listed.

Actual behaviour

No backups are listed.

Versions

Operating system: Ubuntu 24.04.1 LTS

Juju CLI: 3.5.3

Juju agent: 3.5.3

Charm revision: 381

microk8s: v1.30.4

Log output

Juju debug log: N/A

Additional context

N/A

syncronize-issues-to-jira[bot] commented 1 month ago

Thank you for reporting your feedback to us!

The internal ticket has been created: https://warthogs.atlassian.net/browse/DPE-5580.

This message was autogenerated

marceloneppel commented 1 month ago

This happens only on clusters with more than one unit, and only when the primary unit is not the leader unit, because some checks in the charm code look for the leader in some places (https://github.com/canonical/postgresql-k8s-operator/blob/f61a10b1bc08d3ff62696372c787ec96a823e585/src/backups.py#L151) while others look for the primary (https://github.com/canonical/postgresql-k8s-operator/blob/f61a10b1bc08d3ff62696372c787ec96a823e585/src/backups.py#L444 and https://github.com/canonical/postgresql-k8s-operator/blob/f61a10b1bc08d3ff62696372c787ec96a823e585/src/backups.py#L600C27-L600C37).
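A simplified, self-contained illustration of that mismatch (placeholder names, not the charm's real API; follow the links above for the actual code): once a failover puts the Patroni primary on a different unit than the Juju leader, no single unit passes both guards, which matches the empty list-backups output above.

# Simplified illustration with placeholder names; not the charm's real API.
from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    is_leader: bool   # Juju leadership
    is_primary: bool  # Patroni primary role

def can_validate_s3_repository(unit: Unit) -> bool:
    # cf. src/backups.py L151: this path only proceeds on the Juju leader.
    return unit.is_leader

def can_run_backup_operations(unit: Unit) -> bool:
    # cf. src/backups.py L444 and L600: these paths only proceed on the primary.
    return unit.is_primary

# After the failover forced above, leadership and the primary role sit on
# different units, so no single unit satisfies both checks:
cluster = [
    Unit("postgresql-k8s/0", is_leader=True, is_primary=False),
    Unit("postgresql-k8s/1", is_leader=False, is_primary=True),
    Unit("postgresql-k8s/2", is_leader=False, is_primary=False),
]
for unit in cluster:
    print(unit.name, can_validate_s3_repository(unit), can_run_backup_operations(unit))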