Closed by alemacci 1 year ago
The integration is inspired by the work of Federico Campoli at PGDAY 2021 (you can find the slides here: https://pgday.ru/presentation/298/60f04d9ae4105.pdf) and has evolved with new features like TLS connections.
First of all, we have to know where to store our repository. It can be any supported type, but the most valuable IMHO are:
posix - POSIX-compliant file systems (directly attached storage or a NAS mount)
s3 - AWS Simple Storage Service (S3 bucket)
The repository can be configured with or without a backup server (repo-host). When using a backup server, the repository must be reachable from all pg_cluster nodes.
Since we have a meta node that can be used as a repo-host, pgBackRest will be able to determine which member is the primary (or a standby if you want to take the backup from a standby node).
We will also introduce the https://github.com/pgstef/check_pgbackrest script, which we will use to check and monitor backups.
We have a lot of configurable options in pigsty.yml:
#-----------------------------------------------------------------
# PGBACKREST
#-----------------------------------------------------------------
pgbackrest_enabled: false # setup pgbackrest?
pgbackrest_clean: false # whether remove existing pgbackrest data
pgbackrest_log_dir: /var/log/pgbackrest # pgbackrest log dir, default is /var/log/pgbackrest
pgbackrest_spool_dir: /pg/spool # path where transient data is stored
pgbackrest_protocol: ssh # host protocol type: ssh|tls
pgbackrest_tls_port: 8432 # tls server port, default is 8432
pgbackrest_type: posix # repository type: posix|s3
pgbackrest_s3_endpoint: sss.pigsty # s3 repository endpoint
pgbackrest_s3_port: 9000 # s3 storage port
pgbackrest_s3_bucket: pgbackrest # s3 repository bucket
pgbackrest_s3_key: pgbackrest # s3 access key
pgbackrest_s3_secret: pigsty # s3 secret key
pgbackrest_s3_region: eu-west-3 # s3 repository region
pgbackrest_cipher_type: aes-256-cbc # cipher used to encrypt the repository: none|aes-256-cbc
pgbackrest_cipher_pass: KPqrcMD5N2MOnTUiJOg1DeEmMHUu1zYw6yKL4sBEPfGsJiq00AYW3Z2t0eNC4n+1 # GENERATE with openssl rand -base64 48
pgbackrest_stanza: main # stanza name
pgbackrest_compression_type: lz4 # file compression type: none|bz2|gz|lz4|zst
pgbackrest_retention_type: time # retention type for full backups: count|time
pgbackrest_retention_full: 14 # full backup retention count/time
pgbackrest_retention_diff: 3 # number of differential backups to retain
pgbackrest_from_standby: false # backup from the standby nodes
pgbackrest_archive_async: true # enables asynchronous operation for the archive-push and archive-get commands
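As a hedged illustration only (the actual template and stanza layout are up to the implementation), these variables might render into a pgbackrest.conf roughly like this when `pgbackrest_type: s3` is chosen; option names follow the pgBackRest configuration reference, while the paths and stanza name are assumptions:

```ini
# hypothetical rendering of the defaults above (s3 repo type)
[global]
repo1-type=s3
repo1-s3-endpoint=sss.pigsty
repo1-s3-bucket=pgbackrest
repo1-s3-key=pgbackrest
repo1-s3-key-secret=pigsty
repo1-s3-region=eu-west-3
repo1-storage-port=9000
repo1-cipher-type=aes-256-cbc
repo1-cipher-pass=<pgbackrest_cipher_pass>
repo1-retention-full-type=time
repo1-retention-full=14
repo1-retention-diff=3
compress-type=lz4
archive-async=y
spool-path=/pg/spool
log-path=/var/log/pgbackrest

[main]
pg1-path=/pg/data
```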
Some notes:
pg_dbsu_ssh_exchange: true
Maybe a local repo on each primary instance could be sufficient for v1.6.0?
The backup plan could lead to a huge change worth a v2.0 release.
Some rough thoughts on this version's implementation:
Init repo on all instances but only use the primary instances.
WAL archive on the primary instance.
Nothing special on meta nodes, e.g., no dedicated backup service on meta. We can leave that to S3.
Cope with Patroni failover / switchover.
Backup policy support.
node_crontab for primary instance backup.

Some thoughts on the future (e.g. ver 2.0.0):
PITR
pgsql-restore.yml -e time?....
pg_last_committed_xact
a monitoring facility to help determine which point to use
use an S3-compatible service as an advanced option

Monitoring with Prometheus & Grafana
Nagios would be weird since we already have Prometheus. The metrics can be fetched via pgbackrest info --format=json, exposed via a stored procedure monitor.pgbr_info with security definer, and scraped by pg_exporter with a new collector.

MinIO infra support
/www/pigsty/docker for docker images: pg_exporter, minio, bytebase, pgadmin, pgweb
minio.yml
The 'nagios' name in the check_pgbackrest package is there because the script was first implemented for Nagios, but currently it can be used on the command line to extend pgbackrest info capabilities (the most useful are the WAL checks).
For that purpose, a function named pgbr-check has been created:
function pgbr-check() {
    if [ ! -r /etc/pgbackrest.conf ]; then
        echo "error: pgbackrest config not found"
        return 1
    else
        # extract the stanza name from the last [section] in the config file
        local stanza=$(grep -o '\[[^][]*]' /etc/pgbackrest.conf | tail -1 | sed 's/.*\[\([^]]*\)].*/\1/')
        check_pgbackrest --stanza="$stanza" --service=retention --output=human && printf '\n' && \
        check_pgbackrest --stanza="$stanza" --service=archives --output=human
    fi
}
This is an example of an automated mail sent from a repo-server each time a backup finishes. Backups are scheduled every day: full on Sunday/Wednesday, incremental on the remaining days.
00 02 * * 0,3 postgres pgbackrest --stanza=$stanza --type=full backup &> /dev/null
00 02 * * 1-2,4-6 postgres pgbackrest --stanza=$stanza --type=incr backup &> /dev/null
Retention is set as follows:
pgbackrest_retention_type: time # retention type for full backups: count|time
pgbackrest_retention_full: 14 # full backup retention count/time
Service : BACKUPS_RETENTION
Returns : 0 (OK)
Message : backups policy checks ok
Long message : full=4
Long message : diff=0
Long message : incr=9
Long message : latest_bck=20221102-020003F_20221103-020003I
Long message : latest_bck_type=incr
Long message : latest_bck_age=1s
Long message : oldest_bck=20221019-020003F
Long message : oldest_bck_age=2w 1d 51m6s
Service : WAL_ARCHIVES
Returns : 0 (OK)
Message : 3631 unique WAL archived
Message : latest archived since 3s
Long message : latest_archive_age=3s
Long message : num_unique_archives=3631
Long message : min_wal=000000010000007200000055
Long message : max_wal=000000010000008000000083
Long message : latest_archive=000000010000008000000083
Long message : latest_bck_archive_start=000000010000008000000083
Long message : latest_bck=20221102-020003F_20221103-020003I
Long message : latest_bck_type=incr
Long message : oldest_archive=000000010000007200000055
Long message : oldest_bck_archive_start=000000010000007200000055
Long message : oldest_bck=20221019-020003F
Long message : oldest_bck_type=full
Let's talk about the other considerations:
WAL archive on the primary instance: this is the standard behaviour.
Init repo on all instances but only use the primary instances. Nothing special on meta nodes (e.g., no dedicated backup service on meta. We can leave that to S3.) node_crontab for primary instance backup

I think we must change the point of view. This is of course doable, but suppose you have a classic 3-node configuration without a dedicated backup repo on meta. If for some reason you have to take the primary node offline for maintenance and promote a standby, you have to make sure the repository is accessible from all nodes, and manually switch the backup schedules to the new primary. The same applies when using dedicated NFS mounts. Basically, there is always exactly one node that must be in control of the backup tasks.
Since we have a meta node that acts as a consul server for the consul agents on pgsql nodes, why not use it as the backup repo host as well? I see a lot of advantages:
Another thing is that there are some organizations (such as healthcare companies) that are afraid to use AWS cloud services like S3. The main reasons are:
Backup policy support: parameters for retention policy; default retention policy for a day or a week? This is configurable with:
pgbackrest_retention_type: time # retention type for full backups: count|time
pgbackrest_retention_full: 14 # full backup retention count/time
pgbackrest_retention_diff: 3 # number of differential backups to retain
'count' mode specifies how many full/diff backups are retained; when 'time' is used, the value means how many days.
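As a small illustration, the two retention modes would render roughly as follows in pgbackrest.conf (a sketch, not the actual template; option names per the pgBackRest docs):

```ini
# retention by count: keep the last 2 full backups
repo1-retention-full-type=count
repo1-retention-full=2

# retention by time: keep full backups for the last 14 days
repo1-retention-full-type=time
repo1-retention-full=14
```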
Btw, what do you think about introducing a pgbackrest_repo_only parameter that works with s3 and satisfies this? Init repo on all instances but only use the primary instances. Nothing special on meta nodes (e.g., no dedicated backup service on meta. We can leave that to S3.)
That's a good idea! Maybe we can use this as the default option, since it makes no assumptions about any other existing infrastructure. It's quite handy that the backup system is autonomous.
Another thing is that there are some organizations (such as healthcare companies) that are afraid to use AWS cloud services like S3. The main reasons are:
I'm not planning to use S3 either. The service in my mind is MinIO, which is S3-compatible.
We can set up a MinIO service in the future on meta nodes, or deploy dedicated MinIO (S3-compatible) services if we have any spare high-volume nodes dedicated to backup. The rough idea is here: https://github.com/Vonng/pigsty/issues/177
One thing that concerns me is the release schedule. We have the current v1.6.0 thoroughly tested. It would be nice to add a local pgbackrest repo implementation and leave advanced mode for the next release. I think this may be worth a major release version change (2.0.0)
For this release, we can introduce a new parameter pg_backup_method: local, which inits the pgbackrest repo on all instances and enables archiving on the primary. We can run cron tasks on the primary to actually use that repo (e.g., check pg-role first, and run the backup only if it's the primary).
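A minimal sketch of such a role-checked backup task, assuming a hypothetical `pg_role` helper (a real check could use patronictl or pg_is_in_recovery(); everything here is illustrative, not the actual Pigsty implementation):

```shell
# Hypothetical role detection: ask the local server whether it is in recovery.
# Falls back to "unknown" if psql is unavailable or the server is down.
pg_role() {
    psql -AXtqc "SELECT CASE WHEN pg_is_in_recovery() THEN 'replica' ELSE 'primary' END" 2>/dev/null \
        || echo "unknown"
}

# Run a pgbackrest backup only when this node is the primary.
# arg1: backup type (full|diff|incr), defaults to incr.
run_backup_if_primary() {
    local backup_type="${1:-incr}"
    if [[ "$(pg_role)" == "primary" ]]; then
        pgbackrest --stanza="${PG_CLUSTER:-main}" --type="${backup_type}" backup
    else
        echo "skip: not primary"
    fi
}
```

A crontab entry could then call this wrapper on every node, and only the current primary would actually take the backup, surviving Patroni switchover without rescheduling.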
I've simplified the current API design for the current release. It's still WIP, let me know if you have any thoughts about it~
#-----------------------------------------------------------------
# PGBACKREST
#-----------------------------------------------------------------
pgbackrest_enabled: false # setup pgbackrest?
pgbackrest_clean: false # whether remove existing pgbackrest data
pgbackrest_log_dir: /var/log/pgbackrest # pgbackrest log dir, default is /var/log/pgbackrest
pgbackrest_protocol: ssh # host protocol type: ssh|tls
pgbackrest_tls_port: 8432 # tls server port, default is 8432
pgbackrest_type: posix # repository type: posix|s3
pgbackrest_s3_endpoint: sss.pigsty # s3 repository endpoint
pgbackrest_s3_port: 9000 # s3 storage port
pgbackrest_s3_bucket: pgbackrest # s3 repository bucket
pgbackrest_s3_key: pgbackrest # s3 access key
pgbackrest_s3_secret: pigsty # s3 secret key
pgbackrest_s3_region: eu-west-3 # s3 repository region
pgbackrest_cipher_type: aes-256-cbc # cipher used to encrypt the repository: none|aes-256-cbc
pgbackrest_cipher_pass: KPqrcMD5N2MOnTUiJOg1DeEmMHUu1zYw6yKL4sBEPfGsJiq00AYW3Z2t0eNC4n+1 # GENERATE with openssl rand -base64 48
pgbackrest_stanza: main # stanza name
pgbackrest_compression_type: lz4 # file compression type: none|bz2|gz|lz4|zst
pgbackrest_retention_type: time # retention type for full backups: count|time
pgbackrest_retention_full: 14 # full backup retention count/time
pgbackrest_retention_diff: 3 # number of differential backups to retain
pgbackrest_from_standby: false # backup from the standby nodes
pgbackrest_archive_async: true # enables asynchronous operation for the archive-push and archive-get commands
kept
pgbackrest_enabled: false # setup pgbackrest?
pgbackrest_clean: false # whether remove existing pgbackrest data
pgbackrest_log_dir: /var/log/pgbackrest # pgbackrest log dir, default is /var/log/pgbackrest
merged
pgbackrest_retention_type: time # retention type for full backups: count|time
pgbackrest_retention_full: 14 # full backup retention count/time
pgbackrest_retention_diff: 3 # number of differential backups to retain
pgbackrest_type: posix # repository type: posix|s3
pgbackrest_s3_endpoint: sss.pigsty # s3 repository endpoint
pgbackrest_s3_port: 9000 # s3 storage port
pgbackrest_s3_bucket: pgbackrest # s3 repository bucket
pgbackrest_s3_key: pgbackrest # s3 access key
pgbackrest_s3_secret: pigsty # s3 secret key
pgbackrest_s3_region: eu-west-3 # s3 repository region
pgbackrest_cipher_type: aes-256-cbc # cipher used to encrypt the repository: none|aes-256-cbc
pgbackrest_cipher_pass: KPqrcMD5N2MOnTUiJOg1DeEmMHUu1zYw6yKL4sBEPfGsJiq00AYW3Z2t0eNC4n+1 # GENERATE with openssl rand -base64 48
We can leave the pgbackrest repo definition to one parameter, since it's really complicated (the user may want to use something like MinIO, GCP, AWS S3, Azure, etc...).
For example, the default local host backup dir repo could be defined with:
pgbackrest_repo: |                    # pgbackrest backup repo config
  repo1-path=/pg/backup/
  repo1-retention-full-type=time
  repo1-retention-full=14
  repo1-retention-diff=3
default
we can just use default values instead of a configurable entry.
pgbackrest_spool_dir: /pg/spool # use constant: /pg/tmp
pgbackrest_stanza: main # use parameter: {{ pg_cluster }}
pgbackrest_archive_async: true # always use async archive
pgbackrest_compression_type: lz4 # always use lz4 since it's really fast
discard
Leave that to future implementation
pgbackrest_from_standby: false # backup from the standby nodes
pgbackrest_protocol: ssh # host protocol type: ssh|tls
pgbackrest_tls_port: 8432 # tls server port, default is 8432
let me know if you have any thoughts about it
It seems very promising. I'll take a look today :)
The current implementation is working quite well, although I'm not sure if it's really necessary to have WAL archiving on meta nodes (do we really need PITR on metas? Isn't a daily pg_dump enough?)
I can work on pgbackrest_repo_mode for 2.0.0. Just let me know :)
Currently we have pgbackrest_repo_mode = local. Maybe we can start polishing pgbackrest with pgbackrest_repo_mode = minio:
pgbackrest_repo_mode: local # local,infra,minio,s3,...
# local: use local filesystem on primary as repo storage
# minio: use dedicated MinIO as repo storage
# infra: use special TLS pgbackrest server on infra nodes?
As for PITR on the infra CMDB, we treat the cmdb as a standard PGSQL cluster now, so its default behavior is to use a local pgbackrest repo like the others. We can disable that by setting pgbackrest_enabled = false.
If minio doesn't work well, maybe we can use a pgbackrest TLS server on infra nodes, and make that a part of role infra rather than pgsql ;)
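For reference, a hedged sketch of what the server-side config for `pgbackrest server` on an infra node might look like; option names follow the pgBackRest configuration reference, while the certificate paths and the client CN mapping are assumptions:

```ini
# hypothetical pgbackrest TLS repo server config on an infra node
[global]
tls-server-address=*
tls-server-port=8432
tls-server-cert-file=/etc/pki/infra.crt
tls-server-key-file=/etc/pki/infra.key
tls-server-ca-file=/etc/pki/ca.crt
tls-server-auth=pg-meta=*        ; client cert CN -> allowed stanzas
```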
Working on it...
I will push some work soon
Some thoughts on WIP:

When pgbackrest_repo_mode == 'local', repo1-path is forcibly set to /pg/backup, an alias of {{ pg_backup_dir }}, which is a symlink to "{{ pg_fs_bkup }}/postgres/{{ pg_cluster }}-{{ pg_version }}". The main backup location is controlled by the pg_fs_bkup variable, so if we choose to move the backup path, we can act on the pg_fs_bkup var. This prevents decoupling repo1-path from the pg_fs_bkup path. In this setup, it's essential to share the pg_fs_bkup location between pg_cluster nodes.

When pgbackrest_repo_mode == 'minio', the mcli command will take care of the cleanup tasks when pgbackrest_clean == true. Without pgbackrest_clean == true, the stanza-create & backup commands will fail due to a mismatch error, i.e.:

ERROR: [028]: backup and archive info files exist but do not match the database
ERROR: [051]: PostgreSQL version 15, system-id 7176594066682380834 do not match stanza version 15, system-id 7176592254083850830
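When this mismatch must be cleared intentionally, the sequence is essentially stop + stanza-delete + start + stanza-create. A hedged, destructive sketch (the function name, argument handling, and guard are assumptions; the pgbackrest subcommands themselves are standard):

```shell
# DESTRUCTIVE: removes all existing backups and archives for the stanza.
# Only meant for the pgbackrest_clean == true path.
reset_stanza() {
    local stanza="${1:-}"
    if [[ -z "$stanza" ]]; then
        echo "usage: reset_stanza <stanza>" >&2
        return 2
    fi
    if ! command -v pgbackrest >/dev/null 2>&1; then
        echo "error: pgbackrest not installed" >&2
        return 1
    fi
    pgbackrest --stanza="$stanza" --force stop          # pause archiving/backups
    pgbackrest --stanza="$stanza" --force stanza-delete # drop stale stanza info
    pgbackrest --stanza="$stanza" start                 # resume operations
    pgbackrest --stanza="$stanza" stanza-create         # recreate against current DB
}
```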
pgbackrest_s3_bucket: pgbackrest # MinIO/s3 repository bucket
pgbackrest_s3_key: pgbackrest # MinIO/s3 access key
pgbackrest_s3_secret: pgbackrest # MinIO/s3 secret key
pgbackrest_s3_region: eu-west-3 # MinIO/s3 repository region
Pushed https://github.com/Vonng/pigsty/commit/824481691e42385742ee2adcfab81b28c4a0d2f6 Let me know your thoughts, thanks :)
Although MinIO integration is working well in different situations, there is a strange behavior when I try to insert the MINIO playbook inside the INSTALL playbook, between the 'ETCD INIT' and 'PGSQL INIT' playbooks: when minio finishes its tasks, the next playbook (PGSQL) will work on admin_ip only, ignoring the hosts: all directive. Can you take a look at it @Vonng please?
I've encountered this several times; I'll see a way through it.
When pgbackrest_repo_mode == 'local', repo1-path is forcibly set to /pg/backup, an alias of {{ pg_backup_dir }}, which is a symlink to "{{ pg_fs_bkup }}/postgres/{{ pg_cluster }}-{{ pg_version }}". The main backup location is controlled by the pg_fs_bkup variable, so if we choose to move the backup path, we can act on the pg_fs_bkup var. This prevents decoupling repo1-path from the pg_fs_bkup path. In this setup, it's essential to share the pg_fs_bkup location between pg_cluster nodes.
Since we don't use manual WAL archive & base backup, we can overhaul pg_fs_backup usage and change the backup-related FHS.
The rest looks great to me.
I'm occupied for 2 days and will be back to test & polish the pgbackrest impl asap ;)
Since we don't use manual WAL archive & base backup, we can overhaul pg_fs_backup usage and change the backup-related FHS.
Added back configurable local repo path https://github.com/Vonng/pigsty/commit/5c7fa8c78a3ee5912fc580db343835ee537dd309
Furthermore, I added pgbackrest_deployment_mode, currently 'local', in ./roles/pgsql/defaults/main.yml. Each deployment mode (local or infra) can use a filesystem ('fs') or a 'minio' repo type.
https://github.com/Vonng/pigsty/commit/72730953f99f2fc25012ab7e4d328dc6e3069f1b
Working on infra mode...
I'm moving the MinIO provisioning tasks into role minio, e.g.:
minio_alias: sss                      # alias name for local minio deployment
minio_buckets:                        # minio bucket list
  - { name: pgsql }
  - { name: infra }
  - { name: redis }
minio_users:                          # minio user list
  - { access_key: dba        , secret_key: S3User.DBA        , policy: consoleAdmin }
  - { access_key: patroni    , secret_key: S3User.Patroni    , policy: readonly }
  - { access_key: pgbackrest , secret_key: S3User.Pgbackrest , policy: readwrite }
And I was trying to find an API design that covers all pgbackrest repo definition possibilities, while providing simple default values or switch options. A rough idea would be:
pgbackrest_method: minio              # pgbackrest repo method: posix,minio,[user-defined...]
pgbackrest_repo:                      # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
  posix:                              # default pgbackrest repo with local posix fs
    path: /pg/backup                  # local backup directory, `/pg/backup` by default
    retention_full_type: count        # retention full backups by count
    retention_full: 1                 # keep 1, at most 2 full backup when using local fs repo
  minio:                              # optional minio repo for pgbackrest
    type: s3                          # minio is s3-compatible, so s3 is used
    s3_endpoint: sss.pigsty           # minio endpoint domain name, `sss.pigsty` by default
    s3_region: us-east-1              # minio region, us-east-1 by default, useless for minio
    s3_bucket: pgsql                  # minio bucket name, `pgsql` by default
    s3_key: pgbackrest                # minio user access key for pgbackrest
    s3_key_secret: S3User.Pgbackrest  # minio user secret key for pgbackrest
    s3_uri_style: path                # use path style uri for minio rather than host style
    path: /backup/${pg_cluster}       # minio backup path, default is `/backup/${pg_cluster}`
    storage_port: 9000                # minio port, 9000 by default
    storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
    bundle: y                         # bundle small files into a single file
    cipher_type: aes-256-cbc          # enable AES encryption for remote backup repo
    cipher_pass: ${pg_cluster}.Pgbackrest # AES encryption password, ${pg_cluster} will be replaced
    retention_full_type: time         # retention full backup by time on minio repo
    retention_full: 14                # keep full backup for last 14 days
A rough idea, still WIP.
I had a similar idea.
I think this is a nice approach.
Another option can be decoupling MinIO from s3, letting the user choose another local/remote repo when preferred. The MinIO infra repo will be linked with existing local infra vars while keeping essential things configurable, for example:
pgbackrest_method: minio              # pgbackrest repo method: posix,minio,[user-defined...]
pgbackrest_repo:                      # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
  posix:                              # default pgbackrest repo with local posix fs
    path: /pg/backup                  # local backup directory, `/pg/backup` by default
    retention_full_type: count        # retention full backups by count
    retention_full: 1                 # keep 1, at most 2 full backup when using local fs repo
  minio:                              # optional minio repo for pgbackrest
    #type: s3                         # minio is s3-compatible, so s3 is used --> DEFAULT on pgbackrest.conf
    #s3_endpoint: sss.pigsty          # minio endpoint domain name, `sss.pigsty` by default --> LINKED to {{ minio_domain }}
    #s3_region: us-east-1             # minio region, us-east-1 by default, useless for minio --> STATIC on pgbackrest.conf (must be declared anyway to work)
    #s3_bucket: pgsql                 # minio bucket name, `pgsql` by default --> LINKED to {{ minio_alias }}
    #s3_key: pgbackrest               # minio user access key for pgbackrest --> LINKED to pgbackrest's {{ access_key }}
    #s3_key_secret: S3User.Pgbackrest # minio user secret key for pgbackrest --> LINKED to pgbackrest's {{ secret_key }}
    #s3_uri_style: path               # use path style uri for minio rather than host style --> DEFAULT on pgbackrest.conf
    path: /backup/${pg_cluster}       # minio backup path, default is `/backup/${pg_cluster}`
    #storage_port: 9000               # minio port, 9000 by default --> LINKED to {{ minio_port }}
    #storage_ca_file: /etc/pki/ca.crt # minio ca file path, `/etc/pki/ca.crt` by default --> DEFAULT on pgbackrest.conf
    bundle: y                         # bundle small files into a single file
    cipher_type: aes-256-cbc          # enable AES encryption for remote backup repo
    cipher_pass: ${pg_cluster}.Pgbackrest # AES encryption password, ${pg_cluster} will be replaced
    retention_full_type: time         # retention full backup by time on minio repo
    retention_full: 14                # keep full backup for last 14 days
  s3:                                 # optional s3 repo for pgbackrest
    #type: s3                         # s3 --> DEFAULT
    s3_endpoint: sss.somewhere        # s3 endpoint domain name, `sss.pigsty` by default
    s3_region: us-east-1              # s3 region, us-east-1 by default, useless for minio
    s3_bucket: pgsql                  # s3 bucket name, `pgsql` by default
    s3_key: pgbackrest                # s3 user access key for pgbackrest
    s3_key_secret: S3User.Pgbackrest  # s3 user secret key for pgbackrest
    s3_uri_style: path                # use path style uri for minio rather than host style
    path: /backup/${pg_cluster}       # s3 backup path, default is `/backup/${pg_cluster}`
    storage_port: 9000                # s3 port, 9000 by default
    s3_verify_tls: n                  # s3 certificate verify
    bundle: y                         # bundle small files into a single file
    cipher_type: aes-256-cbc          # enable AES encryption for remote backup repo
    cipher_pass: ${pg_cluster}.Pgbackrest # AES encryption password, ${pg_cluster} will be replaced
    retention_full_type: time         # retention full backup by time on minio repo
    retention_full: 14                # keep full backup for last 14 days
Also, what do you think about introducing a 'cleanup bucket if existing' job when provisioning MinIO (instead of cleaning in the pgsql playbook; useful when testing), such as:
minio_buckets:                        # minio bucket list
  - { name: pgsql , cleanup_existing: true }
  - { name: infra , cleanup_existing: false }
  - { name: redis , cleanup_existing: false }
and
- name: cleanup minio buckets if existing
  tags: minio_bucket
  when: item.cleanup_existing|bool
  become: no
  run_once: yes
  delegate_to: '{{ admin_ip }}'
  shell: mcli rm --recursive --force {{ minio_alias }}/{{ item.name }}
  with_items: '{{ minio_buckets }}'
That's great, this API brings simplicity without sacrificing any repo config coverage.
We can have two default options, local & minio, and leave s3 and infra to the docs for deep customization.
As for cleaning up the bucket during provisioning, maybe it's too risky? We can operate at the path level: delete a dir rather than destroy a bucket.
We can use pgbackrest stanza-delete in role pg_remove? Or rm -r sss/pgsql/backup/
/pg/bin/pg-backup: run on the alive primary (arg1: full/diff/incr)
/pg/bin/pg-pitr: PITR to a specific point (lsn/time/xid/backupset)

As for cleaning up the bucket during provisioning, maybe it's too risky? We can operate at the path level: delete a dir rather than destroy a bucket.

mcli rm doesn't destroy the bucket, mcli rb does. It's like doing rm -r sss/pgsql/*.

We can use pgbackrest stanza-delete in role pg_remove? Or rm -r sss/pgsql/backup/ when removing the primary.
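A rough sketch of what /pg/bin/pg-pitr could look like, written as a function for illustration. The flag usage follows pgbackrest restore's documented --type/--target/--set options; the function name, stanza handling, and argument layout are assumptions, not the actual Pigsty script:

```shell
# Hypothetical PITR helper: restore to a specific target.
# arg1: target type (default|lsn|time|xid|set), arg2: target value
pg_pitr() {
    local type="${1:-}" stanza="${PG_CLUSTER:-main}"
    case "$type" in
        default)
            # plain delta restore to the latest point
            pgbackrest --stanza="$stanza" --delta restore ;;
        lsn|time|xid)
            local target="${2:-}"
            [[ -n "$target" ]] || { echo "missing restore target" >&2; return 1; }
            pgbackrest --stanza="$stanza" --delta --type="$type" --target="$target" restore ;;
        set)
            # restore from a specific backup set
            local backupset="${2:-}"
            [[ -n "$backupset" ]] || { echo "missing backup set" >&2; return 1; }
            pgbackrest --stanza="$stanza" --delta --set="$backupset" restore ;;
        *)
            echo "usage: pg_pitr <default|lsn|time|xid|set> [target]" >&2
            return 1 ;;
    esac
}
```

e.g. `pg_pitr time '2022-11-03 01:00:00'` would perform a delta restore to that timestamp.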
That's great. It probably requires further commands such as
pgbackrest --stanza={{ pg_cluster }} stop && pgbackrest --stanza={{ pg_cluster }} stanza-delete --force
but it should work.
Maybe we can consider some other default value for path?
Having path: /backup/${pg_cluster}
will translate into a directory tree of
sss/pgsql/backup/pg-meta-15/archive/pg-meta
sss/pgsql/backup/pg-meta-15/backup/pg-meta
which I find a bit redundant.
Updates:
use a unified pgbackrest repo and the same password
Now the redundant path has been eliminated by using the same repo path:
sss/pgsql/pgbackrest/archive/pg-meta
sss/pgsql/pgbackrest/backup/pg-meta
sss/pgsql/pgbackrest/archive/pg-test
sss/pgsql/pgbackrest/backup/pg-test
and backups are removed with pgbackrest stop and pgbackrest stanza-delete:
if [[ -f /etc/pgbackrest/pgbackrest.conf ]]; then
    pgbackrest --stanza={{ pg_cluster }} --force stop
    pgbackrest --stanza={{ pg_cluster }} --force stanza-delete
fi
Besides, pgbackrest is added as the default WAL archive target, and as an optional create_replica_method:

pgbackrest:
  command: /usr/bin/pgbackrest --stanza={{ pg_cluster }} --delta restore
  keep_data: true
  no_params: true
  no_master: true
It seems the pgbackrest implementation is good enough, I'll mark this as resolved ;)
pgBackRest is a handy tool that provides reliable backup/restore. It integrates very well with Patroni clusters.
Reference: https://pgbackrest.org/