The postgresql_cluster project is designed to deploy and manage high-availability PostgreSQL clusters in production environments. This solution is tailored for use on dedicated physical servers, virtual machines, and within both on-premises and cloud-based infrastructures.
This project not only facilitates the creation of new clusters but also supports integration with pre-existing PostgreSQL instances. If you intend to upgrade your conventional PostgreSQL setup to a high-availability configuration, just set `postgresql_exists=true` in the inventory file. Be aware that switching to cluster mode requires temporarily stopping your existing PostgreSQL service, which will lead to a brief period of database downtime. Please plan this transition accordingly.
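A minimal inventory sketch for this case might look like the following (IP addresses and hostnames are illustrative; the host groups match those used later in this README):

```
[master]
# existing standalone instance
10.128.64.140 hostname=pgnode01 postgresql_exists='true'

[replica]
10.128.64.142 hostname=pgnode02
10.128.64.143 hostname=pgnode03
```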
:trophy: Use the sponsoring program to get personalized support, or just to contribute to this project.
You have three schemes available for deployment:
This is a simple scheme without load balancing (used by default).
Patroni is a template for creating your own customized high-availability solution using Python and - for maximum accessibility - a distributed configuration store like ZooKeeper, etcd, Consul or Kubernetes. It is used to automate the management of PostgreSQL instances and automatic failover.
etcd is a distributed reliable key-value store for the most critical data of a distributed system. etcd is written in Go and uses the Raft consensus algorithm to manage a highly-available replicated log. It is used by Patroni to store information about the status of the cluster and PostgreSQL configuration parameters.
What is Distributed Consensus?
"vip-manager" is used to provide a single entry point (VIP) for database access.
vip-manager (_optional, if the `cluster_vip` variable is specified_) is a service that is started on all cluster nodes and connects to the DCS. If the local node owns the leader key, vip-manager starts the configured VIP. In case of a failover, vip-manager removes the VIP on the old leader, and the corresponding service on the new leader starts it there.
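As a minimal sketch (the address is illustrative), the VIP entry point is enabled in vars/main.yml like this:

```yaml
# vars/main.yml (illustrative value)
cluster_vip: "10.128.64.145"   # virtual IP for client access to the databases
```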
PgBouncer (optional, if the `pgbouncer_install` variable is `true`) is a connection pooler for PostgreSQL.
To use this scheme, specify `with_haproxy_load_balancing: true` in the variable file vars/main.yml.
This scheme provides the ability to distribute the load on reading. This also allows us to scale out the cluster (with read-only replicas).
:heavy_exclamation_mark: Note: Your application must support sending read requests to a custom port 5001 and write requests to port 5000.
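For example, with libpq-style connection URLs (host, database, and credentials are illustrative):

```
# writes go to the HAProxy read/write port (master)
postgresql://app_user:app_pass@10.128.64.145:5000/app_db

# reads go to the HAProxy read-only port (all replicas)
postgresql://app_user:app_pass@10.128.64.145:5001/app_db
```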
HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications.
confd manages local application configuration files using templates and data from etcd or consul. It is used to automate HAProxy configuration file management.
Keepalived (_optional, if the `cluster_vip` variable is specified_) provides a virtual high-availability IP address (VIP) and a single entry point for database access. It implements VRRP (Virtual Router Redundancy Protocol) for Linux. In our configuration, keepalived checks the status of the HAProxy service and, in case of a failure, delegates the VIP to another server in the cluster.
To use this scheme, specify `dcs_type: consul` in the variable file vars/main.yml.
This scheme is suitable for master-only access and for load balancing (using DNS) for reading across replicas. Consul Service Discovery with DNS resolving is used as a client access point to the database.
Client access point (example):
master.postgres-cluster.service.consul
replica.postgres-cluster.service.consul
In addition, this can be useful for a distributed cluster across different data centers. We can specify in advance which data center the database server is located in and then use this for applications running in the same data center.
Example: replica.postgres-cluster.service.dc1.consul, replica.postgres-cluster.service.dc2.consul
It requires the installation of Consul in client mode on each application server for service DNS resolution (or you can forward DNS requests to a remote Consul server instead of installing a local Consul client).
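As an illustration (assuming a local Consul client listening on the default DNS port 8600), an application host could resolve and connect to the current master like this:

```
# resolve the current master via Consul DNS
dig @127.0.0.1 -p 8600 +short master.postgres-cluster.service.consul

# connect to the master (port 6432 if PgBouncer is enabled, otherwise 5432)
psql -h master.postgres-cluster.service.consul -p 6432 -U postgres -d postgres
```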
RedHat and Debian based distros (x86_64)
all supported PostgreSQL versions
:white_check_mark: tested, works fine: PostgreSQL 10, 11, 12, 13, 14, 15, 16
Minimum supported Ansible version: 2.11.0
This playbook requires root privileges or sudo.
Ansible (What is Ansible?)
if dcs_type: "consul", please install consul role requirements on the control node:
ansible-galaxy install -r roles/consul/requirements.yml
List of required TCP ports that must be open for the database cluster:
- 5432 (postgresql)
- 6432 (pgbouncer)
- 8008 (patroni rest api)
- 2379, 2380 (etcd)

for the scheme "[Type A] PostgreSQL High-Availability with Load Balancing":

- 5000 (haproxy - (read/write) master)
- 5001 (haproxy - (read only) all replicas)
- 5002 (haproxy - (read only) synchronous replica only)
- 5003 (haproxy - (read only) asynchronous replicas only)
- 7000 (optional, haproxy stats)

for the scheme "[Type C] PostgreSQL High-Availability with Consul Service Discovery (DNS)":

- 8300 (Consul Server RPC)
- 8301 (Consul Serf LAN)
- 8302 (Consul Serf WAN)
- 8500 (Consul HTTP API)
- 8600 (Consul DNS server)

Update your operating system on your target servers before deploying;
Make sure time synchronization (NTP) is configured. Specify `ntp_enabled: 'true'` and `ntp_servers` if you want to install and configure the ntp service.
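For example (the server addresses are illustrative; check vars/main.yml for the exact format the role expects):

```yaml
ntp_enabled: 'true'
ntp_servers:
  - "0.pool.ntp.org"
  - "1.pool.ntp.org"
```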
Fast drives and a reliable network are the most important factors for the performance and stability of an etcd (or consul) cluster.
Avoid storing etcd (or consul) data on the same drive along with other processes (such as the database) that are intensively using the resources of the disk subsystem!
Store the etcd and postgresql data on different disks (see the `etcd_data_dir` and `consul_data_path` variables); use SSD drives if possible.
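An illustrative sketch (the paths are assumptions; adjust them to your own mount points):

```yaml
etcd_data_dir: "/data/etcd"        # dedicated disk/mount for etcd
consul_data_path: "/data/consul"   # dedicated disk/mount for consul (if dcs_type: consul)
```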
See hardware recommendations and tuning guides.
It is recommended to deploy the DCS cluster on dedicated servers, separate from the database servers.
If you’d prefer a cross-data center setup, where the replicating databases are located in different data centers, etcd member placement becomes critical.
There are quite a lot of things to consider if you want to create a really robust etcd cluster, but there is one rule: do not place all etcd members in your primary data center. See some examples.
For performance reasons, synchronous replication is disabled by default.
To minimize the risk of losing data on autofailover, you can configure settings in the following way:
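A minimal sketch, assuming the playbook exposes Patroni's synchronous replication settings as variables in vars/main.yml (verify the exact variable names in your copy):

```yaml
synchronous_mode: true          # enable synchronous replication in Patroni
synchronous_mode_strict: true   # refuse writes if no synchronous replica is available
```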
sudo apt update && sudo apt install -y python3-pip sshpass git
pip3 install ansible
git clone https://github.com/vitabaks/postgresql_cluster.git
cd postgresql_cluster/
Specify (in the inventory file) the connection settings for your environment: `ansible_user`, `ansible_ssh_pass` or `ansible_ssh_private_key_file`:

nano inventory

Edit the variable file vars/main.yml:

nano vars/main.yml

Minimum set of variables:

- `proxy_env` # if required (for download packages)
- `cluster_vip` # for client access to databases in the cluster (optional)
- `patroni_cluster_name`
- `postgresql_version`
- `postgresql_data_dir`
- `with_haproxy_load_balancing` `'true'` (Type A) or `'false'`/default (Type B)
- `dcs_type` # "etcd" (default) or "consul" (Type C)

If `dcs_type: "consul"`, please install consul role requirements on the control node:

ansible-galaxy install -r roles/consul/requirements.yml

Try to connect to the hosts:

ansible all -m ping

Run the deployment playbook:

ansible-playbook deploy_pgcluster.yml
To deploy a PostgreSQL High-Availability Cluster with the TimescaleDB extension, you just need to add the `enable_timescale` variable.
Example:
ansible-playbook deploy_pgcluster.yml -e "enable_timescale=true"
See the vars/main.yml, system.yml and (Debian.yml or RedHat.yml) files for more details.
After you have successfully deployed your PostgreSQL HA cluster, you may need to scale it further. \
Use the `add_pgnode.yml` playbook for this.
> This playbook does not scale the etcd or consul cluster. While this playbook runs, the new nodes will be prepared in the same way as during the initial deployment of the cluster. But unlike the initial deployment, all the necessary **configuration files will be copied from the master server**.

###### Steps to add a new Postgres node:

1. Add a new node to the inventory file with the variable `new_node=true`
2. Run the `add_pgnode.yml` playbook

In this example, we add a node with the IP address 10.128.64.144

```
[master]
10.128.64.140 hostname=pgnode01 postgresql_exists='true'

[replica]
10.128.64.142 hostname=pgnode02 postgresql_exists='true'
10.128.64.143 hostname=pgnode03 postgresql_exists='true'
10.128.64.144 hostname=pgnode04 postgresql_exists=false new_node=true
```

Run playbook:

```
ansible-playbook add_pgnode.yml
```
While this playbook runs, the new balancer node will be prepared in the same way as during the initial deployment of the cluster. But unlike the initial deployment, all the necessary **configuration files will be copied from the first server specified in the inventory file in the "balancers" group**.

###### Steps to add a new balancer node:

Note: Used if the `with_haproxy_load_balancing` variable is set to `true`

1. Add a new node to the inventory file with the variable `new_node=true`
2. Run the `add_balancer.yml` playbook

In this example, we add a balancer node with the IP address 10.128.64.144

```
[balancers]
10.128.64.140
10.128.64.142
10.128.64.143
10.128.64.144 new_node=true
```

Run playbook:

```
ansible-playbook add_balancer.yml
```
Create new clusters from your existing backups with pgBackRest or WAL-G \ Point-In-Time-Recovery
##### Create cluster with pgBackRest:

1. Edit the `main.yml` variable file

```
patroni_cluster_bootstrap_method: "pgbackrest"

patroni_create_replica_methods:
  - pgbackrest
  - basebackup

postgresql_restore_command: "pgbackrest --stanza={{ pgbackrest_stanza }} archive-get %f %p"

pgbackrest_install: true
pgbackrest_stanza: "stanza_name"  # specify your --stanza
pgbackrest_repo_type: "posix"  # or "s3"
pgbackrest_repo_host: "ip-address"  # dedicated repository host (if repo_type: "posix")
pgbackrest_repo_user: "postgres"  # if "repo_host" is set
pgbackrest_conf:  # see more options https://pgbackrest.org/configuration.html
  global:  # [global] section
    - {option: "xxxxxxx", value: "xxxxxxx"}
    ...
  stanza:  # [stanza_name] section
    - {option: "xxxxxxx", value: "xxxxxxx"}
    ...

pgbackrest_patroni_cluster_restore_command: '/usr/bin/pgbackrest --stanza={{ pgbackrest_stanza }} --type=time "--target=2020-06-01 11:00:00+03" --delta restore'
```

example for S3: https://github.com/vitabaks/postgresql_cluster/pull/40#issuecomment-647146432

2. Run playbook: `ansible-playbook deploy_pgcluster.yml`

##### Create cluster with WAL-G:

1. Edit the `main.yml` variable file

```
patroni_cluster_bootstrap_method: "wal-g"

patroni_create_replica_methods:
  - wal_g
  - basebackup

postgresql_restore_command: "wal-g wal-fetch %f %p"

wal_g_install: true
wal_g_version: "2.0.1"
wal_g_json:  # config https://github.com/wal-g/wal-g#configuration
  - {option: "xxxxxxx", value: "xxxxxxx"}
  - {option: "xxxxxxx", value: "xxxxxxx"}
  ...

wal_g_patroni_cluster_bootstrap_command: "wal-g backup-fetch {{ postgresql_data_dir }} LATEST"
```

2. Run playbook: `ansible-playbook deploy_pgcluster.yml`

##### Point-In-Time-Recovery:

You can run an automatic restore of your existing patroni cluster for PITR: specify the required parameters in the main.yml variable file and run the playbook with the tag:

```
ansible-playbook deploy_pgcluster.yml --tags point_in_time_recovery
```

Recovery steps with pgBackRest:

```
1. Stop patroni service on the Replica servers (if running);
2. Stop patroni service on the Master server;
3. Remove patroni cluster "xxxxxxx" from DCS (if exist);
4. Run "/usr/bin/pgbackrest --stanza=xxxxxxx --delta restore" on Master;
5. Run "/usr/bin/pgbackrest --stanza=xxxxxxx --delta restore" on Replica (if patroni_create_replica_methods: "pgbackrest");
6. Waiting for restore from backup (timeout 24 hours);
7. Start PostgreSQL for Recovery (master and replicas);
8. Waiting for PostgreSQL Recovery to complete (WAL apply);
9. Stop PostgreSQL instance (if running);
10. Disable PostgreSQL archive_command (if enabled);
11. Start patroni service on the Master server;
12. Check PostgreSQL is started and accepting connections on Master;
13. Make sure the postgresql users (superuser and replication) are present, and password does not differ from the specified in vars/main.yml;
14. Update postgresql authentication parameter in patroni.yml (if superuser or replication users is changed);
15. Reload patroni service (if patroni.yml is updated);
16. Start patroni service on Replica servers;
17. Check that the patroni is healthy on the replica server (timeout 10 hours);
18. Check postgresql cluster health (finish).
```

**Why disable archive_command?**

This is necessary to avoid conflicts in the archived log storage when multiple clusters try to send WALs to the same storage, for example when you make multiple clones of a cluster from one backup.

You can change this parameter using `patronictl edit-config` after restore.
\ Or set `disable_archive_command: false` to not disable archive_command after restore.
I recommend that you study the following materials for further maintenance of the cluster:
To change the PostgreSQL configuration in a cluster using automation:

1. Update the `postgresql_parameters` variable with the desired parameter changes.
2. Optionally, set `pending_restart: true` to automatically restart PostgreSQL if a parameter change requires it.
3. Run the `config_pgcluster.yml` playbook to apply the changes.
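For reference, a minimal sketch of such a change in vars/main.yml (the parameter names and values are illustrative; the option/value format follows the other variable examples in this README):

```yaml
postgresql_parameters:
  - { option: "max_connections", value: "500" }
  - { option: "shared_buffers", value: "2GB" }

pending_restart: true   # restart PostgreSQL automatically if a changed parameter requires it
```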
Infrastructure as Code (IaC) is the managing and provisioning of infrastructure through code instead of through manual processes. \
GitOps automates infrastructure updates using a Git workflow with continuous integration (CI) and continuous delivery (CD). When new code is merged, the CI/CD pipeline enacts the change in the environment. Any configuration drift, such as manual changes or errors, is overwritten by GitOps automation so the environment converges on the desired state defined in Git.
Once the cluster is deployed, you can use the `config_pgcluster.yml` playbook to integrate with Git to manage cluster configurations. \
For example, GitHub Actions (link), GitLab CI/CD (link)
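As an illustration only, a minimal GitHub Actions workflow might look like the following (the workflow file name, secret name, and trigger paths are assumptions, not part of this repository; the runner must be able to reach the database hosts over SSH):

```yaml
# .github/workflows/config_pgcluster.yml (hypothetical example)
name: Apply PostgreSQL cluster configuration
on:
  push:
    branches: [main]
    paths: ['vars/main.yml', 'inventory']
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Ansible
        run: pip3 install ansible
      - name: Add SSH private key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
      - name: Apply the cluster configuration
        run: ansible-playbook config_pgcluster.yml
```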
Details about IaC and GitOps:
Use the `update_pgcluster.yml` playbook to update the PostgreSQL HA cluster to a new minor version (for example 15.1 -> 15.2).
More details here
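For example (see the linked documentation for the available variables and tags):

```
ansible-playbook update_pgcluster.yml
```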
Use the `pg_upgrade.yml` playbook to upgrade PostgreSQL to a new major version (for example 14 -> 15).
More details here
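For example, an upgrade from 14 to 15 might be run as follows (the `pg_old_version`/`pg_new_version` variable names are an assumption; verify them in the linked guide):

```
ansible-playbook pg_upgrade.yml -e "pg_old_version=14 pg_new_version=15"
```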
A high-availability cluster provides an automatic failover mechanism, but it does not cover all disaster recovery scenarios. You must take care of backing up your data yourself.
Patroni nodes dump the state of the DCS options to disk on every configuration change, into the file patroni.dynamic.json located in the Postgres data directory. The master (Patroni leader) is allowed to restore these options from the on-disk dump if they are completely absent from the DCS or if they are invalid.
However, I recommend that you read the disaster recovery guide for the etcd cluster:
I can recommend the following backup and restore tools:
Do not forget to validate your backups (for example pgbackrest auto).
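For example, with pgBackRest you can quickly inspect and verify the repository (the stanza name is illustrative):

```
pgbackrest --stanza=stanza_name info    # list existing backups and their status
pgbackrest --stanza=stanza_name check   # verify that archiving and backups are configured correctly
```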
Should you need to start over from the very beginning, use the `remove_cluster.yml` playbook.
To prevent the playbook from being used by accident in a production environment, edit `remove_cluster.yml` and remove the safety pin. Change these variables accordingly:
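For example (the variable names below are an assumption; verify them against your copy of `remove_cluster.yml`):

```yaml
remove_postgres: true  # remove PostgreSQL and its data
remove_etcd: true      # remove etcd and its data
```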
Run the playbook, and all the data will be gone:
ansible-playbook remove_cluster.yml
A new installation can now be made from scratch.
:heavy_exclamation_mark: Be careful not to copy this playbook without the safety pin to a production environment.
Join our sponsorship program to directly contribute to our project's growth and gain exclusive access to personalized support. Your sponsorship is crucial for innovation and progress. Become a sponsor today!
Support our work through GitHub Sponsors
Support our work through Patreon
Support our work through a crypto wallet:
USDT (TRC20): TSTSXZzqDCUDHDjZwCpuBkdukjuDZspwjj
Licensed under the MIT License. See the LICENSE file for details.
Vitaliy Kukharik (PostgreSQL DBA) \ vitabaks@gmail.com
Feedback, bug reports, and feature requests are welcome!