vitabaks / postgresql_cluster

PostgreSQL High-Availability Cluster (based on "Patroni" and DCS "etcd" or "consul"). Automating with Ansible.
MIT License

pgbouncer.service is not enabled #452

Closed klaudyszko closed 8 months ago

klaudyszko commented 8 months ago

Hello

Thank you for the great code for deploying a PostgreSQL HA cluster. I have one question. When I use type B (just the HA cluster, without HAProxy), I assume my cluster should be accessible via the virtual IP defined in vars/main.yml on port 6432 (PgBouncer). Am I right? If so, I'm wondering why PgBouncer didn't start after the deploy.

Thank you in advance for your answer

vitabaks commented 8 months ago

That's right.

If you install PgBouncer, then the port for accessing the database is 6432.

I'm wondering why pgbouncer didn't start after deploy?

I do not know, since you have not provided any logs for analysis.
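For example, a client would connect through the VIP like this (a sketch: the address 10.0.0.100 stands in for the cluster_vip value from vars/main.yml, and the user/database names are assumptions):

```shell
# Placeholder VIP and credentials; substitute the cluster_vip from vars/main.yml.
# PgBouncer listens on 6432, in front of PostgreSQL's usual 5432.
result=$(PGCONNECT_TIMEOUT=3 psql -h 10.0.0.100 -p 6432 -U postgres -d postgres \
    -tAc 'SELECT 1;' 2>/dev/null || echo "connection failed")
echo "psql via pgbouncer: $result"
```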

klaudyszko commented 8 months ago

Sure, what logs do you want to see?

For example, here is the output of systemctl status for pgbouncer:

root@psql-standby02:~# systemctl status pgbouncer
○ pgbouncer.service - pgBouncer connection pooling for PostgreSQL
     Loaded: loaded (/etc/systemd/system/pgbouncer.service; disabled; vendor preset: enabled)
     Active: inactive (dead)

It shows the same on each of my three nodes. Also, nothing is listening on port 6432.
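For reference, a quick way to check the unit state and the port in one go (a sketch assuming systemd and PgBouncer's default port 6432):

```shell
# Sketch assuming systemd and PgBouncer's default port 6432.
# Reports the unit's enablement state and whether anything listens on 6432.
state=$(systemctl is-enabled pgbouncer 2>/dev/null || echo "unknown")
echo "pgbouncer unit: $state"
if ss -tln 2>/dev/null | grep -q ':6432'; then
    listening="yes"
else
    listening="no"
fi
echo "listening on 6432: $listening"
```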

I probably missed something in the vars configuration, but I have no idea where. Or maybe some handler should restart the pgbouncer service here?

vitabaks commented 8 months ago

This is strange, because we enable the pgbouncer service by default.

Try running it manually:

systemctl enable pgbouncer
systemctl start pgbouncer 
systemctl status pgbouncer

I will try to reproduce the problem and fix it.

klaudyszko commented 8 months ago

Yup, I started it manually and everything works fine :) except vip-manager, which didn't switch the VIP to the new master after I shut down my first master node. If I turn the node back on, it works again and the VIP is assigned to the new master node (previously it was a standby, of course). I'm adding that information here just for the record.

The cluster using HAProxy works very well.

If I find something, I'll let you know. I'll also keep watching the project and this issue; maybe you will find a resolution.

Thanks again :)

vitabaks commented 8 months ago

Have you had a similar problem with vip-manager? https://github.com/vitabaks/postgresql_cluster/issues/292

Are your servers located at a cloud provider or in your own data center?

klaudyszko commented 8 months ago

It's a private DC, no cloud services.

In my case I created 3 nodes: 1 master and 2 replicas. I used type B, so I set with_haproxy_load_balancing to false and added my hosts to the inventory. The vip-manager version was 2.1.0. The cluster was set up; I created some databases, added some data, and replication worked fine. I started pgbouncer manually, and my app sees the DB at 1.2.3.4:6432. The virtual IP was on the master node. Perfect.

I decided to test the case where my master node is down, so I turned off the master VM and waited for the VIP switch and promotion. One of my standbys was promoted, but the VIP switch never happened until I turned the "old" master back on.
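For the record, the vip-manager logs on the promoted standby should show whether it tried to claim the VIP (a sketch, assuming the systemd unit is named vip-manager as in this project):

```shell
# Assumes vip-manager runs as the systemd unit "vip-manager".
# The pipeline stays non-fatal on hosts where journalctl or the unit is absent.
logs=$(journalctl -u vip-manager --since "10 minutes ago" --no-pager 2>/dev/null | tail -n 20 || true)
echo "last vip-manager log lines:"
echo "$logs"
```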

If I use the installation with HAProxy (that type uses keepalived), the VIP is on the master node; after turning off the master node, one of the standbys is promoted and takes over the VIP, and the DB is accessible at 1.2.3.4:5000 within a few seconds.

So you could be right, and it's either a bug in vip-manager or PEBKAC.

vitabaks commented 8 months ago

Which Linux distribution do you use?

klaudyszko commented 8 months ago

Ubuntu 22.04.3

vitabaks commented 8 months ago

@klaudyszko If you have time please test this PR: https://github.com/vitabaks/postgresql_cluster/pull/454

klaudyszko commented 8 months ago

@vitabaks sure, I'll check it and let you know. Thanks