Closed: klaudyszko closed this issue 8 months ago
all right
If you install pgbouncer, then the port for accessing the database is 6432.
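In practice that means client connection strings for this setup point at the virtual IP on 6432 rather than Postgres's default 5432. A minimal sketch; the VIP, user, and database name below are placeholders, not values from this thread:

```shell
# Client-facing connection URI for type B: virtual IP + pgbouncer port.
# VIP, app_user, and app_db are placeholders; adjust to your own setup.
VIP="1.2.3.4"
PGB_PORT=6432
echo "postgresql://app_user@${VIP}:${PGB_PORT}/app_db"
```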
I'm wondering why pgbouncer didn't start after deploy?
I don't know, since you haven't provided any logs for analysis.
Sure, what logs do you want to see?
For example, here is the output of systemctl status for pgbouncer:
```
root@psql-standby02:~# systemctl status pgbouncer
○ pgbouncer.service - pgBouncer connection pooling for PostgreSQL
     Loaded: loaded (/etc/systemd/system/pgbouncer.service; disabled; vendor preset: enabled)
     Active: inactive (dead)
```
It looks like this on every one of my three nodes. Also, nothing is listening on port 6432.
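A quick way to confirm whether anything is accepting connections on the pgbouncer port; this is a hedged sketch that probes 127.0.0.1 on the default port 6432 (both are assumptions, adjust as needed):

```shell
# Probe whether anything accepts TCP connections on a given local port.
# Uses bash's /dev/tcp pseudo-device, so run this under bash.
check_port() {
  local port="$1"
  if timeout 1 bash -c "exec 3<>/dev/tcp/127.0.0.1/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}
check_port 6432   # prints "open" once pgbouncer is actually running
```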
I've probably missed something in the configuration, in vars or elsewhere, but I have no idea where. Or maybe some handler should restart the pgbouncer service here?
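For the record, the kind of handler being suggested would look roughly like this in Ansible terms. This is a hypothetical sketch, not the project's actual role code; the handler name and file path are assumptions:

```yaml
# hypothetical sketch, e.g. roles/pgbouncer/handlers/main.yml
- name: Restart pgbouncer
  ansible.builtin.systemd:
    name: pgbouncer
    state: restarted
    enabled: true        # also enable it so it survives a reboot
    daemon_reload: true  # pick up unit-file changes first
```

A task that templates the pgbouncer config would then notify this handler so the service is (re)started after deploy.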
This is strange, because we enable the pgbouncer service by default.
Try running it manually:

```
systemctl enable pgbouncer
systemctl start pgbouncer
systemctl status pgbouncer
```
I will try to reproduce the problem to fix it
Yup, I started it manually and everything works fine :) except for vip-manager, which didn't switch the VIP to the new master after I shut down my first master node. If I turn the node back on, it works again and the VIP is assigned to the new master node (which was previously a standby, of course). I'm adding this information here just for the record.
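When debugging this kind of failover, it helps to check on each node whether it currently holds the VIP, and then look at vip-manager's own logs around the switchover. A hedged sketch; `1.2.3.4` stands in for the `cluster_vip` value from vars/main.yml:

```shell
# Report whether the given VIP appears in this host's address list.
# 1.2.3.4 is a placeholder for the cluster_vip from vars/main.yml.
vip_owner() {
  local vip="$1"
  if ip -brief address 2>/dev/null | grep -qF "$vip"; then
    echo "VIP ${vip} is on this node"
  else
    echo "VIP ${vip} not on this node"
  fi
}
vip_owner 1.2.3.4

# Then inspect what vip-manager itself was doing around the failover:
#   journalctl -u vip-manager --since "-15 min" --no-pager
```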
The cluster variant using haproxy works very well.
If I find anything I'll let you know. I'll also keep an eye on the project and this issue; maybe you will find a resolution.
Thanks again :)
Have you had a similar problem with vip-manager? https://github.com/vitabaks/postgresql_cluster/issues/292
Are your servers located at a cloud provider or in your own data center?
It's a private DC, no cloud services.
In my case I created 3 nodes: 1 master and 2 replicas. I used type B, so I set with_haproxy_load_balancing to false. I added my hosts to the inventory. The vip-manager version was 2.1.0. The cluster came up, I created some databases, added some data, and replication worked fine. I manually started pgbouncer, and my app could see the DB on 1.2.3.4:6432. The virtual IP was on the master node. Perfect.
I decided to test the case where my master node goes down, so I turned off the master VM and waited for the VIP switch and the promotion. One of my standbys was promoted, but the VIP switch never happened until I turned the "old" master back on.
If I use the installation with haproxy (that type uses keepalived), the VIP is on the master node; after turning off the master node, one of the standbys is promoted and takes over the VIP, and the DB is accessible on 1.2.3.4:5000 after a few seconds.
So you could be right, and it's either a bug in vip-manager or PEBKAC.
Which Linux distribution do you use?
Ubuntu 22.04.3
@klaudyszko If you have time please test this PR: https://github.com/vitabaks/postgresql_cluster/pull/454
@vitabaks sure, I'll check it and let you know. Thanks
Hello
Thank you for the great code for deploying a Postgres HA cluster. I have one question. When I'm using type B (just the HA cluster, without haproxy), I assume that my cluster should be accessible via the virtual IP defined in vars/main.yml on port 6432 (pgbouncer). Am I right? If so, I'm wondering why pgbouncer didn't start after deploy.
Thank you in advance for your answer