Dialgatrainer02 opened this issue 1 week ago
Just having a look around: on the PID front, does this version still have the 32767 bug? All of my instance PIDs are over 32767, so that might be a reason why.

Edit: yup, my version is out of date and still has the bug.
The VRRP instance is in the FAULT state, so the VIP 192.168.0.200 is not added to eth0. If you remove the vrrp_track_process block (just for testing) you should find that you can ping 192.168.0.200. See the log entries:
Jun 22 16:17:37 vault3-test Keepalived_vrrp[2719]: Failed to set/clear process event listen - errno 111 - Connection refused
Jun 22 16:17:37 vault3-test Keepalived_vrrp[2719]: (VI_1) entering FAULT state (tracked process track_vault quorum not achieved)
Jun 22 16:17:37 vault3-test Keepalived_vrrp[2719]: (VI_1) entering FAULT state
You appear to have built your own kernel (I have installed AlmaLinux 9.4 in a VM and its kernel is 5.14.0). Is CONFIG_PROC_EVENTS enabled in your kernel? I suspect that the reason you are getting the Connection refused error is that your kernel is built without the proc events connector.
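One way to check is to look for CONFIG_PROC_EVENTS in the running kernel's build config (a sketch; the config file locations vary by distro, and inside an LXC container this reports the host kernel's config):

```shell
# Look for CONFIG_PROC_EVENTS in the running kernel's build config.
# Most distros ship it at /boot/config-$(uname -r); some kernels
# expose /proc/config.gz instead.
cfg="/boot/config-$(uname -r)"
if [ -r "$cfg" ]; then
    grep CONFIG_PROC_EVENTS "$cfg" || echo "CONFIG_PROC_EVENTS not set"
elif [ -r /proc/config.gz ]; then
    zgrep CONFIG_PROC_EVENTS /proc/config.gz || echo "CONFIG_PROC_EVENTS not set"
else
    echo "kernel build config not found"
fi
```

If the output is `CONFIG_PROC_EVENTS=y`, the proc events connector is available and the Connection refused error has some other cause.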
I'm running LXC, which uses the host kernel, so I need to check whether it's enabled or not. Also, using a different track script worked: curling Vault's health endpoint. Now my issue is Vault not listening on the VIP, but that's out of scope here.
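For reference, a track script along those lines might look like this hypothetical keepalived.conf sketch (the script path, the 127.0.0.1:8200 address, and the timing values are assumptions; /v1/sys/health is Vault's standard health endpoint, and without `standbyok=true` unsealed standbys answer it with HTTP 429, which `curl -f` treats as a failure):

```
vrrp_script chk_vault {
    # succeed while the local Vault instance answers its health check;
    # standbyok=true makes unsealed standbys return 200 instead of 429
    script "/usr/bin/curl -sf http://127.0.0.1:8200/v1/sys/health?standbyok=true"
    interval 2
    fall 2
    rise 1
}
```

The script block is then referenced from the vrrp_instance with a track_script statement.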
Support requests should be sent via https://groups.io/g/keepalived-users
Describe why you are unable to send the support request to the above email list (understanding why you cannot use the email list should help us improve it)
Can't figure out how to make a new group/topic for my issue, and general unfamiliarity with mailing lists.
Describe what you need help/support for
Probably a config error.
Details of what you would like to do with keepalived
I have 3 Vault nodes in a cluster and I want a virtual IP to point to a node, so that if it goes down a new leader can be elected without having to change Vault addresses.
Keepalived version
Output of keepalived -v (a later version of keepalived might be needed):

Distro (please complete the following information):
Details of any containerisation or hosted service (e.g. AWS)
Running inside LXC on Proxmox.
Configuration file
Full copy of your configuration file, obfuscated if necessary to protect passwords and IP addresses.
Leader node:
(They are generated with Ansible.)

Notify and track scripts
If any notify or track scripts are in use, please provide copies of them.
Tracking the Vault process to see if it has died.
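For context, a process-tracking setup in keepalived.conf typically looks something like this (a sketch with assumed values, not the actual Ansible-generated config; the names VI_1, track_vault, eth0, and 192.168.0.200 are taken from the logs above):

```
vrrp_track_process track_vault {
    process vault        # match the vault binary by name
    quorum 1             # at least one matching process must exist
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    virtual_ipaddress {
        192.168.0.200/24
    }
    track_process {
        track_vault
    }
}
```

When the quorum is not achieved, the instance enters the FAULT state and the VIP is removed from the interface.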
System log entries
Full keepalived system log entries from when keepalived started, if applicable.
Vault is definitely running, so I'm unsure as to why it's faulting. ps aux snippet:
Secondary node (also running a Vault instance):
Additional context
I have a 3rd node which is basically identical to the second, as they all use the same config rendered from a Jinja2 template. I can access each Vault individually but cannot access Vault through the VIP. Ping VIP:
keepalived is definitely running