Closed: dtdionne closed this issue 1 month ago
Why are you using such an old version of etcd? Use 3.5.11 or higher.
That's a great question, and I'm embarrassed to say I have no idea! Doesn't the playbook install etcd on all hosts when it's run?
I'm brand new at this too so forgive me...
```
[root@node1 postgresql_cluster]# etcd --version
etcd Version: 3.5.12
Git SHA: e7b3bb6cc
Go Version: go1.20.13
Go OS/Arch: linux/amd64
```
Please check the version on all nodes.
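One way to check this is a quick version comparison per node. A minimal sketch, assuming `sort -V` is available (it is on Alma 9) and that `3.5.11` is the minimum the maintainer asked for; the `etcd_cluster` group name in the comment is an assumption, not necessarily what your inventory uses:

```shell
# Sketch: verify an etcd version meets the 3.5.11 minimum using sort -V.
# To gather versions from every node at once, an ad-hoc run like this works
# (group name "etcd_cluster" is an assumption -- check your inventory):
#   ansible etcd_cluster -m command -a "etcd --version"
min="3.5.11"
ver="3.5.12"   # substitute: etcd --version | awk '/etcd Version/ {print $3}'
lowest=$(printf '%s\n%s\n' "$min" "$ver" | sort -V | head -n1)
if [ "$lowest" = "$min" ]; then
  echo "OK: $ver >= $min"
else
  echo "TOO OLD: $ver < $min"
fi
```

`sort -V` compares version strings numerically per component, so `3.5.9` sorts before `3.5.11`, which plain lexical sort would get wrong.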
Did I goof something in main? There weren't many options; this is on a test lab LAN, so no proxies. The playbook ran great until the end.
There's a slight chance someone goofed around on one of the hosts before I cloned this repo. I'll snapshot them all back to yum'd pristine, re-clone, and give it another go.
I checked all 3 before snapping back and they were all running the same version of etcd, but those machine states are gone now... I figured I screwed something up.
It’s installing now, some observations…
A bone-stock Alma 9 server needs to have the firewall disabled or configured; the first run failed because of this, I think. I disabled the firewall on all hosts and it got further this time, but it was stuck checking etcd health. I stepped outside on retry 5 of the 10-retry countdown, and I remember something like this happening last night. I'm an old cagey iptables guy and I'm too lazy and grumpy to even read about this newfangled firewall.
I’m pretty sure this one will fail but I’ll run the playbook again.
Once the playbook completes, it throws an error when run again (also with the cluster-clear option). Something on line 5 of main about no module named ansible.
But this appears to be great work, thank you.
> I'm an old cagey iptables guy

Please see the iptables automation: https://github.com/vitabaks/postgresql_cluster/blob/master/vars/system.yml#L128
I think it will be useful for you.
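If you'd rather configure firewalld than disable it outright, a minimal sketch of opening the usual ports follows. The port numbers are the common defaults for these components (etcd client/peer on 2379-2380, PostgreSQL on 5432, Patroni REST API on 8008), not values confirmed by this thread, so check them against your vars before relying on this:

```shell
# Sketch: open the typical cluster ports instead of disabling firewalld.
# Port numbers are assumed defaults -- verify against your configuration.
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client + peer
sudo firewall-cmd --permanent --add-port=5432/tcp        # PostgreSQL
sudo firewall-cmd --permanent --add-port=8008/tcp        # Patroni REST API
sudo firewall-cmd --reload                               # apply permanent rules
```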
I just got a VIP error. Do all the hosts have to use the same interface name? Because what I have set is correct for the host I want for client connections: 192.168.77.77 on ens160.
Usually, the default of
vip_interface: "{{ ansible_default_ipv4.interface }}"
is enough for Ansible to determine the interface name for each host.
But if you explicitly specify the interface name in a variable, then yes, in that case, it should be the same on all servers.
In any case, try to keep the servers identical.
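To confirm whether your servers really do share an interface name before hard-coding `vip_interface`, you can list the names on each host. A small sketch using `iproute2` (present on Alma 9 by default):

```shell
# List this host's network interface names, one per line:
ip -o link show | awk -F': ' '{print $2}'
# Show the default-route interface, i.e. roughly what
# ansible_default_ipv4.interface resolves to on this host:
ip -o -4 route show default | awk '{print $5}'
```

Run it on every node; if the second command prints a different name on some host, the hard-coded `vip_interface` value will be wrong there.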
What's causing this?
After the playbook finishes, I get this error...
```
[root@bibble postgresql_cluster]# ansible-playbook remove_cluster.yml
Traceback (most recent call last):
  File "/usr/local/bin/ansible-playbook", line 5, in <module>
    from ansible.cli.playbook import main
ModuleNotFoundError: No module named 'ansible'
```
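That traceback means the Python interpreter behind `/usr/local/bin/ansible-playbook` can no longer import the `ansible` package, which commonly happens when a pip-installed Ansible is removed or upgraded under a different interpreter. A diagnostic sketch (the reinstall command in the fallback message is a suggestion, not confirmed as the fix in this thread):

```shell
# Which interpreter does the entry point use? The shebang says:
head -1 /usr/local/bin/ansible-playbook
# Can that interpreter import ansible at all?
python3 -c 'import ansible; print(ansible.__version__)' \
  || echo "ansible not importable; try: python3 -m pip install --user ansible"
```

If the shebang points at a Python that differs from your `python3`, run the import check with that exact interpreter path instead.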
So I think this is resolved, though I actually don't know what caused it; my guess is someone goofed around with one of the systems. Or I guess it could have been the firewall, but I don't know. I saw where the config says it disables firewalld for RHEL, but my guess is my Alma 9 installs aren't being detected as RHEL. But again, I don't know... I'm just beginning to get my feet wet here.
Thanks for the patience and hard work.
Greetings, I'm just getting started, and indeed v3alpha throws a 404. All are pristine Alma 9 installs.