vitabaks / postgresql_cluster

PostgreSQL High-Availability Cluster (based on "Patroni" and DCS "etcd" or "consul"). Automating with Ansible.
MIT License
1.29k stars · 352 forks

unable to SSH into your server after a reboot #411

Closed emanfeah closed 2 months ago

emanfeah commented 10 months ago

Hello again,

I don't know why, but when I reboot a replica server I can't SSH into it. It gives me:

connection refused on port 22

Maybe it's because of something in the playbook?

vitabaks commented 10 months ago

Hmm, I do not have such a problem.

Try connecting with the `ssh -v` option to get detailed output and a trace that can help identify possible problems.

Can you connect to the server in another way? For example, via a console in your cloud platform or hypervisor, to check the system logs and the status of the sshd service.
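Before digging into logs, a quick reachability probe from another host can narrow things down. A minimal sketch, assuming bash and coreutils `timeout` are available; the host value is a placeholder, not from this thread:

```shell
# Probe whether anything answers on the SSH port (placeholder host).
# "connection refused" returns quickly, while a firewall that silently
# drops packets makes the probe time out -- both land in the else branch,
# but the timing difference is a useful hint.
host=127.0.0.1   # replace with the replica's private IP
port=22
if timeout 2 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
  result="port $port open"
else
  result="port $port closed or filtered"
fi
echo "$result"
```

If the port reports open but `ssh` still fails, the problem is past the firewall (sshd configuration, host keys); if it is closed or filtered, look at the firewall rules first.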

By the way, have you enabled the firewall_enabled_at_boot variable for configuring iptables?

emanfeah commented 10 months ago
```yaml
firewall_enabled_at_boot: true  # 'true' to configure the firewall (iptables)

firewall_allowed_tcp_ports_for:
  master:
    - "8008"
    - "5432"
    - "6432"
  replica:
    - "8008"
    - "5432"
    - "6432"
  pgbackrest: []
  postgres_cluster:
    - "{{ ansible_ssh_port | default(22) }}"
    - "{{ postgresql_port }}"
    - "{{ pgbouncer_listen_port }}"
    - "8008"
    - "19999"  # Netdata
#    - "10050"  # Zabbix agent
#    - ""
  etcd_cluster:
    - "{{ ansible_ssh_port | default(22) }}"
    - "2379"  # ETCD port
    - "2380"  # ETCD port
#    - ""
```

Is that what you mean?

vitabaks commented 10 months ago

Yes, judging by this example you have the firewall enabled.

But you also have a rule for SSH (ansible_ssh_port, or port 22 by default) for the postgres_cluster group servers, so there should be no problem with SSH access being blocked.
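That rule works because of the `{{ ansible_ssh_port | default(22) }}` entry in the variables above: when ansible_ssh_port is not set in the inventory, the firewall rule falls back to port 22. A small shell sketch of that fallback logic (the variable names here are illustrative, this is not what Ansible itself runs):

```shell
# Mimic Jinja2's "| default(22)": use 22 when ansible_ssh_port is unset
# (the shell ":-" form also covers the empty string, a minor difference
# from Jinja2's default filter, which only fires for undefined variables).
ansible_ssh_port=""            # illustrative: not set in the inventory
ssh_rule_port=${ansible_ssh_port:-22}
echo "$ssh_rule_port"
```

So unless SSH runs on a non-standard port that was never put into ansible_ssh_port, the generated iptables rules should keep port 22 open.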

If you can connect to the server (via the management console), check the output of the `iptables -L` command as well as the system logs.

And the firewall service status:

sudo systemctl status firewall
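What to look for in the `iptables -L -n` output is an ACCEPT rule for dpt:22 ahead of any DROP rule. The fragment below is a hypothetical sample for illustration, not output from a real host; the filtering step is the same one you would run on real output:

```shell
# Hypothetical iptables -L -n fragment (illustrative only): with the
# firewall role active, an ACCEPT rule for dpt:22 must precede the DROP.
sample='ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:22
DROP       all  --  0.0.0.0/0            0.0.0.0/0'

# Filter for the SSH rule, as you would on the real output:
ssh_rule=$(printf '%s\n' "$sample" | grep 'dpt:22')
echo "$ssh_rule"
```

If the grep comes back empty on the real host, the SSH rule was not applied at boot, which would explain SSH dying exactly after a reboot.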
fatmaAliGamal commented 10 months ago

Can you check the public IP of the servers, to see whether it changes after a reboot?

vitabaks commented 9 months ago

If you are talking about a public IP address, then make sure that it is permanent (static); otherwise it may change every time you restart the server.

Also make sure that you have specified private IP addresses in the inventory so that the cluster components listen on private addresses, not public ones.

emanfeah commented 9 months ago

> Also make sure that you have specified private IP addresses in the inventory so that the cluster components listen on private addresses, not public ones.

Could you please give me more detail...?

vitabaks commented 9 months ago

See README https://github.com/vitabaks/postgresql_cluster#deployment-quick-start

Specify (non-public) IP addresses and connection settings (ansible_user, and ansible_ssh_pass or ansible_ssh_private_key_file) for your environment.

A comment from the inventory file:

> The specified ip addresses will be used to listen by the cluster components.

emanfeah commented 9 months ago

Yes, I specified them and it worked fine for me, but the problem is that I can't SSH on port 22 after a reboot.

vitabaks commented 9 months ago

> please give me more detail

Use of Internal and External IP Addresses in Ansible Inventory

There can be some confusion when using both internal and external IP addresses within the Ansible inventory. Here is some clarification:

In Ansible, the inventory_hostname represents the hostname within your configuration. This value can be referenced within your Ansible playbooks and roles. On the other hand, ansible_host is used to specify the IP address or domain name where Ansible should establish a connection to the remote host.

When setting these values in the format private_ip_address ansible_host=public_ip_address, Ansible will:

Use the private_ip_address internally within its playbooks and roles (the IP addresses specified as inventory_hostname will be used by the cluster components for listening), and connect to the host via the public_ip_address.

Example:

```ini
[etcd_cluster]
10.128.64.140 ansible_host=34.72.80.145
10.128.64.142 ansible_host=35.123.45.67
10.128.64.143 ansible_host=36.192.89.10
```

This configuration is useful when the cluster components need to communicate over internal IP addresses, but Ansible commands need to be run over the public IP address.
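To make the split concrete, here is a small shell sketch using the first address pair from the example above; it mirrors which part Ansible treats as inventory_hostname and which as ansible_host (the parsing itself is illustrative, not how Ansible reads inventory files):

```shell
# Split "private_ip ansible_host=public_ip" the way Ansible interprets it:
# the first field is inventory_hostname (what the cluster components listen
# on), the ansible_host value is where Ansible's SSH connection goes.
line="10.128.64.140 ansible_host=34.72.80.145"
inventory_hostname=${line%% *}          # everything before the first space
ansible_host=${line##*ansible_host=}    # everything after "ansible_host="
echo "listen on:  $inventory_hostname"
echo "connect to: $ansible_host"
```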

UPD:

Inventory: Add a comment about using public IP addresses - https://github.com/vitabaks/postgresql_cluster/commit/4c197115a44b1e615132c978ebe096c9a7acf8fd

vitabaks commented 9 months ago

> i can't get ssh :22 after reboot

via public IP?

emanfeah commented 9 months ago

I don't have a public IP; I only use private IPs.

Also, where do I get or find inventory_hostname and ansible_host?

```ini
# if dcs_exists: false and dcs_type: "etcd"
[etcd_cluster]   # recommendation: 3, or 5-7 nodes
10.128.64.140    # used private IP
10.128.64.142    # used private IP
10.128.64.143    # used private IP
```

Also, I SSH via the private IP using a jump server.

vitabaks commented 9 months ago

Ok. Good.

If you can connect to the server (via the management console), check the output of the `iptables -L -v` command as well as the system logs:

/var/log/auth.log
/var/log/syslog
/var/log/kern.log

And the firewall service status:

sudo systemctl status firewall
vitabaks commented 3 months ago

@emanfeah Is the problem still relevant?