redhat-cop / aap_utilities

Ansible Collection for automated deployment of AAP and other objects for general use
https://galaxy.ansible.com/infra/aap_utilities
GNU General Public License v3.0

[ENH] add code to make all hosts known to each other to avoid issues at deployment time #55

Closed: ericzolf closed this issue 1 month ago

ericzolf commented 2 years ago

Assuming we create the hosts fully automatically, they aren't known to each other (their SSH host keys have never been exchanged), so setup.sh can't work properly.

ericzolf commented 2 years ago

sudo ANSIBLE_HOST_KEY_CHECKING=False ./setup.sh is only a partial solution, because the setup playbooks also call rsync, which ignores the environment variable but still relies on SSH. In any case, I'd like a more generic solution that could be reused for other purposes.
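
For illustration, one more generic approach could pre-populate the installer node's known_hosts so that anything going over SSH, rsync included, already trusts the other hosts. This is only a sketch of the idea, not code from the collection; it assumes the inventory hostnames resolve and that ECDSA host keys are wanted:

- name: Scan the ECDSA host key of every inventory host
  ansible.builtin.command: "ssh-keyscan -t ecdsa -T 5 {{ item }}"
  loop: "{{ groups['all'] }}"
  register: scanned_host_keys
  changed_when: false

- name: Record the scanned keys in root's known_hosts on the installer node
  ansible.builtin.known_hosts:
    path: /root/.ssh/known_hosts
    name: "{{ item.item }}"
    key: "{{ item.stdout }}"
    state: present
  loop: "{{ scanned_host_keys.results }}"
  when: item.stdout | length > 0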

djdanielsson commented 1 year ago

is this still an issue?

ericzolf commented 1 year ago

I still think so.

Tompage1994 commented 1 year ago

I wonder what this should look like.

Perhaps a role, which could optionally be included, that sets up /etc/hosts entries for each of the nodes?
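
A rough sketch of what such an optional task could look like, assuming ansible_host is set for every node (the names here are illustrative, not the collection's variables):

- name: Ensure every node has an /etc/hosts entry for every other node
  ansible.builtin.lineinfile:
    path: /etc/hosts
    regexp: "^{{ hostvars[item].ansible_host }}\\s"
    line: "{{ hostvars[item].ansible_host }} {{ item }}"
  loop: "{{ groups['all'] }}"
  when: hostvars[item].ansible_host is defined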

djdanielsson commented 2 months ago

I wonder if we should just disable host key checking in ansible.cfg
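
If that route were taken, it would presumably just be the usual setting, shown here only for illustration:

[defaults]
host_key_checking = False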

anderpups commented 1 month ago

As a workaround, I used the tasks below as part of my preflight.

- name: Create 'aap_install_user' for installer to use
  ansible.builtin.user:
    name: "{{ aap_install_user }}"
    comment: "{{ aap_install_user }} orchestrator user"
    home: "/home/{{ aap_install_user }}"
    groups: "wheel"
    password: "{{ aap_install_user_password }}"

- name: Get the aap_install_user's password expiry
  ansible.builtin.shell: >-
    set -o pipefail &&
    chage -l {{ aap_install_user }} | sed -n "2p" | sed "s/.*: //g"
  when: not ansible_check_mode
  register: aap_install_user_expiry
  changed_when: no

- name: Set the aap_install_user password to never expire
  ansible.builtin.command: "chage -M -1 {{ aap_install_user }}"
  when: aap_install_user_expiry.stdout != "never"

- name: Allow passwordless sudo for {{ aap_install_user }}
  ansible.builtin.template:
    src: install_user_sudoers_file.j2
    dest: "/etc/sudoers.d/{{ aap_install_user }}"
    mode: "600"
    owner: root
    group: root

- name: Grab ssh host_key from all nodes
  ansible.builtin.slurp:
    src: /etc/ssh/ssh_host_ecdsa_key.pub
  register: ssh_host_key

- name: Do stuff on the orchestrator_node
  when: orchestrator_node is defined
  block:
    - name: Verify orchestrator_node .ssh directory exists
      ansible.builtin.file:
        path: "/root/.ssh"
        state: directory
        owner: root
        group: root
        mode: "0700"

    - name: Generate a new ssh public private key pair on the orchestrator_node
      community.crypto.openssh_keypair:
        path: /root/.ssh/id_rsa
        type: rsa
        size: 4096
        state: present
        comment: "ansible automation platform installer node"

    - name: Grab ssh public key from control node
      ansible.builtin.slurp:
        src: /root/.ssh/id_rsa.pub
      register: ssh_public_key

    - name: Install sshd public keys for all hosts to install node known_hosts
      ansible.builtin.known_hosts:
        path: /root/.ssh/known_hosts
        name: "{{ item }}"
        key: "{{ item }},{{ hostvars[item].ansible_host }} {{ hostvars[item].ssh_host_key.content | b64decode }}"
        state: present
      loop: "{{ groups.all }}"

- name: Install authorized ssh key for control node on all hosts
  ansible.posix.authorized_key:
    user: "{{ aap_install_user }}"
    state: present
    key: "{{ hostvars[orchestrator_node_host_vars.inventory_hostname].ssh_public_key.content | b64decode }}"

djdanielsson commented 1 month ago

We have decided that this is a prerequisite for this collection to work: since you already need to provide SSH keys at this point, you should also have handled host key checking in some way.
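
For a first run where nothing has been pre-seeded yet, one way a user could handle host key checking on their side is to let OpenSSH record keys on first contact, for example via the inventory passed to the collection. This is just a sketch: StrictHostKeyChecking=accept-new requires OpenSSH 7.6 or newer, and where exactly the variable is set is up to the user.

all:
  vars:
    ansible_ssh_common_args: "-o StrictHostKeyChecking=accept-new"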