elastic / ansible-elasticsearch

Ansible playbook for Elasticsearch

Deleting native user causes all but one node to fail #710

Closed: lksnyder0 closed this issue 4 years ago

lksnyder0 commented 4 years ago

Elasticsearch version: 7.8.1

Role version: 7.8.1

JVM version (java -version):

# /usr/share/elasticsearch/jdk/bin/java -version
openjdk version "14.0.1" 2020-04-14
OpenJDK Runtime Environment AdoptOpenJDK (build 14.0.1+7)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 14.0.1+7, mixed mode, sharing)

OS version (uname -a if on a Unix-like system): Ubuntu 18.04.3

Description of the problem including expected versus actual behaviour: When multiple nodes are configured and a user is deleted, all nodes except one fail. I expected any CRUD operation on users and roles to occur only once.
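
For reference, a minimal hand-written sketch of that expectation, assuming the Security API is reachable on each node at localhost:9200 and reusing the placeholder credentials from the playbook below (an illustration only, not the role's actual task):

- name: Delete the testuser native user exactly once per play
  uri:
    url: "http://localhost:9200/_security/user/testuser"
    method: DELETE
    user: "{{ es_api_basic_auth_username }}"
    password: "{{ es_api_basic_auth_password }}"
    force_basic_auth: true
    # a 404 only means the user is already gone, which is the desired end state
    status_code: [200, 404]
  run_once: true   # only one host in the play issues the DELETE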

Playbook:

---
- name: Deploy elasticsearch with testuser
  hosts: elastic_servers
  roles:
    - role: elastic.elasticsearch
  vars:
    es_version: 7.8.1
    es_xpack_trial: true
    initial_master_nodes:
      - node1
      - node2
    es_config:
      node.name: "{{ node_name | default(ansible_hostname) }}"
      cluster.name: "test123"
      discovery.seed_hosts: "xxx.xxx.xxx.xxx:9300, yyy.yyy.yyy.yyy:9300"
      cluster.initial_master_nodes:
        - "node1"
        - "node2"
      network.host: "0.0.0.0"
    es_heap_size: 1g
    es_api_basic_auth_username: elastic
    es_api_basic_auth_password: badpassword
    es_users:
      native:
        testuser:
          password: password123
          roles:
            - superuser
        testuser2:
          password: anotherpassword
          roles:
            - superuser

- name: Deploy elasticsearch without testuser
  hosts: elastic_servers
  roles:
    - role: elastic.elasticsearch
  vars:
    es_version: 7.8.1
    es_xpack_trial: true
    initial_master_nodes:
      - node1
      - node2
    es_config:
      node.name: "{{ node_name | default(ansible_hostname) }}"
      cluster.name: "test123"
      discovery.seed_hosts: "xxx.xxx.xxx.xxx:9300, yyy.yyy.yyy.yyy:9300"
      cluster.initial_master_nodes:
        - "node1"
        - "node2"
      network.host: "0.0.0.0"
    es_heap_size: 1g
    es_api_basic_auth_username: elastic
    es_api_basic_auth_password: badpassword
    es_users:
      native:
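        # testuser is intentionally omitted here so the role deletes it; only testuser2 remains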
        testuser2:
          password: anotherpassword
          roles:
            - superuser

Inventory

[elastic_servers]
yyy.yyy.yyy.yyy node_name=node1 ansible_user=root
xxx.xxx.xxx.xxx node_name=node2 ansible_user=root

Provide logs from Ansible: https://pastebin.com/MXU8EVRF
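
My guess at what is happening: every host runs the same user-management step, the first host's DELETE of testuser succeeds, and the remaining hosts then try to delete a user that no longer exists and fail. A rough stand-in for that per-host call (placeholder credentials, not the role's actual task) looks like this:

- name: Delete testuser on every host in the play
  uri:
    url: "http://localhost:9200/_security/user/testuser"
    method: DELETE
    user: elastic
    password: badpassword
    force_basic_auth: true
    status_code: 200   # strict: any non-200 (e.g. a 404 for an already-deleted user) fails the task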

ES Logs if relevant:

jmlrt commented 4 years ago

Hi @lksnyder0, thanks for submitting this issue 👍.

jmlrt commented 4 years ago

Closed in #716