Closed: tbauriedel closed this issue 2 weeks ago
You can also use Flush Handlers:

```yaml
tasks:
  - shell: some tasks go here
  - meta: flush_handlers
```

Did you try to use `flush_handlers` in the playbook? Would this be a viable option?
Sorry for the late response.

Sure, flushing could "solve" this. But in a deployment where the roles are not called via includes within tasks but only via `roles: [...]`, this will not work.
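For illustration, a classic `roles:`-based play (the role list here is only a sketch) offers no place to insert a `meta: flush_handlers` task between roles:

```yaml
- hosts: domain.net
  become: yes
  roles:
    # Handlers notified by these roles all run at the end of the play;
    # there is no slot between list entries for "- meta: flush_handlers"
    - icinga.icinga.icinga2
    - icinga.icinga.icingaweb2
```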
I could be wrong here, but from my point of view it should be possible to execute a single playbook without additional tasks to include roles or flush handlers.
```yaml
- hosts: domain.net
  become: yes
  tasks:
    - ansible.builtin.include_role:
        name: int.ntp
        apply:
          tags: icinga
      tags: always
    - ansible.builtin.include_role:
        name: int.env
    - ansible.builtin.include_role:
        name: int.apt
    # - { role: int.certbot, tags: ['certbot'] }
    - ansible.builtin.include_role:
        name: geerlingguy.mysql
        apply:
          tags: mysql
      tags: always
    - ansible.builtin.include_role:
        name: icinga.icinga.repos
        apply:
          tags: icinga
      tags: always
    - ansible.builtin.include_role:
        name: icinga.icinga.icinga2
        apply:
          tags: icinga
      tags: always
    - ansible.builtin.include_role:
        name: icinga.icinga.icingadb
        apply:
          tags: icinga
      tags: always
    - ansible.builtin.include_role:
        name: icinga.icinga.icingadb_redis
        apply:
          tags: icinga
      tags: always
    - ansible.builtin.include_role:
        name: icinga.icinga.monitoring_plugins
        apply:
          tags: icinga
      tags: always
    - meta: flush_handlers
      tags: icinga
    - ansible.builtin.include_role:
        name: icinga.icinga.icingaweb2
        apply:
          tags: icinga,icingaweb2
      tags: always
```
Results in the following error:

```
TASK [icinga.icinga.icingaweb2 : Module Director | Run kickstart if required _raw_params=icingacli director kickstart run] ***
fatal: [domain.net]: FAILED! => {"changed": true, "cmd": "icingacli director kickstart run", "delta": "0:00:00.141587", "end": "2024-10-14 10:59:23.531171", "msg": "non-zero return code", "rc": 1, "start": "2024-10-14 10:59:23.389584", "stderr": "ERROR: RuntimeException in /usr/share/icingaweb2/modules/director/library/Director/Core/RestApiClient.php:149 with message: Unable to authenticate, please check your API credentials", "stderr_lines": ["ERROR: RuntimeException in /usr/share/icingaweb2/modules/director/library/Director/Core/RestApiClient.php:149 with message: Unable to authenticate, please check your API credentials"], "stdout": "", "stdout_lines": []}
```
I see the API credentials in `/etc/icinga2/conf.d/api-users.conf`.
You are already using `flush_handlers`, but you need to use it after the icinga2 role. Icinga 2 reads its new configuration, in this case the API users, on reload; only after the reload is the configuration active.

But I'll discuss this; I'm aware that the Icinga 2 core should be usable after the role is done.
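Concretely, the suggested placement (a sketch based on the playbook above) would be:

```yaml
- ansible.builtin.include_role:
    name: icinga.icinga.icinga2
  tags: always
# Flush right after the icinga2 role so the restart handler fires here
# and the new API users become active before later roles need them
- meta: flush_handlers
  tags: icinga
```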
I've placed the handler right after the `icinga.icinga.icinga2` role now, with the same result. Even when running `icingacli director kickstart run` manually I get the same error, so there must be an error somewhere else, I guess.

This should be the default Director port, right?

```
tcp LISTEN 0 4096 *:5665 *:*
```
api-users.conf generated by Ansible:

```
object ApiUser "root" {
  password = "passapi"
  permissions = [ "*", ]
}
```
/etc/icingaweb2/modules/director/kickstart.ini:

```ini
[config]
endpoint = domain.net
host = 127.0.0.1
username = root
password = passapi
```
/etc/icingaweb2/modules/director/config.ini:

```ini
[db]
resource = director_db
```
EDIT: After running `icinga2 api setup` on the CLI and restarting the icinga2 service, I am able to run `icingacli director kickstart run`. Something is off with Ansible.

EDIT2: Isn't this crucial step missing, as per "You can run the CLI command icinga2 api setup to enable the api"?
I figured it out, I guess. The node setup command adds the line `include "conf.d/api-users.conf"`. Due to `icinga2_confd: false` (disable example configuration), the expected include `include_recursive "conf.d"` is not set during templating, so the API config is not used at all. During templating, such an include line should be added to icinga2.conf for each active config file used by features/objects. Or the example config could be purged/moved.
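For reference, this is the include that the stock configuration carries in `/etc/icinga2/icinga2.conf` and that goes missing when `conf.d` is disabled:

```
// /etc/icinga2/icinga2.conf (stock configuration)
include_recursive "conf.d"
```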
But the file is not being used or referenced:

```yaml
icinga2_objects:
  domain.net:
    - name: root
      type: ApiUser
      file: conf.d/api-users.conf
```
EDIT: Or what is the proper way to handle `icinga2_confd: false`? Which variable should be set instead to make use of `file: conf.d/api-users.conf`?
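Not an authoritative answer, but one pattern seen in the collection's examples is to place objects under `zones.d/` instead, which Icinga 2 reads independently of `conf.d`. The zone name below is a placeholder and must match an existing zone; check the collection documentation before relying on this:

```yaml
icinga2_objects:
  domain.net:
    - name: root
      type: ApiUser
      # zones.d is read via the zone config sync, independent of conf.d;
      # "main" is a hypothetical zone name
      file: zones.d/main/api-users.conf
```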
I've moved two tasks to `handlers/`. Handlers should be run in the same order they are notified in, meaning: role 'icinga2' -> "notify restart icinga2" -> role 'icingaweb2' -> "notify director migration/kickstart" -> "actual restart icinga2" -> "actual director migration/kickstart".
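A sketch of what those two handlers could look like (the handler names and exact tasks are illustrative, not necessarily the collection's actual ones):

```yaml
# Handler in the icinga2 role (illustrative)
- name: restart icinga2
  ansible.builtin.service:
    name: icinga2
    state: restarted

# Handler in the icingaweb2 role (illustrative); because it is notified
# after the restart handler, it also runs after it
- name: run director kickstart
  ansible.builtin.command: icingacli director kickstart run
```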
Does that fix the problem for you @tbauriedel?
@Donien that should work, thank you!
The kickstart of the Icinga Director cannot be executed in a single fullstack deployment.

The required state of Icinga 2 for the kickstart is created by the restart handler, which is executed at the end of all roles. If you want to install and kickstart the Icinga Director in order to have a complete setup, this will fail because the Icinga 2 handler has logically not yet run at this point.

There are workarounds:

- Set `run_kickstart: false` and kickstart manually.
- Run `icinga.icinga.icingaweb2` only, in a separate play that is started afterwards.

In my opinion a fullstack deployment (including Icinga Director kickstart) should work. It could be an idea to flush the handlers after the icinga2 role.

The best practice from Ansible's point of view with handlers unfortunately collides with the functionality of the Icinga components.
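The `run_kickstart: false` workaround could be sketched like this (assuming `run_kickstart` is the icingaweb2 role variable named above):

```yaml
- ansible.builtin.include_role:
    name: icinga.icinga.icingaweb2
  vars:
    # Skip the automatic Director kickstart; run
    # "icingacli director kickstart run" manually once Icinga 2
    # has been restarted with the new API users
    run_kickstart: false
```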