Closed. KlettIT closed this issue 1 month ago.
Hi!
Thanks for your issue!
I'd love to take a look at it.
I need a little more information:
Hi,
Role Version: bodsch.mariadb (2.4.1)
OS: Ansible (Semaphore): alpine 3.20 / Target Host: Ubuntu 24.04
Playbook:
- hosts: device_roles_mariadb
  name: Install and configure mariadb galera cluster node
  tasks:
    - name: Install mariadb custom repo
      block:
        - ansible.builtin.apt_repository:
            repo: "deb [signed-by=/usr/share/keyrings/pulp-k-ops.asc] https://pulp.services.k-ops.io/pulp/content/mariadb/{{ mariadb.version }}/ubuntu {{ ansible_distribution_release }} main"
            state: present
            filename: kit_mariadb
            update_cache: false
        - ansible.builtin.apt_repository:
            repo: "deb [signed-by=/usr/share/keyrings/pulp-k-ops.asc] https://pulp.services.k-ops.io/pulp/content/mariadb/tools/ubuntu {{ ansible_distribution_release }} main"
            state: present
            filename: kit_mariadb

    - name: Install required python modules
      ansible.builtin.package:
        name: python3-pymysql
        state: present

    - name: Install required python modules
      ansible.builtin.package:
        name: mariadb-backup
        state: present

    - name: "Install mariadb"
      ansible.builtin.include_role:
        name: bodsch.mariadb
      vars:
        mariadb_python_packages: "" # workaround
        mariadb_version: "{{ mariadb.version }}"
        mariadb_use_external_repo: false
        mariadb_monitoring:
          enabled: false
        mariadb_mysqltuner: false
        mariadb_root_username: "{{ mariadb.root_username }}"
        mariadb_root_password: "{{ mariadb.root_password }}"
        mariadb_root_password_update: false
        mariadb_user_password_update: false

        mariadb_config_mysqld:
          # basic
          user: mysql
          pid_file: "{{ mariadb_pid_file }}"
          socket: "{{ mariadb_socket }}"
          datadir: /var/lib/mysql
          tmpdir: /tmp
          lc_messages_dir: /usr/share/mysql
          skip-external-locking:
          bind_address: 127.0.0.1
          lower_case_table_names: "1"
          event_scheduler: "ON"
          # Query Cache
          query_cache_type: "0"
          query_cache_limit: 3M
          query_cache_size: 16M
          tmp_table_size: "1024"
          max_heap_table_size: 64M
          join_buffer_size: 262144
          # Logging
          log_error: /var/log/mysql/error.log
          server_id: "{{ mariadb_server_id }}"
          relay_log: "{{ ansible_hostname }}-relay-bin"
          relay_log_index: "{{ ansible_hostname }}-relay-bin.index"
          # required for Wsrep GTID Mode, needs to be same path on all nodes
          log_bin: "{{ mariadb.galera.cluster_name }}-log-bin"
          log_bin_index: "{{ mariadb.galera.cluster_name }}-log-bin.index"
          log_bin_trust_function_creators: "1"
          expire_logs_days: 10
          max_relay_log_size: 100M
          max_binlog_size: 100M
          binlog_ignore_db: monitoring
          # Character sets
          character_set_server: utf8mb4
          collation_server: utf8mb4_general_ci
          # required for Wsrep GTID Mode
          log_slave_updates: true
          # timeouts
          wait_timeout: 28800
          interactive_timeout: 28800

        mariadb_config_custom:
          mariadb:
            proxy_protocol_networks: "{{ mariadb.proxy_protocol_networks | join(',') }}"

        mariadb_server_id: "{{ mariadb.galera.node.id }}"

        mariadb_config_galera:
          # Mandatory settings
          wsrep_on: "ON"
          wsrep_cluster_name: "{{ mariadb.galera.cluster_name }}"
          wsrep_provider: "/usr/lib/libgalera_smm.so"
          wsrep_cluster_address: "gcomm://{{ mariadb.galera.node_addresses | join(',') }}"
          binlog_format: "row"
          default_storage_engine: "InnoDB"
          innodb_autoinc_lock_mode: "2"
          bind-address: "0.0.0.0"
          # Galera Sync Mode
          wsrep_sst_method: "{{ mariadb.galera.sst.method }}"
          wsrep_sst_auth: "{{ mariadb.galera.sst.auth.username }}:{{ mariadb.galera.sst.auth.password }}"
          # galera node settings
          wsrep_node_address: "{{ mariadb.galera.node.address }}"
          wsrep_node_name: "{{ mariadb.galera.node.name }}"
          # GTID Mode
          wsrep_gtid_mode: "ON"
          wsrep_gtid_domain_id: "{{ mariadb.galera.gtid_domain_id }}"
          gtid_domain_id: "{{ mariadb.galera.node.id }}"
          gtid_strict_mode: "1"
          # Tuning:
          wsrep_slave_threads: 4
          innodb_flush_log_at_trx_commit: "0"
          innodb_doublewrite: "1"
          innodb_buffer_pool_size: "1G"
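(As an aside, the `wsrep_cluster_address` value in the vars above is simply `gcomm://` plus the comma-joined node address list; a minimal Python sketch of what that Jinja `join(',')` expression renders to, with an illustrative function name that is not part of the role:)

```python
def gcomm_address(node_addresses):
    """Build a Galera wsrep_cluster_address from a list of node IPs,
    mirroring the "gcomm://{{ ... | join(',') }}" Jinja template above."""
    return "gcomm://" + ",".join(node_addresses)

print(gcomm_address(["10.195.0.41", "10.195.0.42", "10.195.0.43"]))
# → gcomm://10.195.0.41,10.195.0.42,10.195.0.43
```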
Thank you! I'll extend the tests to Ubuntu 24.04 (which is not yet supported) and try to recreate that. Up to 22.04, setting the password was not a problem.
I have a problem. :(
So far I could not find a version 10.4 for Ubuntu 24.04 in any mirror. My tests are therefore running against a Debian 10.
It doesn't have to be 10.4. Just 10.4 or higher; 10.11 (LTS release) should be available.
I cannot reproduce the problem. :/
I have now written a special test for MariaDB 10.4 (and had to deal with AppArmor again). I was able to install MariaDB 10.4 (or 10.5) without any problems. Even with an empty root password.
Could you please give me the corresponding logfiles from the target system?
This is logged in the mysql.log:
2024-07-22 10:05:42 59769 [Warning] WSREP: Ignoring error 'SET PASSWORD is ignored for users authenticating via unix_socket plugin' on query. Default database: ''. Query: 'set password='REDACTED',sql_log_off=0', Error_code: 1699
I think this has nothing to do with a single MariaDB installation. It could be related to the fact that a cluster is to be set up here. I hope I am interpreting the WSREP correctly ...
I would have to build a different test for this.
Yes, it is a Galera installation. I thought the 'set password' mechanism works the same on single and cluster installations.
I thought the ‘set password’ mechanism worked the same for single and cluster installations.
I only have a test for a primary/replication installation so far. I have had no need for Galera so far.
But now I'm interested. I will create a corresponding test. But I will need your help: I would need a complete configuration of all instances, if you can (and are allowed to), I would be happy if you would support me.
Sure, I will try to help as best I can. Can you explain in more detail what you mean by "complete configuration"?
A cluster usually consists of several nodes.
I would therefore need the configuration of all nodes.
Completely, if possible. This can also be anonymised, i.e. fake network configuration. I have created a molecule configuration with a primary and 2 replicas.
In the first (local) test, everything went smoothly and the first database on the primary was successfully replicated in the cluster. In the GitHub workflows, things are not yet running smoothly.
# Ansible managed
# The MariaDB configuration file
#
# The MariaDB/MySQL tools read configuration files in the following order:
# 1. "/etc/mysql/my.cnf" (this file) to set global defaults,
# 4. "~/.my.cnf" to set user-specific options.
#
# If the same option is defined multiple times, the last one will apply.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# This group is read both by the client and the server
# use it for options that affect everything
#
[client-server]
# Import all .cnf files from configuration directory
!includedir /etc/mysql/conf.d
# The MariaDB configuration file
#
# The MariaDB/MySQL tools read configuration files in the following order:
# 0. "/etc/mysql/my.cnf" symlinks to this file, reason why all the rest is read.
# 1. "/etc/mysql/mariadb.cnf" (this file) to set global defaults,
# 2. "/etc/mysql/conf.d/*.cnf" to set global options.
# 3. "/etc/mysql/mariadb.conf.d/*.cnf" to set MariaDB-only options.
# 4. "~/.my.cnf" to set user-specific options.
#
# If the same option is defined multiple times, the last one will apply.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# If you are new to MariaDB, check out https://mariadb.com/kb/en/basic-mariadb-articles/
#
# This group is read both by the client and the server
# use it for options that affect everything
#
[client-server]
# Port or socket location where to connect
# port = 3306
socket = /run/mysqld/mysqld.sock
# Import all .cnf files from configuration directory
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mariadb.conf.d/
# THIS FILE IS OBSOLETE. STOP USING IT IF POSSIBLE.
# This file exists only for backwards compatibility for
# tools that run '--defaults-file=/etc/mysql/debian.cnf'
# and have root level access to the local filesystem.
# With those permissions one can run 'mariadb' directly
# anyway thanks to unix socket authentication and hence
# this file is useless. See package README for more info.
[client]
host = localhost
user = root
[mysql_upgrade]
host = localhost
user = root
# THIS FILE WILL BE REMOVED IN A FUTURE DEBIAN RELEASE.
Apart from the hostname variables, it is the same on all 3 nodes.
# Ansible managed
[server]
[client]
default-character-set = utf8mb4
socket = /run/mysqld/mysqld.sock
[mysql]
default-character-set = utf8mb4
[mysqld]
user = mysql
pid-file = /run/mysqld/mysqld.pid
socket = /run/mysqld/mysqld.sock
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
bind-address = 127.0.0.1
lower-case-table-names = 1
event-scheduler = ON
query-cache-type = 0
query-cache-limit = 3M
query-cache-size = 16M
tmp-table-size = 1024
max-heap-table-size = 64M
join-buffer-size = 262144
log-error = /var/log/mysql/error.log
server-id = 1
relay-log = SIT-SQLP01-relay-bin
relay-log-index = SIT-SQLP01-relay-bin.index
log-bin = galera_cluster-bin
log-bin-index = galera_cluster.index
log-bin-trust-function-creators = 1
expire-logs-days = 10
max-relay-log-size = 100M
max-binlog-size = 100M
binlog-ignore-db = monitoring
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
log-slave-updates = True
wait-timeout = 28800
interactive-timeout = 28800
[mysqld_safe]
socket = /run/mysqld/mysqld.sock
nice = 0
skip-log-error
syslog
[mysqldump]
quick
quote-names
max-allowed-packet = 16M
[galera]
wsrep-on = ON
wsrep-cluster-name = galera_cluster.local
wsrep-provider = /usr/lib/libgalera_smm.so
wsrep-cluster-address = gcomm://10.195.0.41,10.195.0.42,10.195.0.43
binlog-format = row
default-storage-engine = InnoDB
innodb-autoinc-lock-mode = 2
bind-address = 0.0.0.0
wsrep-sst-method = mariabackup
wsrep-sst-auth = sst_xtrabackup:REDACTED
wsrep-node-address = 10.195.0.41
wsrep-node-name = SIT-SQLP01
wsrep-gtid-mode = ON
wsrep-gtid-domain-id = 1337
gtid-domain-id = 1
gtid-strict-mode = 1
wsrep-slave-threads = 4
innodb-flush-log-at-trx-commit = 0
innodb-doublewrite = 1
innodb-buffer-pool-size = 1G
[embedded]
# custom configurations
[mariadb]
proxy-protocol-networks = 10.195.0.37,10.195.0.38,10.195.0.40,localhost
# It's not recommended to modify this file in-place, because it will be
# overwritten during package upgrades. If you want to customize, the
# best way is to create a file "/etc/systemd/system/mariadb.service",
# containing
# .include /usr/lib/systemd/system/mariadb.service
# ...make your changes here...
# or create a file "/etc/systemd/system/mariadb.service.d/foo.conf",
# which doesn't need to include ".include" call and which will be parsed
# after the file mariadb.service itself is parsed.
#
# For more info about custom unit files, see systemd.unit(5) or
# https://mariadb.com/kb/en/mariadb/systemd/
#
# Copyright notice:
#
# This file is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
[Unit]
Description=MariaDB 10.11.8 database server
Documentation=man:mariadbd(8)
Documentation=https://mariadb.com/kb/en/library/systemd/
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
##############################################################################
## Core requirements
##
Type=notify
# Setting this to true can break replication and the Type=notify settings
# See also bind-address mariadbd option.
PrivateNetwork=false
##############################################################################
## Package maintainers
##
User=mysql
Group=mysql
# CAP_IPC_LOCK To allow memlock to be used as non-root user
# CAP_DAC_OVERRIDE To allow auth_pam_tool (which is SUID root) to read /etc/shadow when it's chmod 0
# does nothing for non-root, not needed if /etc/shadow is u+r
# CAP_AUDIT_WRITE auth_pam_tool needs it on Debian for whatever reason
AmbientCapabilities=CAP_IPC_LOCK CAP_DAC_OVERRIDE CAP_AUDIT_WRITE
# PrivateDevices=true implies NoNewPrivileges=true and
# SUID auth_pam_tool suddenly doesn't do setuid anymore
PrivateDevices=false
# Prevent writes to /usr, /boot, and /etc
ProtectSystem=full
# Doesn't yet work properly with SELinux enabled
# NoNewPrivileges=true
# Prevent accessing /home, /root and /run/user
ProtectHome=true
# Execute pre and post scripts as root, otherwise it does it as User=
PermissionsStartOnly=true
ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld
# Perform automatic wsrep recovery. When server is started without wsrep,
# galera_recovery simply returns an empty string. In any case, however,
# the script is not expected to return with a non-zero status.
# It is always safe to unset _WSREP_START_POSITION environment variable.
# Do not panic if galera_recovery script is not available. (MDEV-10538)
ExecStartPre=/bin/sh -c "systemctl unset-environment _WSREP_START_POSITION"
ExecStartPre=/bin/sh -c "[ ! -e /usr/bin/galera_recovery ] && VAR= || \
VAR=`cd /usr/bin/..; /usr/bin/galera_recovery`; [ $? -eq 0 ] \
&& systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1"
# Needed to create system tables etc.
# ExecStartPre=/usr/bin/mysql_install_db -u mysql
# Start main service
# MYSQLD_OPTS here is for users to set in /etc/systemd/system/mariadb.service.d/MY_SPECIAL.conf
# Use the [Service] section and Environment="MYSQLD_OPTS=...".
# This isn't a replacement for my.cnf.
# _WSREP_NEW_CLUSTER is for the exclusive use of the script galera_new_cluster
ExecStart=/usr/sbin/mariadbd $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WSREP_START_POSITION
# Unset _WSREP_START_POSITION environment variable.
ExecStartPost=/bin/sh -c "systemctl unset-environment _WSREP_START_POSITION"
ExecStartPost=/etc/mysql/debian-start
KillSignal=SIGTERM
# Don't want to see an automated SIGKILL ever
SendSIGKILL=no
# Restart crashed server only, on-failure would also restart, for example, when
# my.cnf contains unknown option
Restart=on-abort
RestartSec=5s
UMask=007
##############################################################################
## USERs can override
##
##
## by creating a file in /etc/systemd/system/mariadb.service.d/MY_SPECIAL.conf
## and adding/setting the following under [Service] will override this file's
## settings.
# Useful options not previously available in [mysqld_safe]
# Kernels like killing mariadbd when out of memory because its big.
# Lets temper that preference a little.
# OOMScoreAdjust=-600
# Explicitly start with high IO priority
# BlockIOWeight=1000
# If you don't use the /tmp directory for SELECT ... OUTFILE and
# LOAD DATA INFILE you can enable PrivateTmp=true for a little more security.
PrivateTmp=false
# Set an explicit Start and Stop timeout of 900 seconds (15 minutes!)
# this is the same value as used in SysV init scripts in the past
# Galera might need a longer timeout, check the KB if you want to change this:
# https://mariadb.com/kb/en/library/systemd/#configuring-the-systemd-service-timeout
TimeoutStartSec=900
TimeoutStopSec=900
# Set the maximium number of tasks (threads) to 99% of what the system can
# handle as set by the kernel, reserve the 1% for a remote ssh connection,
# some monitoring, or that backup cron job. Without the directive this would
# be 15% (see DefaultTasksMax in systemd man pages).
TasksMax=99%
##
## Options previously available to be set via [mysqld_safe]
## that now needs to be set by systemd config files as mysqld_safe
## isn't executed.
##
# Number of files limit. previously [mysqld_safe] open-files-limit
LimitNOFILE=32768
# For liburing and io_uring_setup()
LimitMEMLOCK=524288
# Maximium core size. previously [mysqld_safe] core-file-size
# LimitCore=
# Nice priority. previously [mysqld_safe] nice
# Nice=-5
# Timezone. previously [mysqld_safe] timezone
# Environment="TZ=UTC"
# Library substitutions. previously [mysqld_safe] malloc-lib with explicit paths
# (in LD_LIBRARY_PATH) and library name (in LD_PRELOAD).
# Environment="LD_LIBRARY_PATH=/path1 /path2" "LD_PRELOAD=
# Flush caches. previously [mysqld_safe] flush-caches=1
# ExecStartPre=sync
# ExecStartPre=sysctl -q -w vm.drop_caches=3
# numa-interleave=1 equalivant
# Change ExecStart=numactl --interleave=all /usr/sbin/mariadbd......
# crash-script equalivent
# FailureAction=
Great! Thank you. I'll get round to it as soon as I get back from Amphi. 😉
A clean Galera cluster is tougher than expected. Especially if you want to restart the nodes cleanly. But I seem to have found a solution.
Hi @KlettIT !
I have created a new release and written corresponding tests.
Please have a look at the documentation on mariadb_galera and test this release.
Hi @bodsch,
thanks for the new release.
9:34:15 AM  Task 695 added to queue
9:34:20 AM  Started: 695
9:34:20 AM  Run TaskRunner with template: [Config] MariaDB Galera
9:34:20 AM  Preparing: 695
9:34:20 AM  Cloning Repository https://git.services.k-ops.io/KIT/ansible
9:34:20 AM  Cloning into 'repository_19_24'...
9:34:26 AM  installing static inventory
9:34:27 AM  Starting galaxy collection install process
9:34:27 AM  Process install dependency map
9:34:32 AM  Starting collection install process
9:34:32 AM  Downloading https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/bodsch-core-2.2.1.tar.gz to /tmp/semaphore/.ansible/tmp/ansible-local-70666h0zyr3k/tmpd1ocl00n/bodsch-core-2.2.1-14lb9p1i
9:34:34 AM  Installing 'bodsch.core:2.2.1' to '/tmp/semaphore/.ansible/collections/ansible_collections/bodsch/core'
9:34:34 AM  bodsch.core:2.2.1 was installed successfully
9:34:34 AM  Downloading https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/netbox-netbox-3.19.1.tar.gz to /tmp/semaphore/.ansible/tmp/ansible-local-70666h0zyr3k/tmpd1ocl00n/netbox-netbox-3.19.1-e9m2prgn
9:34:36 AM  Installing 'netbox.netbox:3.19.1' to '/tmp/semaphore/.ansible/collections/ansible_collections/netbox/netbox'
9:34:37 AM  netbox.netbox:3.19.1 was installed successfully
9:34:37 AM  Downloading https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/community-mysql-3.9.0.tar.gz to /tmp/semaphore/.ansible/tmp/ansible-local-70666h0zyr3k/tmpd1ocl00n/community-mysql-3.9.0-bzpjlooq
9:34:38 AM  Installing 'community.mysql:3.9.0' to '/tmp/semaphore/.ansible/collections/ansible_collections/community/mysql'
9:34:38 AM  community.mysql:3.9.0 was installed successfully
9:34:38 AM  collection/requirements.yml has no changes. Skip galaxy install process.
9:34:39 AM  Starting galaxy role install process
9:34:39 AM  - downloading role 'mariadb', owned by bodsch
9:34:40 AM  - downloading role from https://github.com/bodsch/ansible-mariadb/archive/2.5.0.tar.gz
9:34:40 AM  - extracting bodsch.mariadb to /tmp/semaphore/.ansible/roles/bodsch.mariadb
9:34:41 AM  - bodsch.mariadb (2.5.0) was installed successfully
9:34:41 AM  role/requirements.yml has no changes. Skip galaxy install process.
9:34:47 AM  [WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details

9:34:47 AM  PLAY [Install and configure mariadb galera cluster node] ***********************

9:34:47 AM  TASK [Gathering Facts] *********************************************************
9:34:50 AM  ok: [SIT-SQLP03]

9:34:50 AM  TASK [ansible.builtin.apt_repository] ******************************************
9:34:51 AM  ok: [SIT-SQLP03]

9:34:51 AM  TASK [ansible.builtin.apt_repository] ******************************************
9:34:52 AM  ok: [SIT-SQLP03]

9:34:52 AM  TASK [Install required python modules] *****************************************
9:34:53 AM  ok: [SIT-SQLP03]

9:34:53 AM  TASK [Install required python modules] *****************************************
9:34:54 AM  ok: [SIT-SQLP03]

9:34:54 AM  TASK [Install mariadb] *********************************************************

9:34:54 AM  TASK [bodsch.mariadb : prepare] ************************************************
9:34:54 AM  included: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/prepare.yml for SIT-SQLP03

9:34:54 AM  TASK [bodsch.mariadb : include OS specific configuration (Ubuntu (Debian) 24)] ***
9:34:54 AM  ok: [SIT-SQLP03]

9:34:54 AM  TASK [bodsch.mariadb : validate mariadb users] *********************************
9:34:54 AM  skipping: [SIT-SQLP03]

9:34:54 AM  TASK [bodsch.mariadb : install dependecies] ************************************

9:34:55 AM  TASK [bodsch.mariadb : repositories] *******************************************
9:34:55 AM  included: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/repositories.yml for SIT-SQLP03

9:34:55 AM  TASK [bodsch.mariadb : add apt signing key] ************************************
9:34:55 AM  skipping: [SIT-SQLP03]

9:34:55 AM  TASK [bodsch.mariadb : install mariadb repositories for debian based] **********
9:34:55 AM  skipping: [SIT-SQLP03]

9:34:55 AM  TASK [bodsch.mariadb : clean apt cache] ****************************************
9:34:55 AM  skipping: [SIT-SQLP03]

9:34:55 AM  TASK [bodsch.mariadb : clean apt cache] ****************************************
9:34:55 AM  skipping: [SIT-SQLP03]

9:34:55 AM  TASK [bodsch.mariadb : remove external repository] *****************************
9:34:55 AM  ok: [SIT-SQLP03]
9:34:56 AM  ok: [SIT-SQLP03]

9:34:56 AM  TASK [bodsch.mariadb : update package cache] ***********************************
9:34:56 AM  skipping: [SIT-SQLP03]

9:34:56 AM  TASK [bodsch.mariadb : update facts to get latest information] *****************
9:34:57 AM  ok: [SIT-SQLP03]

9:34:57 AM  TASK [bodsch.mariadb : merge mariadb configuration segment for server between defaults and custom] ***
9:34:57 AM  ok: [SIT-SQLP03]

9:34:57 AM  TASK [bodsch.mariadb : detect if mariadb installed] ****************************
9:34:58 AM  ok: [SIT-SQLP03]

9:34:58 AM  TASK [bodsch.mariadb : detect if mariadb installed] ****************************
9:34:58 AM  ok: [SIT-SQLP03]

9:34:58 AM  TASK [bodsch.mariadb : detect galera cluster] **********************************
9:34:58 AM  An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'NoneType' object has no attribute 'get'
9:34:58 AM  fatal: [SIT-SQLP03]: FAILED! => changed=false

9:34:58 AM  PLAY RECAP *********************************************************************
9:34:58 AM  SIT-SQLP03 : ok=14 changed=0 unreachable=0 failed=1 skipped=6 rescued=0 ignored=0

9:34:58 AM  Running playbook failed: exit status 2
Relevant block with -vvvv:
9:37:46 AM  TASK [bodsch.mariadb : detect if mariadb installed] ****************************
9:37:46 AM  task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/prepare.yml:74
9:37:46 AM  ok: [SIT-SQLP03] => changed=false
9:37:46 AM    ansible_facts:
9:37:46 AM      mariadb_installed: true

9:37:46 AM  TASK [bodsch.mariadb : detect galera cluster] **********************************
9:37:46 AM  task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/prepare.yml:81
9:37:46 AM  Trying secret for vault_id=default
9:37:46 AM  detect_galera({'wsrep_on': 'ON', 'wsrep_cluster_name': 'inst01.sql.services.k-sys.io', 'wsrep_provider': '/usr/lib/libgalera_smm.so', 'wsrep_cluster_address': 'gcomm://10.195.0.41,10.195.0.42,10.195.0.43', 'binlog_format': 'row', 'default_storage_engine': 'InnoDB', 'innodb_autoinc_lock_mode': '2', 'bind-address': '0.0.0.0', 'wsrep_sst_method': 'mariabackup', 'wsrep_sst_auth': 'sst_xtrabackup:Eiveiy3a', 'wsrep_node_address': '10.195.0.43', 'wsrep_node_name': 'SIT-SQLP03', 'wsrep_gtid_mode': 'ON', 'wsrep_gtid_domain_id': '1337', 'gtid_domain_id': '3', 'gtid_strict_mode': '1', 'wsrep_slave_threads': 4, 'innodb_flush_log_at_trx_commit': '0', 'innodb_doublewrite': '1', 'innodb_buffer_pool_size': '1G'}, hostvars)
9:37:46 AM  The full traceback is:
9:37:46 AM  Traceback (most recent call last):
9:37:46 AM    File "/opt/semaphore/apps/ansible/9.4.0/venv/lib/python3.11/site-packages/ansible/executor/task_executor.py", line 526, in _execute
9:37:46 AM      self._task.post_validate(templar=templar)
9:37:46 AM    File "/opt/semaphore/apps/ansible/9.4.0/venv/lib/python3.11/site-packages/ansible/playbook/task.py", line 291, in post_validate
9:37:46 AM      super(Task, self).post_validate(templar)
9:37:46 AM    File "/opt/semaphore/apps/ansible/9.4.0/venv/lib/python3.11/site-packages/ansible/playbook/base.py", line 543, in post_validate
9:37:46 AM      value = method(attribute, getattr(self, name), templar)
9:37:46 AM    File "/opt/semaphore/apps/ansible/9.4.0/venv/lib/python3.11/site-packages/ansible/playbook/task.py", line 299, in _post_validate_args
9:37:46 AM      args = templar.template(value)
9:37:46 AM    File "/opt/semaphore/apps/ansible/9.4.0/venv/lib/python3.11/site-packages/ansible/template/__init__.py", line 791, in template
9:37:46 AM      d[k] = self.template(
9:37:46 AM    File "/opt/semaphore/apps/ansible/9.4.0/venv/lib/python3.11/site-packages/ansible/template/__init__.py", line 764, in template
9:37:46 AM      result = self.do_template(
9:37:47 AM  Running playbook failed: exit status 2
Is a python package missing?
This looks more like a template error.
Would you like to compare your configuration with the one in the Molecule Test?
OK, it took me way too long to realize that you moved many options to mariadb_galera. However, after adjusting my vars, I got a bit further:
...
mariadb_galera:
  cluster_name: "{{ mariadb.galera.cluster_name }}"
  node:
    name: "{{ mariadb.galera.node.name }}"
    id: "{{ mariadb.galera.node.id }}"
    address: "{{ mariadb.galera.node.address }}"
  gtid_domain_id: "{{ mariadb.galera.node.id }}"
  node_addresses: "{{ mariadb.galera.node_addresses }}"
  sst:
    method: "{{ mariadb.galera.sst.method }}"
    auth:
      username: "{{ mariadb.galera.sst.auth.username }}"
      password: "{{ mariadb.galera.sst.auth.password }}"

mariadb_config_galera:
  # Mandatory settings
  wsrep_on: "ON"
  wsrep_provider: "/usr/lib/libgalera_smm.so"
  binlog_format: "row"
  default_storage_engine: "InnoDB"
  innodb_autoinc_lock_mode: "2"
  bind-address: "0.0.0.0"
  # GTID Mode
  wsrep_gtid_mode: "ON"
  wsrep_gtid_domain_id: "{{ mariadb.galera.gtid_domain_id }}"
  gtid_strict_mode: "1"
  # Tuning:
  wsrep_slave_threads: 4
  innodb_flush_log_at_trx_commit: "0"
  innodb_doublewrite: "1"
  innodb_buffer_pool_size: "1G"
...
9:17:27 AM    File "/tmp/semaphore/.ansible/roles/bodsch.mariadb/filter_plugins/mariadb.py", line 80, in detect_galera
9:17:27 AM      node_information = {x: v.get("ansible_default_ipv4", None).get("address", None) for x, v in hostvars.items() if v.get("ansible_default_ipv4", None).get("address", None) }
9:17:27 AM    File "/tmp/semaphore/.ansible/roles/bodsch.mariadb/filter_plugins/mariadb.py", line 80, in <dictcomp>
9:17:27 AM      node_information = {x: v.get("ansible_default_ipv4", None).get("address", None) for x, v in hostvars.items() if v.get("ansible_default_ipv4", None).get("address", None) }
9:17:27 AM  AttributeError: 'NoneType' object has no attribute 'get'
9:17:27 AM  fatal: [SIT-SQLP03]: FAILED! => changed=false
Seems like hostvars is not getting expanded?
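For what it's worth, the traceback points at the dict comprehension calling `.get()` on a `None` value: when a host in `hostvars` has no gathered `ansible_default_ipv4` fact, `v.get("ansible_default_ipv4", None)` returns `None` and the chained `.get("address")` raises the AttributeError. A minimal None-safe sketch of that lookup (the guard is my illustration, not the role's actual fix):

```python
# None-safe variant of the hostvars comprehension from filter_plugins/mariadb.py.
# The "or {}" guard for hosts without gathered facts is an addition for
# illustration, not the role's code.
def node_information(hostvars):
    """Map inventory hostname -> default IPv4 address, skipping hosts
    whose ansible_default_ipv4 fact is missing or None."""
    result = {}
    for name, host in hostvars.items():
        # "ansible_default_ipv4" can be absent, or present but None,
        # when facts were never gathered for that host
        ipv4 = host.get("ansible_default_ipv4") or {}
        address = ipv4.get("address")
        if address:
            result[name] = address
    return result

hosts = {
    "SIT-SQLP01": {"ansible_default_ipv4": {"address": "10.195.0.41"}},
    "SIT-SQLP02": {"ansible_default_ipv4": None},  # facts not gathered
}
print(node_information(hosts))  # → {'SIT-SQLP01': '10.195.0.41'}
```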
Then the Molecule Test should also fail.
No, that must be a different error ...
Good morning @KlettIT ! I have created a new branch so that I can understand this better: tests/galera-filter It only contains debug output in the filter. Please use this branch!
Start the ansible-playbook call with -vv and send me the complete output of the task "detect galera cluster"!
My output with the molecule test looks like this:
TASK [ansible-mariadb : detect galera cluster] *********************************
task path: /src/ansible/ansible-mariadb/tasks/prepare.yml:81
Montag 12 August 2024 07:18:56 +0200 (0:00:00.045) 0:00:07.461 *********
detect_galera({'wsrep_on': 'ON', 'wsrep_notify_cmd': '/bin/wsrep_notify.sh', 'wsrep_cluster_name': 'molecule-cluster', 'wsrep_provider': '/usr/lib/libgalera_smm.so', 'wsrep_cluster_address': 'gcomm://10.29.0.10,10.29.0.21,10.29.0.22', 'binlog_format': 'row', 'default_storage_engine': 'InnoDB', 'innodb_autoinc_lock_mode': '2', 'bind-address': '10.29.0.10', 'wsrep_sst_method': 'rsync', 'wsrep_sst_auth': 'cluster-admin:c1ust3R', 'wsrep_node_address': '10.29.0.10', 'wsrep_node_name': 'primary', 'wsrep_gtid_mode': 'ON', 'wsrep_gtid_domain_id': '1337', 'gtid_domain_id': '1', 'gtid_strict_mode': '1', 'wsrep_slave_threads': 8, 'wsrep_log_conflicts': None, 'innodb_flush_log_at_trx_commit': '0', 'innodb_doublewrite': '1', 'innodb_buffer_pool_size': '512M'}, hostvars)
detect_galera({'wsrep_on': 'ON', 'wsrep_notify_cmd': '/bin/wsrep_notify.sh', 'wsrep_cluster_name': 'molecule-cluster', 'wsrep_provider': '/usr/lib/libgalera_smm.so', 'wsrep_cluster_address': 'gcomm://10.29.0.10,10.29.0.21,10.29.0.22', 'binlog_format': 'row', 'default_storage_engine': 'InnoDB', 'innodb_autoinc_lock_mode': '2', 'bind-address': '10.29.0.21', 'wsrep_sst_method': 'rsync', 'wsrep_sst_auth': 'cluster-admin:c1ust3R', 'wsrep_node_address': '10.29.0.21', 'wsrep_node_name': 'replica1', 'wsrep_gtid_mode': 'ON', 'wsrep_gtid_domain_id': '1337', 'gtid_domain_id': '1', 'gtid_strict_mode': '1', 'wsrep_slave_threads': 8, 'wsrep_log_conflicts': None, 'innodb_flush_log_at_trx_commit': '0', 'innodb_doublewrite': '1', 'innodb_buffer_pool_size': '512M'}, hostvars)
- primary
facts default_ipv4 : {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.10', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:0a', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.10', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:0a', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
ansible default_ipv6: {}
detect_galera({'wsrep_on': 'ON', 'wsrep_notify_cmd': '/bin/wsrep_notify.sh', 'wsrep_cluster_name': 'molecule-cluster', 'wsrep_provider': '/usr/lib/libgalera_smm.so', 'wsrep_cluster_address': 'gcomm://10.29.0.10,10.29.0.21,10.29.0.22', 'binlog_format': 'row', 'default_storage_engine': 'InnoDB', 'innodb_autoinc_lock_mode': '2', 'bind-address': '10.29.0.22', 'wsrep_sst_method': 'rsync', 'wsrep_sst_auth': 'cluster-admin:c1ust3R', 'wsrep_node_address': '10.29.0.22', 'wsrep_node_name': 'replica2', 'wsrep_gtid_mode': 'ON', 'wsrep_gtid_domain_id': '1337', 'gtid_domain_id': '1', 'gtid_strict_mode': '1', 'wsrep_slave_threads': 8, 'wsrep_log_conflicts': None, 'innodb_flush_log_at_trx_commit': '0', 'innodb_doublewrite': '1', 'innodb_buffer_pool_size': '512M'}, hostvars)
- primary
facts default_ipv4 : {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.10', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:0a', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.10', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:0a', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
ansible default_ipv6: {}
- replica_1
facts default_ipv4 : {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.21', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:15', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.21', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:15', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
ansible default_ipv6: {}
- primary
facts default_ipv4 : {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.10', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:0a', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.10', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:0a', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
ansible default_ipv6: {}
- replica_1
facts default_ipv4 : {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.21', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:15', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.21', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:15', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
ansible default_ipv6: {}
- replica_2
facts default_ipv4 : {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.22', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:16', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.22', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:16', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
ansible default_ipv6: {}
- cluster_members: '['10.29.0.10', '10.29.0.21', '10.29.0.22']' : 3
- replica_1
facts default_ipv4 : {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.21', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:15', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.21', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:15', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
ansible default_ipv6: {}
node_information: '{'primary': '10.29.0.10', 'replica_1': '10.29.0.21', 'replica_2': '10.29.0.22'}'
- replica_2
facts default_ipv4 : {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.22', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:16', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.22', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:16', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
ansible default_ipv6: {}
- cluster_members: '['10.29.0.10', '10.29.0.21', '10.29.0.22']' : 3
node_information: '{'primary': '10.29.0.10', 'replica_1': '10.29.0.21', 'replica_2': '10.29.0.22'}'
- replica_2
facts default_ipv4 : {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.22', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:16', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.29.0.254', 'interface': 'eth0', 'address': '10.29.0.22', 'broadcast': '10.29.0.255', 'netmask': '255.255.255.0', 'network': '10.29.0.0', 'prefix': '24', 'macaddress': '02:42:0a:1d:00:16', 'mtu': 1500, 'type': 'ether', 'alias': 'eth0'}
ansible default_ipv6: {}
- cluster_members: '['10.29.0.10', '10.29.0.21', '10.29.0.22']' : 3
node_information: '{'primary': '10.29.0.10', 'replica_1': '10.29.0.21', 'replica_2': '10.29.0.22'}'
@KlettIT ping
Sorry I was on vacation until today and therefore could not take care of it yet. Thanks already, but I'll have a look at it right away.
okay, this branch behaves differently.
9:29:35 AM
Starting galaxy role install process
- changing role bodsch.mariadb from 2.5.0 to tests/galera-filter
- extracting bodsch.mariadb to /tmp/semaphore/.ansible/roles/bodsch.mariadb
- bodsch.mariadb (tests/galera-filter) was installed successfully
role/requirements.yml has no changes. Skip galaxy install process.
....snip....
TASK [bodsch.mariadb : detect if mariadb installed] ****************************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/prepare.yml:74
ok: [SIT-SQLP03] => changed=false
  ansible_facts:
    mariadb_installed: true

TASK [bodsch.mariadb : detect galera cluster] **********************************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/prepare.yml:81
Loading collection bodsch.core from /tmp/semaphore/.ansible/collections/ansible_collections/bodsch/core
skipping: [SIT-SQLP03] => changed=false
  false_condition: mariadb_config_galera.node_addresses is defined
  skip_reason: Conditional result was False

TASK [bodsch.mariadb : define mariadb_galera and mariadb_galera_primary] *******
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/prepare.yml:91
ok: [SIT-SQLP03] => changed=false
  ansible_facts:
    mariadb_galera_cluster: false
    mariadb_galera_primary: ''
    mariadb_galera_primary_node: ''
    mariadb_galera_replica_nodes: []
It seems the var is not defined, but I don't know what's wrong....
- name: "Install mariadb"
  ansible.builtin.include_role:
    name: bodsch.mariadb
  vars:
    mariadb_python_packages: "" # workaround
    mariadb_version: "{{ mariadb.version }}"
    mariadb_use_external_repo: false
    mariadb_monitoring:
      enabled: false
    mariadb_mysqltuner: false
    mariadb_root_username: "{{ mariadb.root_username }}"
    mariadb_root_password: "{{ mariadb.root_password }}"
    mariadb_root_password_update: false
    mariadb_user_password_update: false
    mariadb_config_mysqld:
      # basic
      user: mysql
      pid_file: "{{ mariadb_pid_file }}"
      socket: "{{ mariadb_socket }}"
      datadir: /var/lib/mysql
      tmpdir: /tmp
      lc_messages_dir: /usr/share/mysql
      skip-external-locking:
      bind_address: 127.0.0.1
      lower_case_table_names: "1"
      event_scheduler: "ON"
      # Query Cache
      query_cache_type: "0"
      query_cache_limit: 3M
      query_cache_size: 16M
      tmp_table_size: "1024"
      max_heap_table_size: 64M
      join_buffer_size: 262144
      # Logging
      log_error: /var/log/mysql/error.log
      server_id: "{{ mariadb_server_id }}"
      relay_log: "{{ ansible_hostname }}-relay-bin"
      relay_log_index: "{{ ansible_hostname }}-relay-bin.index"
      # required for Wsrep GTID Mode, needs to be the same path on all nodes
      log_bin: "{{ mariadb.galera.cluster_name }}-log-bin"
      log_bin_index: "{{ mariadb.galera.cluster_name }}-log-bin.index"
      log_bin_trust_function_creators: "1"
      expire_logs_days: 10
      max_relay_log_size: 100M
      max_binlog_size: 100M
      binlog_ignore_db: monitoring
      # Character sets
      character_set_server: utf8mb4
      collation_server: utf8mb4_general_ci
      # required for Wsrep GTID Mode
      log_slave_updates: true
      # timeouts
      wait_timeout: 28800
      interactive_timeout: 28800
    mariadb_config_custom:
      mariadb:
        proxy_protocol_networks: "{{ mariadb.proxy_protocol_networks | join(',') }}"
    mariadb_server_id: "{{ mariadb.galera.node.id }}"
    mariadb_galera:
      cluster_name: "{{ mariadb.galera.cluster_name }}"
      node:
        name: "{{ mariadb.galera.node.name }}"
        id: "{{ mariadb.galera.node.id }}"
        address: "{{ mariadb.galera.node.address }}"
      gtid_domain_id: "{{ mariadb.galera.node.id }}"
      node_addresses: "{{ mariadb.galera.node_addresses }}"
      sst:
        method: "{{ mariadb.galera.sst.method }}"
        auth:
          username: "{{ mariadb.galera.sst.auth.username }}"
          password: "{{ mariadb.galera.sst.auth.password }}"
    mariadb_config_galera:
      # Mandatory settings
      wsrep_on: "ON"
      wsrep_provider: "/usr/lib/libgalera_smm.so"
      wsrep_cluster_name: "{{ mariadb_galera.cluster_name }}"
      wsrep_cluster_address: "gcomm://{{ mariadb_galera.node_addresses | wsrep_cluster_address() }}"
      binlog_format: "row"
      default_storage_engine: "InnoDB"
      innodb_autoinc_lock_mode: "2"
      bind-address: "0.0.0.0"
      wsrep_sst_method: "{{ mariadb_galera.sst.method }}"
      wsrep_sst_auth: "{{ mariadb_galera.sst.auth.username }}:{{ mariadb_galera.sst.auth.password }}"
      wsrep_node_address: "{{ mariadb_galera.node.address }}"
      wsrep_node_name: "{{ mariadb_galera.node.name }}"
      # GTID Mode
      wsrep_gtid_mode: "ON"
      wsrep_gtid_domain_id: "{{ mariadb.galera.gtid_domain_id }}"
      gtid_strict_mode: "1"
      # Tuning:
      wsrep_slave_threads: 4
      innodb_flush_log_at_trx_commit: "0"
      innodb_doublewrite: "1"
      innodb_buffer_pool_size: "1G"
vars:
{
  "galera": {
    "cluster_name": "inst01.sql.services.k-sys.io",
    "gtid_domain_id": 1337,
    "node": {
      "address": "{{ ansible_default_ipv4.address }}",
      "id": 3,
      "name": "{{ ansible_hostname }}"
    },
    "node_addresses": [
      {
        "address": "10.195.0.41",
        "name": "SIT-SQLP01.prime.k-sys.io",
        "port": 3306
      },
      {
        "address": "10.195.0.42",
        "name": "SIT-SQLP02.prime.k-sys.io",
        "port": 3306
      },
      {
        "address": "10.195.0.43",
        "name": "SIT-SQLP03.prime.k-sys.io",
        "port": 3306
      }
    ],
    "sst": {
      "auth": {
        "password": {
          "__ansible_vault": "$ANSIBLE_VAULT;1.1;AES256\n62656135353564633634643634306634643539303162363132623431306432656265383138323435\n3234326132326430303534626231363564336534656134370a643130303435313166326338616563\n31363064616234306563326639616538326638653866393462306131616661303731306237633461\n3863636531346466640a616537386165323235383539636462666562313237346132663637666162\n6330"
        },
        "username": "sst_xtrabackup"
      },
      "method": "mariabackup"
    }
  }
}
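Judging by the debug output later in the thread, the wsrep_cluster_address filter takes this node_addresses list and renders the member list that the template embeds into "gcomm://...". A rough sketch of what such a filter presumably does (my reconstruction, not the role's actual source):

```python
def wsrep_cluster_address(node_addresses):
    """Reduce a node_addresses list to the comma-separated member list
    for wsrep_cluster_address. Each entry looks like
    {'name': 'SIT-SQLP01...', 'port': 3306, 'address': '10.195.0.41'}."""
    return ",".join(node["address"] for node in node_addresses)


nodes = [
    {"name": "SIT-SQLP01.prime.k-sys.io", "port": 3306, "address": "10.195.0.41"},
    {"name": "SIT-SQLP02.prime.k-sys.io", "port": 3306, "address": "10.195.0.42"},
    {"name": "SIT-SQLP03.prime.k-sys.io", "port": 3306, "address": "10.195.0.43"},
]

# The role's template prepends the gcomm:// scheme itself.
print("gcomm://" + wsrep_cluster_address(nodes))
# → gcomm://10.195.0.41,10.195.0.42,10.195.0.43
```

This matches the debug lines below, where the filter is called with exactly this list of dicts.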
I have merged the branch and created a new release.
The extended information can be output with -vv.
it seems as var is not defined. But i don't know whats wrong....
My approach is as follows: I take the existing hostvars and search them for the node information of the Galera configuration. Apparently, your hostvars do not contain the information I expect.
Could you please take the latest branch and run the playbook with -vv? And then please send me the output of the task?
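The approach described above can be sketched roughly like this (a minimal illustration under my own assumptions; names such as detect_galera_nodes are made up and this is not the role's actual filter code):

```python
def detect_galera_nodes(galera_config: dict, hostvars: dict) -> dict:
    """Match each host's default IPv4 address against the members listed
    in wsrep_cluster_address to classify primary and replica nodes."""
    gcomm = galera_config.get("wsrep_cluster_address", "")  # "gcomm://ip1,ip2,ip3"
    members = gcomm.removeprefix("gcomm://").split(",") if gcomm else []

    # Build {inventory_hostname: address} for hosts that are cluster members.
    node_information = {}
    for host, facts in hostvars.items():
        address = facts.get("ansible_default_ipv4", {}).get("address")
        if address in members:
            node_information[host] = address

    primary = members[0] if members else ""
    return {
        "cluster": len(node_information) > 0,
        "primary": primary,
        "replicas": [ip for ip in node_information.values() if ip != primary],
    }


# Shaped like the molecule debug output earlier in the thread.
hostvars = {
    "primary":   {"ansible_default_ipv4": {"address": "10.29.0.10"}},
    "replica_1": {"ansible_default_ipv4": {"address": "10.29.0.21"}},
    "replica_2": {"ansible_default_ipv4": {"address": "10.29.0.22"}},
}
config = {"wsrep_cluster_address": "gcomm://10.29.0.10,10.29.0.21,10.29.0.22"}
print(detect_galera_nodes(config, hostvars))
```

The point of the sketch: if the expected keys (here ansible_default_ipv4.address) are missing from hostvars, the lookup silently yields an empty result, which is consistent with the empty facts shown above.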
so here we go....
4:12:16 PM
Starting galaxy role install process
- changing role bodsch.mariadb from tests/galera-filter to 2.5.1
- extracting bodsch.mariadb to /tmp/semaphore/.ansible/roles/bodsch.mariadb
- bodsch.mariadb (2.5.1) was installed successfully
role/requirements.yml has no changes. Skip galaxy install process.
.....SNIP.....
TASK [bodsch.mariadb : detect if mariadb installed] ****************************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/prepare.yml:74
ok: [SIT-SQLP03] => changed=false
  ansible_facts:
    mariadb_installed: true

TASK [bodsch.mariadb : detect galera cluster] **********************************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/prepare.yml:81
Trying secret for vault_id=default
Trying secret for vault_id=default
wsrep_cluster_address([{'name': 'SIT-SQLP01.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.41'}, {'name': 'SIT-SQLP02.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.42'}, {'name': 'SIT-SQLP03.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.43'}])
Trying secret for vault_id=default
Trying secret for vault_id=default
Trying secret for vault_id=default
Trying secret for vault_id=default
Loading collection bodsch.core from /tmp/semaphore/.ansible/collections/ansible_collections/bodsch/core
Trying secret for vault_id=default
Trying secret for vault_id=default
wsrep_cluster_address([{'name': 'SIT-SQLP01.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.41'}, {'name': 'SIT-SQLP02.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.42'}, {'name': 'SIT-SQLP03.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.43'}])
Trying secret for vault_id=default
Trying secret for vault_id=default
Trying secret for vault_id=default
Trying secret for vault_id=default
Trying secret for vault_id=default
Trying secret for vault_id=default
wsrep_cluster_address([{'name': 'SIT-SQLP01.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.41'}, {'name': 'SIT-SQLP02.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.42'}, {'name': 'SIT-SQLP03.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.43'}])
Trying secret for vault_id=default
Trying secret for vault_id=default
Trying secret for vault_id=default
Trying secret for vault_id=default
skipping: [SIT-SQLP03] => changed=false
  false_condition: mariadb_config_galera.node_addresses is defined
  skip_reason: Conditional result was False

TASK [bodsch.mariadb : define mariadb_galera and mariadb_galera_primary] *******
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/prepare.yml:91
ok: [SIT-SQLP03] => changed=false
  ansible_facts:
    mariadb_galera_cluster: false
    mariadb_galera_primary: ''
    mariadb_galera_primary_node: ''
    mariadb_galera_replica_nodes: []

TASK [bodsch.mariadb : galera packages] ****************************************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/prepare.yml:98
skipping: [SIT-SQLP03] => changed=false
  false_condition: mariadb_galera_cluster
  skip_reason: Conditional result was False
....SNIP....
TASK [bodsch.mariadb : run bootstrap on custom data directories '/var/lib/mysql'] ***
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/configure/custom-bootstrap.yml:11
skipping: [SIT-SQLP03] => changed=false
  false_condition: mariadb_config_mysqld.datadir != "/var/lib/mysql"
  skip_reason: Conditional result was False

TASK [bodsch.mariadb : galera cluster] *****************************************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/configure/main.yml:102
skipping: [SIT-SQLP03] => changed=false
  false_condition: mariadb_galera_cluster
  skip_reason: Conditional result was False

TASK [bodsch.mariadb : no cluster instance] ************************************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/configure/main.yml:107
included: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/configure/single-instance.yml for SIT-SQLP03

TASK [bodsch.mariadb : start mariadb first time] *******************************
....SNIP.....
TASK [bodsch.mariadb : warn if a custom root password is not specified] ********
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/secure-installation.yml:7
Trying secret for vault_id=default
skipping: [SIT-SQLP03] => changed=false
  false_condition: mariadb_root_password | length == 0
  skip_reason: Conditional result was False

TASK [bodsch.mariadb : wait 10 seconds to realise the message] *****************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/secure-installation.yml:16
Trying secret for vault_id=default
skipping: [SIT-SQLP03] => changed=false
  false_condition: mariadb_root_password | length == 0
  skip_reason: Conditional result was False

TASK [bodsch.mariadb : set database root password] *****************************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/secure-installation.yml:21
Trying secret for vault_id=default
<10.195.0.43> ESTABLISH SSH CONNECTION FOR USER: root
<10.195.0.43> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/tmp/semaphore/.ansible/cp/63584418bc"' 10.195.0.43 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
<10.195.0.43> (0, b'/root\n', b"OpenSSH_9.6p1, OpenSSL 3.1.5 30 Jan 2024\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 22: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug2: resolve_canonicalize: hostname 10.195.0.43 is address\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/semaphore/.ssh/known_hosts'\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/semaphore/.ssh/known_hosts2'\r\ndebug1: auto-mux: Trying existing master at '/tmp/semaphore/.ansible/cp/63584418bc'\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28644\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet_timeout: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n")
<10.195.0.43> ESTABLISH SSH CONNECTION FOR USER: root
<10.195.0.43> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/tmp/semaphore/.ansible/cp/63584418bc"' 10.195.0.43 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1725286384.6540363-28923-111255723209813 `" && echo ansible-tmp-1725286384.6540363-28923-111255723209813="` echo /root/.ansible/tmp/ansible-tmp-1725286384.6540363-28923-111255723209813 `" ) && sleep 0'"'"''
<10.195.0.43> (0, b'ansible-tmp-1725286384.6540363-28923-111255723209813=/root/.ansible/tmp/ansible-tmp-1725286384.6540363-28923-111255723209813\n', b"OpenSSH_9.6p1, OpenSSL 3.1.5 30 Jan 2024\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 22: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug2: resolve_canonicalize: hostname 10.195.0.43 is address\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/semaphore/.ssh/known_hosts'\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/semaphore/.ssh/known_hosts2'\r\ndebug1: auto-mux: Trying existing master at '/tmp/semaphore/.ansible/cp/63584418bc'\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28644\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet_timeout: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n")
Using module file /tmp/semaphore/.ansible/roles/bodsch.mariadb/library/mariadb_root_password.py
<10.195.0.43> PUT /tmp/semaphore/.ansible/tmp/ansible-local-286221ndranes/tmpzciekqrs TO /root/.ansible/tmp/ansible-tmp-1725286384.6540363-28923-111255723209813/AnsiballZ_mariadb_root_password.py
<10.195.0.43> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/tmp/semaphore/.ansible/cp/63584418bc"' '[10.195.0.43]'
<10.195.0.43> (0, b'sftp> put /tmp/semaphore/.ansible/tmp/ansible-local-286221ndranes/tmpzciekqrs /root/.ansible/tmp/ansible-tmp-1725286384.6540363-28923-111255723209813/AnsiballZ_mariadb_root_password.py\n', b'OpenSSH_9.6p1, OpenSSL 3.1.5 30 Jan 2024\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 22: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug2: resolve_canonicalize: hostname 10.195.0.43 is address\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts\' -> \'/home/semaphore/.ssh/known_hosts\'\r\ndebug3: expanded UserKnownHostsFile \'~/.ssh/known_hosts2\' -> \'/home/semaphore/.ssh/known_hosts2\'\r\ndebug1: auto-mux: Trying existing master at \'/tmp/semaphore/.ansible/cp/63584418bc\'\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28644\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "posix-rename@openssh.com" revision 1\r\ndebug2: Server supports extension "statvfs@openssh.com" revision 2\r\ndebug2: Server supports extension "fstatvfs@openssh.com" revision 2\r\ndebug2: Server supports extension "hardlink@openssh.com" revision 1\r\ndebug2: Server supports extension "fsync@openssh.com" revision 1\r\ndebug2: Server supports extension "lsetstat@openssh.com" revision 1\r\ndebug2: Server supports extension "limits@openssh.com" revision 1\r\ndebug2: Server supports extension "expand-path@openssh.com" revision 1\r\ndebug2: Server supports extension "copy-data" revision 1\r\ndebug2: Unrecognised server extension "home-directory"\r\ndebug2: Server supports extension "users-groups-by-id@openssh.com" revision 
1\r\ndebug3: Sent message limits@openssh.com I:1\r\ndebug3: Received limits reply T:201 I:1\r\ndebug3: server upload/download buffer sizes 261120 / 261120; using 261120 / 261120\r\ndebug3: server handle limit 1019; using 64\r\ndebug2: Sending SSH2_FXP_REALPATH "."\r\ndebug3: Sent message fd 3 T:16 I:2\r\ndebug3: SSH2_FXP_REALPATH . -> /root\r\ndebug3: Looking up /tmp/semaphore/.ansible/tmp/ansible-local-286221ndranes/tmpzciekqrs\r\ndebug2: Sending SSH2_FXP_STAT "/root/.ansible/tmp/ansible-tmp-1725286384.6540363-28923-111255723209813/AnsiballZ_mariadb_root_password.py"\r\ndebug3: Sent message fd 3 T:17 I:3\r\ndebug1: stat remote: No such file or directory\r\ndebug2: sftp_upload: upload local "/tmp/semaphore/.ansible/tmp/ansible-local-286221ndranes/tmpzciekqrs" to remote "/root/.ansible/tmp/ansible-tmp-1725286384.6540363-28923-111255723209813/AnsiballZ_mariadb_root_password.py"\r\ndebug2: Sending SSH2_FXP_OPEN "/root/.ansible/tmp/ansible-tmp-1725286384.6540363-28923-111255723209813/AnsiballZ_mariadb_root_password.py"\r\ndebug3: Sent dest message SSH2_FXP_OPEN I:4 P:/root/.ansible/tmp/ansible-tmp-1725286384.6540363-28923-111255723209813/AnsiballZ_mariadb_root_password.py M:0x001a\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:0 S:126378\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 126378 bytes at 0\r\ndebug3: Sent message SSH2_FXP_CLOSE I:5\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet_timeout: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<10.195.0.43> ESTABLISH SSH CONNECTION FOR USER: root
<10.195.0.43> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/tmp/semaphore/.ansible/cp/63584418bc"' 10.195.0.43 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1725286384.6540363-28923-111255723209813/ /root/.ansible/tmp/ansible-tmp-1725286384.6540363-28923-111255723209813/AnsiballZ_mariadb_root_password.py && sleep 0'"'"''
<10.195.0.43> (0, b'', b"OpenSSH_9.6p1, OpenSSL 3.1.5 30 Jan 2024\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 22: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug2: resolve_canonicalize: hostname 10.195.0.43 is address\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/semaphore/.ssh/known_hosts'\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/semaphore/.ssh/known_hosts2'\r\ndebug1: auto-mux: Trying existing master at '/tmp/semaphore/.ansible/cp/63584418bc'\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28644\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet_timeout: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n")
<10.195.0.43> ESTABLISH SSH CONNECTION FOR USER: root
<10.195.0.43> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/tmp/semaphore/.ansible/cp/63584418bc"' -tt 10.195.0.43 '/bin/sh -c '"'"'/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1725286384.6540363-28923-111255723209813/AnsiballZ_mariadb_root_password.py && sleep 0'"'"''
<10.195.0.43> (0, b'\r\n{"failed": true, "msg": " / \\u0007/usr/bin/mysqladmin: unable to change password; error: \'SET PASSWORD is ignored for users authenticating via unix_socket plugin\'\\n", "invocation": {"module_args": {"dba_root_username": "root", "dba_root_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "dba_socket": "/run/mysqld/mysqld.sock", "dba_config_directory": "/etc/mysql", "mycnf_file": "/root/.my.cnf", "dba_bind_address": null}}}\r\n', b"OpenSSH_9.6p1, OpenSSL 3.1.5 30 Jan 2024\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 22: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug2: resolve_canonicalize: hostname 10.195.0.43 is address\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/semaphore/.ssh/known_hosts'\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/semaphore/.ssh/known_hosts2'\r\ndebug1: auto-mux: Trying existing master at '/tmp/semaphore/.ansible/cp/63584418bc'\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28644\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet_timeout: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to 10.195.0.43 closed.\r\n")
<10.195.0.43> ESTABLISH SSH CONNECTION FOR USER: root
<10.195.0.43> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o 'ControlPath="/tmp/semaphore/.ansible/cp/63584418bc"' 10.195.0.43 '/bin/sh -c '"'"'rm -f -r /root/.ansible/tmp/ansible-tmp-1725286384.6540363-28923-111255723209813/ > /dev/null 2>&1 && sleep 0'"'"''
<10.195.0.43> (0, b'', b"OpenSSH_9.6p1, OpenSSL 3.1.5 30 Jan 2024\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 22: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug2: resolve_canonicalize: hostname 10.195.0.43 is address\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/semaphore/.ssh/known_hosts'\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/semaphore/.ssh/known_hosts2'\r\ndebug1: auto-mux: Trying existing master at '/tmp/semaphore/.ansible/cp/63584418bc'\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 28644\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet_timeout: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n")
fatal: [SIT-SQLP03]: FAILED! => changed=false
  invocation:
    module_args:
      dba_bind_address: null
      dba_config_directory: /etc/mysql
      dba_root_password: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
      dba_root_username: root
      dba_socket: /run/mysqld/mysqld.sock
      mycnf_file: /root/.my.cnf
  msg: |2-
     / /usr/bin/mysqladmin: unable to change password; error: 'SET PASSWORD is ignored for users authenticating via unix_socket plugin'

PLAY RECAP *********************************************************************
SIT-SQLP03 : ok=45 changed=0 unreachable=0 failed=1 skipped=23 rescued=0 ignored=0

Running playbook failed: exit status 2
Hope it helps!
Moin @KlettIT !
It looks like a dictionary that should not be empty is, in fact, empty.
Hence the debug output.
Did you also call the playbook with -vv?
Can you please add this line to your ansible.cfg in the defaults section?
[defaults]
stdout_callback: yaml
This should make the output much more readable.
I have fixed an (embarrassing) error in a when clause.
Please try the release 2.5.2
11:07:39 AM
TASK [bodsch.mariadb : set database root password] *****************************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/secure-installation.yml:21
fatal: [SIT-SQLP01]: FAILED! => changed=false
  msg: |2-
     / /usr/bin/mysqladmin: unable to change password; error: 'SET PASSWORD is ignored for users authenticating via unix_socket plugin'
fatal: [SIT-SQLP03]: FAILED! => changed=false
  msg: |2-
     / /usr/bin/mysqladmin: unable to change password; error: 'SET PASSWORD is ignored for users authenticating via unix_socket plugin'
fatal: [SIT-SQLP02]: FAILED! => changed=false
  msg: |2-
     / /usr/bin/mysqladmin: unable to change password; error: 'SET PASSWORD is ignored for users authenticating via unix_socket plugin'
Everything else looks good. Do you need the output of a specific task?
Hi @KlettIT !
You used the 2.5.2 release?
I need the outputs of the following tasks:
detect galera cluster
and define mariadb_galera and mariadb_galera_primary
galera cluster state
Yes, I used 2.5.2.
TASK [bodsch.mariadb : detect galera cluster] **********************************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/prepare.yml:81
wsrep_cluster_address([{'name': 'SIT-SQLP01.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.41'}, {'name': 'SIT-SQLP02.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.42'}, {'name': 'SIT-SQLP03.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.43'}])
wsrep_cluster_address([{'name': 'SIT-SQLP01.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.41'}, {'name': 'SIT-SQLP02.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.42'}, {'name': 'SIT-SQLP03.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.43'}])
wsrep_cluster_address([{'name': 'SIT-SQLP01.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.41'}, {'name': 'SIT-SQLP02.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.42'}, {'name': 'SIT-SQLP03.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.43'}])
wsrep_cluster_address([{'name': 'SIT-SQLP01.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.41'}, {'name': 'SIT-SQLP02.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.42'}, {'name': 'SIT-SQLP03.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.43'}])
wsrep_cluster_address([{'name': 'SIT-SQLP01.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.41'}, {'name': 'SIT-SQLP02.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.42'}, {'name': 'SIT-SQLP03.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.43'}])
wsrep_cluster_address([{'name': 'SIT-SQLP01.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.41'}, {'name': 'SIT-SQLP02.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.42'}, {'name': 'SIT-SQLP03.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.43'}])
wsrep_cluster_address([{'name': 'SIT-SQLP01.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.41'}, {'name': 'SIT-SQLP02.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.42'}, {'name': 'SIT-SQLP03.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.43'}])
wsrep_cluster_address([{'name': 'SIT-SQLP01.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.41'}, {'name': 'SIT-SQLP02.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.42'}, {'name': 'SIT-SQLP03.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.43'}])
- SIT-SQLP01
facts default_ipv4 : {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.41', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:18', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.41', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:18', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
ansible default_ipv6: {}
wsrep_cluster_address([{'name': 'SIT-SQLP01.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.41'}, {'name': 'SIT-SQLP02.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.42'}, {'name': 'SIT-SQLP03.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.43'}])
detect_galera({'wsrep_on': 'ON', 'wsrep_provider': '/usr/lib/libgalera_smm.so', 'wsrep_cluster_name': 'inst01.sql.services.k-sys.io', 'wsrep_cluster_address': 'gcomm://10.195.0.41,10.195.0.42,10.195.0.43', 'binlog_format': 'row', 'default_storage_engine': 'InnoDB', 'innodb_autoinc_lock_mode': '2', 'bind-address': '0.0.0.0', 'wsrep_sst_method': 'mariabackup', 'wsrep_sst_auth': 'sst_xtrabackup:Eiveiy3a', 'wsrep_node_address': '10.195.0.43', 'wsrep_node_name': 'SIT-SQLP03', 'wsrep_gtid_mode': 'ON', 'wsrep_gtid_domain_id': '1337', 'gtid_strict_mode': '1', 'wsrep_slave_threads': 4, 'innodb_flush_log_at_trx_commit': '0', 'innodb_doublewrite': '1', 'innodb_buffer_pool_size': '1G'}, hostvars)
detect_galera({'wsrep_on': 'ON', 'wsrep_provider': '/usr/lib/libgalera_smm.so', 'wsrep_cluster_name': 'inst01.sql.services.k-sys.io', 'wsrep_cluster_address': 'gcomm://10.195.0.41,10.195.0.42,10.195.0.43', 'binlog_format': 'row', 'default_storage_engine': 'InnoDB', 'innodb_autoinc_lock_mode': '2', 'bind-address': '0.0.0.0', 'wsrep_sst_method': 'mariabackup', 'wsrep_sst_auth': 'sst_xtrabackup:Eiveiy3a', 'wsrep_node_address': '10.195.0.41', 'wsrep_node_name': 'SIT-SQLP01', 'wsrep_gtid_mode': 'ON', 'wsrep_gtid_domain_id': '1337', 'gtid_strict_mode': '1', 'wsrep_slave_threads': 4, 'innodb_flush_log_at_trx_commit': '0', 'innodb_doublewrite': '1', 'innodb_buffer_pool_size': '1G'}, hostvars)
- SIT-SQLP01
facts default_ipv4 : {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.41', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:18', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.41', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:18', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
ansible default_ipv6: {}
- SIT-SQLP02
facts default_ipv4 : {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.42', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:13', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.42', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:13', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
ansible default_ipv6: {}
- SIT-SQLP03
facts default_ipv4 : {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.43', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:04', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.43', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:04', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
ansible default_ipv6: {}
- cluster_members: '['10.195.0.41', '10.195.0.42', '10.195.0.43']' : 3
node_information: '{'SIT-SQLP01': '10.195.0.41', 'SIT-SQLP02': '10.195.0.42', 'SIT-SQLP03': '10.195.0.43'}'
ok: [SIT-SQLP02] => changed=false
ansible_facts:
_mariadb_galera_cluster:
cluster_members:
- 10.195.0.41
- 10.195.0.42
- 10.195.0.43
cluster_primary_node: SIT-SQLP01
cluster_replica_nodes:
- SIT-SQLP02
- SIT-SQLP03
galera: true
primary: false

TASK [bodsch.mariadb : define mariadb_galera and mariadb_galera_primary] *******
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/prepare.yml:91
- SIT-SQLP02
facts default_ipv4 : {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.42', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:13', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.42', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:13', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
ansible default_ipv6: {}
- SIT-SQLP03
facts default_ipv4 : {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.43', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:04', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.43', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:04', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
ansible default_ipv6: {}
- cluster_members: '['10.195.0.41', '10.195.0.42', '10.195.0.43']' : 3
- SIT-SQLP01
facts default_ipv4 : {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.41', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:18', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.41', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:18', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
ansible default_ipv6: {}
node_information: '{'SIT-SQLP01': '10.195.0.41', 'SIT-SQLP02': '10.195.0.42', 'SIT-SQLP03': '10.195.0.43'}'
- SIT-SQLP02
facts default_ipv4 : {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.42', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:13', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.42', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:13', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
ansible default_ipv6: {}
ok: [SIT-SQLP03] => changed=false
ansible_facts:
_mariadb_galera_cluster:
cluster_members:
- 10.195.0.41
- 10.195.0.42
- 10.195.0.43
cluster_primary_node: SIT-SQLP01
cluster_replica_nodes:
- SIT-SQLP02
- SIT-SQLP03
galera: true
primary: false
- SIT-SQLP03
facts default_ipv4 : {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.43', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:04', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
facts default_ipv6 : {}
ansible default_ipv4: {'gateway': '10.195.3.254', 'interface': 'enp3s0', 'address': '10.195.0.43', 'broadcast': '10.195.3.255', 'netmask': '255.255.252.0', 'network': '10.195.0.0', 'prefix': '22', 'macaddress': '00:1a:4a:16:03:04', 'mtu': 1500, 'type': 'ether', 'alias': 'enp3s0'}
ansible default_ipv6: {}
- cluster_members: '['10.195.0.41', '10.195.0.42', '10.195.0.43']' : 3
node_information: '{'SIT-SQLP01': '10.195.0.41', 'SIT-SQLP02': '10.195.0.42', 'SIT-SQLP03': '10.195.0.43'}'
ok: [SIT-SQLP01] => changed=false
ansible_facts:
_mariadb_galera_cluster:
cluster_members:
- 10.195.0.41
- 10.195.0.42
- 10.195.0.43
cluster_primary_node: SIT-SQLP01
cluster_replica_nodes:
- SIT-SQLP02
- SIT-SQLP03
galera: true
primary: false
detect_galera({'wsrep_on': 'ON', 'wsrep_provider': '/usr/lib/libgalera_smm.so', 'wsrep_cluster_name': 'inst01.sql.services.k-sys.io', 'wsrep_cluster_address': 'gcomm://10.195.0.41,10.195.0.42,10.195.0.43', 'binlog_format': 'row', 'default_storage_engine': 'InnoDB', 'innodb_autoinc_lock_mode': '2', 'bind-address': '0.0.0.0', 'wsrep_sst_method': 'mariabackup', 'wsrep_sst_auth': 'sst_xtrabackup:Eiveiy3a', 'wsrep_node_address': '10.195.0.42', 'wsrep_node_name': 'SIT-SQLP02', 'wsrep_gtid_mode': 'ON', 'wsrep_gtid_domain_id': '1337', 'gtid_strict_mode': '1', 'wsrep_slave_threads': 4, 'innodb_flush_log_at_trx_commit': '0', 'innodb_doublewrite': '1', 'innodb_buffer_pool_size': '1G'}, hostvars)
ok: [SIT-SQLP01] => changed=false
ansible_facts:
mariadb_galera_cluster: true
mariadb_galera_primary: false
mariadb_galera_primary_node: SIT-SQLP01
mariadb_galera_replica_nodes:
- SIT-SQLP02
- SIT-SQLP03
ok: [SIT-SQLP02] => changed=false
ansible_facts:
mariadb_galera_cluster: true
mariadb_galera_primary: false
mariadb_galera_primary_node: SIT-SQLP01
mariadb_galera_replica_nodes:
- SIT-SQLP02
- SIT-SQLP03
ok: [SIT-SQLP03] => changed=false
ansible_facts:
mariadb_galera_cluster: true
mariadb_galera_primary: false
mariadb_galera_primary_node: SIT-SQLP01
mariadb_galera_replica_nodes:
- SIT-SQLP02
- SIT-SQLP03

TASK [bodsch.mariadb : get galera cluster state] *******************************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/configure/galera-cluster.yml:75
ok: [SIT-SQLP03] => changed=false
executed_queries:
- SHOW status WHERE Variable_Name IN ('wsrep_ready', 'wsrep_cluster_status', 'wsrep_connected','wsrep_cluster_size')
query_result:
- - Value: '3'
Variable_name: wsrep_cluster_size
- Value: Primary
Variable_name: wsrep_cluster_status
- Value: 'ON'
Variable_name: wsrep_connected
- Value: 'ON'
Variable_name: wsrep_ready
rowcount:
- 4
ok: [SIT-SQLP01] => changed=false
executed_queries:
- SHOW status WHERE Variable_Name IN ('wsrep_ready', 'wsrep_cluster_status', 'wsrep_connected','wsrep_cluster_size')
query_result:
- - Value: '3'
Variable_name: wsrep_cluster_size
- Value: Primary
Variable_name: wsrep_cluster_status
- Value: 'ON'
Variable_name: wsrep_connected
- Value: 'ON'
Variable_name: wsrep_ready
rowcount:
- 4
ok: [SIT-SQLP02] => changed=false
executed_queries:
- SHOW status WHERE Variable_Name IN ('wsrep_ready', 'wsrep_cluster_status', 'wsrep_connected','wsrep_cluster_size')
query_result:
- - Value: '3'
Variable_name: wsrep_cluster_size
- Value: Primary
Variable_name: wsrep_cluster_status
- Value: 'ON'
Variable_name: wsrep_connected
- Value: 'ON'
Variable_name: wsrep_ready
rowcount:
- 4

TASK [bodsch.mariadb : galera cluster state] ***********************************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/configure/galera-cluster.yml:81
ok: [SIT-SQLP01] =>
msg:
- - Value: '3'
Variable_name: wsrep_cluster_size
- Value: Primary
Variable_name: wsrep_cluster_status
- Value: 'ON'
Variable_name: wsrep_connected
- Value: 'ON'
Variable_name: wsrep_ready
ok: [SIT-SQLP02] =>
msg:
- - Value: '3'
Variable_name: wsrep_cluster_size
- Value: Primary
Variable_name: wsrep_cluster_status
- Value: 'ON'
Variable_name: wsrep_connected
- Value: 'ON'
Variable_name: wsrep_ready
ok: [SIT-SQLP03] =>
msg:
- - Value: '3'
Variable_name: wsrep_cluster_size
- Value: Primary
Variable_name: wsrep_cluster_status
- Value: 'ON'
Variable_name: wsrep_connected
- Value: 'ON'
Variable_name: wsrep_ready
Good morning @KlettIT !
I think I've found the error.
In my configuration, I define bind-address under mariadb_config_galera with the IP of the node's primary network interface.
The filter also uses this to decide whether a node is the primary Galera node or not.
Your config contains bind-address: '0.0.0.0', so no node is detected as primary and the house of cards collapses.
The quick workaround is to change the configuration to bind-address: '{{ ansible_default_ipv4.address }}'.
In any case, I will rework my filter to make it more robust.
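In playbook terms, the workaround would look roughly like this (a sketch based on the variables used in this thread; any other keys under mariadb_config_galera stay as they are):

```yaml
mariadb_config_galera:
  # bind to the node's primary interface instead of 0.0.0.0, so the
  # role can match this address against the cluster member list and
  # identify the primary Galera node
  bind-address: "{{ ansible_default_ipv4.address }}"
```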
I have rebuilt the filter in Release 2.5.3.
This should actually work now (fingers crossed)
Thanks! I will have a look on Monday.
Still, the password cannot be set. Is the bind-address workaround a must, or should this be fixed in 2.5.3?
Anyway, here is the output of the 2.5.3 run:
roles/requirements.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
installing static inventory
collection/requirements.yml has no changes. Skip galaxy install process.
collection/requirements.yml has no changes. Skip galaxy install process.
Starting galaxy role install process
- changing role bodsch.mariadb from 2.5.2 to 2.5.3
- extracting bodsch.mariadb to /tmp/semaphore/.ansible/roles/bodsch.mariadb
- bodsch.mariadb (2.5.3) was installed successfully
role/requirements.yml has no changes. Skip galaxy install process.
----
TASK [bodsch.mariadb : detect galera cluster] **********************************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/prepare.yml:81
wsrep_cluster_address([{'name': 'SIT-SQLP01.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.41'}, {'name': 'SIT-SQLP02.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.42'}, {'name': 'SIT-SQLP03.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.43'}])
wsrep_cluster_address([{'name': 'SIT-SQLP01.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.41'}, {'name': 'SIT-SQLP02.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.42'}, {'name': 'SIT-SQLP03.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.43'}])
wsrep_cluster_address([{'name': 'SIT-SQLP01.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.41'}, {'name': 'SIT-SQLP02.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.42'}, {'name': 'SIT-SQLP03.prime.k-sys.io', 'port': 3306, 'address': '10.195.0.43'}])
detect_galera({'wsrep_on': 'ON', 'wsrep_provider': '/usr/lib/libgalera_smm.so', 'wsrep_cluster_name': 'inst01.sql.services.k-sys.io', 'wsrep_cluster_address': 'gcomm://10.195.0.41,10.195.0.42,10.195.0.43', 'binlog_format': 'row', 'default_storage_engine': 'InnoDB', 'innodb_autoinc_lock_mode': '2', 'bind-address': '0.0.0.0', 'wsrep_sst_method': 'mariabackup', 'wsrep_sst_auth': 'sst_xtrabackup:Eiveiy3a', 'wsrep_node_address': '10.195.0.41', 'wsrep_node_name': 'SIT-SQLP01', 'wsrep_gtid_mode': 'ON', 'wsrep_gtid_domain_id': '1337', 'gtid_strict_mode': '1', 'wsrep_slave_threads': 4, 'innodb_flush_log_at_trx_commit': '0', 'innodb_doublewrite': '1', 'innodb_buffer_pool_size': '1G'}, hostvars)
- 10.195.0.41
- 10.195.0.42
- 10.195.0.43
cluster_primary_node: SIT-SQLP01
cluster_replica_nodes:
- SIT-SQLP02
- SIT-SQLP03
galera: true
TASK [bodsch.mariadb : define mariadb_galera and mariadb_galera_cluster] *******
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/prepare.yml:99
ok: [SIT-SQLP01] => changed=false
ansible_facts:
mariadb_galera_cluster: true
mariadb_galera_primary_node: SIT-SQLP01
mariadb_galera_replica_nodes:
- SIT-SQLP02
- SIT-SQLP03
ok: [SIT-SQLP02] => changed=false
ansible_facts:
mariadb_galera_cluster: true
mariadb_galera_primary_node: SIT-SQLP01
mariadb_galera_replica_nodes:
- SIT-SQLP02
- SIT-SQLP03
ok: [SIT-SQLP03] => changed=false
ansible_facts:
mariadb_galera_cluster: true
mariadb_galera_primary_node: SIT-SQLP01
mariadb_galera_replica_nodes:
- SIT-SQLP02
- SIT-SQLP03

TASK [bodsch.mariadb : get galera cluster state] *******************************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/configure/galera-cluster.yml:75
skipping: [SIT-SQLP02] => changed=false
false_condition: not galera_bootstraped_database
skip_reason: Conditional result was False
skipping: [SIT-SQLP03] => changed=false
false_condition: not galera_bootstraped_database
skip_reason: Conditional result was False
TASK [bodsch.mariadb : galera cluster state] ***********************************
task path: /tmp/semaphore/.ansible/roles/bodsch.mariadb/tasks/configure/galera-cluster.yml:81
ok: [SIT-SQLP01] =>
msg:
- - Value: '3'
Variable_name: wsrep_cluster_size
- Value: Primary
Variable_name: wsrep_cluster_status
- Value: 'ON'
Variable_name: wsrep_connected
- Value: 'ON'
Variable_name: wsrep_ready
ok: [SIT-SQLP02] =>
msg:
- - Value: '3'
Variable_name: wsrep_cluster_size
- Value: Primary
Variable_name: wsrep_cluster_status
- Value: 'ON'
Variable_name: wsrep_connected
- Value: 'ON'
Variable_name: wsrep_ready
ok: [SIT-SQLP03] =>
msg:
- - Value: '3'
Variable_name: wsrep_cluster_size
- Value: Primary
Variable_name: wsrep_cluster_status
- Value: 'ON'
Variable_name: wsrep_connected
- Value: 'ON'
Variable_name: wsrep_ready
Yes, the fix is in 2.5.3 and should solve the problem.
I define the mariadb_galera variable only once and then use the Ansible hostname to determine the primary cluster node.
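Roughly speaking, the per-host flag can then be derived from that single shared fact (a sketch of the idea, not the role's actual code):

```yaml
- name: derive the per-host primary flag from the shared fact
  ansible.builtin.set_fact:
    mariadb_galera_primary: "{{ ansible_hostname == mariadb_galera_primary_node }}"
```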
I think I need the complete Ansible output again.
Can you please send it to my email address?
That should make the whole thing a bit easier.
You got mail.
Hi,
first of all - thanks for your awesome role!
Sadly, I am getting an error when deploying MariaDB > 10.4.
This also happens even when I set the root password to empty.
This is probably due to changes in 10.4: https://mariadb.com/kb/en/set-password/
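For anyone hitting the same error: since 10.4, root@localhost authenticates via the unix_socket plugin by default, and mysqladmin password is ignored for such accounts. A minimal, hypothetical pre-task that keeps socket authentication but also gives root a password credential could look like this (variable names follow the playbook above; please verify the ALTER USER statement against the linked KB page for your exact version):

```yaml
- name: let root authenticate via unix_socket or a password (MariaDB >= 10.4)
  ansible.builtin.command: >
    mariadb -u root -e
    "ALTER USER 'root'@'localhost'
     IDENTIFIED VIA unix_socket
     OR mysql_native_password USING PASSWORD('{{ mariadb.root_password }}')"
  changed_when: true
```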