What steps will reproduce the problem?
1. Three DB nodes, all running, with a separate Manager node
2. MHA Manager running only on the Manager node
3. masterha_master_switch --master_state=alive --global_conf=/etc/masterha_default.cnf --conf=/etc/app1.cnf
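(The switch above was run directly; for context, a typical pre-flight sequence with the standard MHA tools, using the conf paths from this report, would be something like the sketch below. Exact flags are per the MHA documentation.)

    masterha_check_ssh --global_conf=/etc/masterha_default.cnf --conf=/etc/app1.cnf
    masterha_check_repl --global_conf=/etc/masterha_default.cnf --conf=/etc/app1.cnf
    masterha_check_status --conf=/etc/app1.cnf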
What is the expected output? What do you see instead?
Tue Apr 22 16:59:29 2014 - [info] Current Alive Master: eng-mysqlha-p2(10.50.2.29:3306)
Tue Apr 22 16:59:29 2014 - [info] Alive Slaves:
Tue Apr 22 16:59:29 2014 - [info]   eng-mysqlha-p1(10.50.2.28:3306)  Version=10.0.10-MariaDB-log (oldest major version between slaves) log-bin:enabled
Tue Apr 22 16:59:29 2014 - [info]     Replicating from 10.50.2.29(10.50.2.29:3306)
Tue Apr 22 16:59:29 2014 - [info]     Primary candidate for the new Master (candidate_master is set)
Tue Apr 22 16:59:29 2014 - [info]   eng-mysqlha-p3(10.50.2.30:3306)  Version=10.0.10-MariaDB-log (oldest major version between slaves) log-bin:enabled
Tue Apr 22 16:59:29 2014 - [info]     Replicating from 10.50.2.29(10.50.2.29:3306)
Tue Apr 22 16:59:29 2014 - [info]     Primary candidate for the new Master (candidate_master is set)
It is better to execute FLUSH NO_WRITE_TO_BINLOG TABLES on the master before switching. Is it ok to execute on eng-mysqlha-p2(10.50.2.29:3306)? (YES/no): YES
Tue Apr 22 17:00:04 2014 - [info] Executing FLUSH NO_WRITE_TO_BINLOG TABLES. This may take long time..
Tue Apr 22 17:00:04 2014 - [info] ok.
Tue Apr 22 17:00:04 2014 - [info] Checking MHA is not monitoring or doing failover..
Tue Apr 22 17:00:04 2014 - [error][/usr/lib64/perl5/vendor_perl/MHA/MasterRotate.pm, ln142] Getting advisory lock failed on the current master. MHA Monitor runs on the current master. Stop MHA Manager/Monitor and try again.
Tue Apr 22 17:00:04 2014 - [error][/usr/lib64/perl5/vendor_perl/MHA/ManagerUtil.pm, ln177] Got ERROR: at /usr/bin/masterha_master_switch line 53
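The advisory lock referred to here is a MySQL GET_LOCK() lock that the manager's monitor holds through its client connection to the master, so it shows as held on the master even when the manager process runs on a separate node. A quick way to confirm from SQL (the lock name below is the one MHA's health check is believed to use; treat it as an assumption for other versions):

    SELECT IS_USED_LOCK('MHA_Master_High_Availability_Monitor');
    -- a non-NULL result is the connection id of the monitor holding the lock

Stopping the monitor for this app before attempting an online switch should release the lock:

    masterha_stop --conf=/etc/app1.cnf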
What version of the product are you using? On what operating system?
CentOS 6.5, MariaDB 10.0.10, MHA 0.56
Please provide any additional information below.
If the master is shut down, master_switch --master_state=dead succeeds. However, if the previous master is later started up again, it comes back as a second master.
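MHA does not reconfigure a recovered dead master on its own, so the old master has to be re-pointed at the new master manually. A minimal sketch of the usual steps (host, credentials, and binlog coordinates are placeholders, not values from this report):

    -- on the old master, once it is back up
    SET GLOBAL read_only = 1;
    CHANGE MASTER TO
      MASTER_HOST='<new_master_host>',
      MASTER_USER='<repl_user>',
      MASTER_PASSWORD='<repl_password>',
      MASTER_LOG_FILE='<binlog_file>',
      MASTER_LOG_POS=<binlog_pos>;
    START SLAVE;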
[root@eng-mysqlmon-p1 masterha]# cat /etc/app1.cnf
[server default]
manager_workdir=/tmp
manager_log=/var/log/masterha/app1.log
remote_workdir=/tmp
[server1]
hostname=eng-mysqlha-p1
candidate_master=1
ignore_fail=1
[server2]
hostname=eng-mysqlha-p2
candidate_master=1
ignore_fail=1
[server3]
hostname=eng-mysqlha-p3
candidate_master=1
ignore_fail=1
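The /etc/masterha_default.cnf passed via --global_conf is not shown above; a minimal sketch of what such a global section typically contains, in the same format as app1.cnf (all credential values are placeholders, not taken from this setup):

    [server default]
    user=mha
    password=<mha_password>
    ssh_user=root
    repl_user=repl
    repl_password=<repl_password>
    ping_interval=3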
Original issue reported on code.google.com by thornr...@gmail.com on 23 Apr 2014 at 12:03