What steps will reproduce the problem?
1. Start the mmm_agent on db1 normally.
2. Start the mmm_agent on db2 normally.
3. Start mmm_mon on the monitor host normally.
What is the expected output? What do you see instead?
For db1 & db2, the log output is:
"4917: Listener: Waiting for connection..."
[root@db2 var]# netstat -tlnp |grep 9989
tcp 0 0 192.168.4.147:9989 0.0.0.0:* LISTEN 4979/perl
For the monitor:
[root@mysql etc]# /usr/local/mmm/scripts/init.d/mmm_mon start
Daemon bin: '/usr/local/mmm/sbin/mmmd_mon'
Daemon pid: '/usr/local/mmm/var/mmmd.pid'
Starting MMM Monitor daemon: MySQL Multi-Master Replication Manager
Version: 1.2.6
Reading config file: 'mmm_mon.conf'
$VAR1 = {
          'db2' => {
                     'roles' => [
                                  'reader(192.168.4.149;)'
                                ],
                     'version' => '0',
                     'state' => 'ONLINE'
                   },
          'db1' => {
                     'roles' => [
                                  'reader(192.168.148;)',
                                  'writer(192.168.4.240;)'
                                ],
                     'version' => '0',
                     'state' => 'ONLINE'
                   }
        };
Role: 'reader(192.168.4.149;)'
Adding role: 'reader' with ip '192.168.4.149' to host 'db2'
Role: 'reader(192.168.148;)'
Adding role: 'reader' with ip '192.168.148' to host 'db1'
Role: 'writer(192.168.4.240;)'
Adding role: 'writer' with ip '192.168.4.240' to host 'db1'
Ok
[root@mysql etc]# mmm_control show
MySQL Multi-Master Replication Manager
Version: 1.2.6
Config file: mmm_mon.conf
Daemon is running!
===============================
Cluster failover method: AUTO
===============================
Servers status:
db1(192.168.4.146): master/ONLINE. Roles: reader(192.168.148;), writer(192.168.4.240;)
db2(192.168.4.147): master/ONLINE. Roles: reader(192.168.4.149;)
But in the monitor log I see:
[2011-04-25 17:06:19]: 24478: Sending command 'SET_STATUS(db2, 0, ONLINE,
reader(192.168.4.149;), db1)' to 192.168.4.147:9989
[2011-04-25 17:06:19]: 24478: CHECKER: rep_threads: OK
[2011-04-25 17:06:19]: 24478: Pinging checker 'rep_threads'
[2011-04-25 17:06:19]: 24478: Checker 'rep_threads' is OK (OK: Pong!)
[2011-04-25 17:06:19]: 24478: Daemon: Error sending status command to db2.
[2011-04-25 17:06:19]: 24478: Sending status to 'db1'
[2011-04-25 17:06:19]: 24478: CHECKER: rep_threads: OK
[2011-04-25 17:06:19]: 24478: Sending command 'SET_STATUS(db1, 0, ONLINE,
reader(192.168.148;),writer(192.168.4.240;), db1)' to 192.168.4.146:9989
[2011-04-25 17:06:19]: 24478: Daemon: Error sending status command to db1.
[2011-04-25 17:06:19]: 24478: Pinging checker 'ping'
[2011-04-25 17:06:19]: 24478: Checker 'ping' is OK (OK: Pong!)
[2011-04-25 17:06:19]: 24478: CHECKER: ping: OK
[2011-04-25 17:06:19]: 24478: Pinging checker 'ping'
[2011-04-25 17:06:19]: 24478: Checker 'ping' is OK (OK: Pong!)
[2011-04-25 17:06:19]: 24478: CHECKER: ping: OK
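One way to narrow down the "Error sending status command" lines above is to probe the agent ports directly from the monitor host. The following is only a quick sketch using bash's /dev/tcp pseudo-device; the hosts and port are taken from the log above:

```shell
#!/bin/bash
# Sketch: probe a TCP port and report whether it accepts connections.
# Uses bash's /dev/tcp pseudo-device; intended only as a quick check.
probe() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null \
        && echo "$1:$2 open" \
        || echo "$1:$2 closed"
}
# Run from the monitor host against each agent (addresses from the log):
# probe 192.168.4.146 9989
# probe 192.168.4.147 9989
```

If the ports report closed here, the agents are unreachable and the monitor's SET_STATUS commands cannot be delivered, which would match the errors in the log.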
Then I checked the mmm_agent on db1 & db2:
the mmm_agent process had exited. I am not sure why!
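A quick sketch to confirm on db1/db2 whether the agent is still alive; note the process name 'mmmd_agent' is an assumption based on the daemon naming shown above:

```shell
#!/bin/sh
# Sketch: report whether an mmmd_agent process is running on this host.
# The exact process name 'mmmd_agent' is an assumption.
check_agent() {
    if pgrep -x mmmd_agent >/dev/null 2>&1; then
        echo "agent: running"
    else
        echo "agent: NOT running"
    fi
}
check_agent
```

Running this periodically (e.g. from cron) on both DB hosts would show exactly when the agent dies relative to the monitor starting.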
What version of the product are you using? On what operating system?
MySQL: 4.0.26
MMM: mysql-master-master-1.2.6
OS: CentOS 4.6
Please provide any additional information below.
When I start the mmm_mon process, the mmm_agent process exits.
But when I execute:
[root@mysql etc]# mmm_control show
MySQL Multi-Master Replication Manager
Version: 1.2.6
Config file: mmm_mon.conf
Daemon is running!
===============================
Cluster failover method: AUTO
===============================
Servers status:
db1(192.168.4.146): master/ONLINE. Roles: reader(192.168.148;), writer(192.168.4.240;)
db2(192.168.4.147): master/ONLINE. Roles: reader(192.168.4.149;)
But the virtual IPs 192.168.4.148, 192.168.4.149, and 192.168.4.240 weren't
added on either server.
Why? So strange!
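To verify the missing virtual IPs directly, a sketch like this can be run on each server; the three addresses come from the roles shown above (note the match is a loose substring check):

```shell
#!/bin/sh
# Sketch: check whether each expected virtual IP is bound to any interface.
# Requires the iproute 'ip' tool; grep does a loose substring match.
has_vip() {
    ip -o addr 2>/dev/null | grep -q "$1"
}
for vip in 192.168.4.148 192.168.4.149 192.168.4.240; do
    if has_vip "$vip"; then
        echo "$vip: present"
    else
        echo "$vip: missing"
    fi
done
```

If all three report missing while mmm_control still shows the roles as assigned, that is consistent with the agents having exited before they could configure the interfaces.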
Original issue reported on code.google.com by jeffreym...@gmail.com on 25 Apr 2011 at 9:53