Closed: saelbrec closed this issue 8 years ago
Fixed by #962, packaged in openvstorage-2.7.3-rev.4029.63e298f
Logging is only expected on the remaining online nodes. On node 2, in /var/log/upstart/ovs-volumedriver_myvpool02.log:
2016-10-18 14:40:36 581307 +0200 - ovs-node2 - 8773/0x00007f6d897fa700 - volumedriverfs/XMLRPCTimingWrapper - 0000000000000197 - info - execute: Arguments for volumeDriverPerformanceCounters are {[reset:false,vrouter_cluster_id:96943cb9-8eb8-4b95-a078-ac372ef78880,vrouter_id:myvpool02vVZrpKNCRNhCUt6x]}
2016-10-18 14:40:36 581377 +0200 - ovs-node2 - 8773/0x00007f6d897fa700 - volumedriverfs/XMLRPCRedirectWrapper - 0000000000000198 - info - execute: Execution on myvpool02vVZrpKNCRNhCUt6x requested
2016-10-18 14:40:36 581401 +0200 - ovs-node2 - 8773/0x00007f6d897fa700 - volumedriverfs/XMLRPCRedirectWrapper - 0000000000000199 - info - execute: Node ID myvpool02vVZrpKNCRNhCUt6x is ours - good
2016-10-18 14:40:36 581469 +0200 - ovs-node2 - 8773/0x00007f6d897fa700 - volumedriverfs/XMLRPCTimingWrapper - 000000000000019a - info - execute: Call volumeDriverPerformanceCounters took 0.000102 seconds
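The "Node ID ... is ours - good" line comes from the XMLRPCRedirectWrapper, which runs a call locally only when the requested vrouter_id matches the local node and otherwise redirects it to the owning node. A minimal sketch of that routing decision (illustrative names only, not the actual volumedriver code):

```python
# Hypothetical sketch of the redirect logic visible in the XMLRPCRedirectWrapper
# log lines above; `execute` and `forward` are illustrative names, not real APIs.

def execute(call, args, local_vrouter_id, forward):
    """Run `call` locally if the requested vrouter_id is ours,
    otherwise forward it to the node that owns it."""
    target = args.get("vrouter_id")
    if target is None or target == local_vrouter_id:
        # Corresponds to "Node ID ... is ours - good" in the log.
        return call(args)
    # Target belongs to another node: redirect instead of executing locally.
    return forward(target, call, args)


# Dummy stand-in for the volumeDriverPerformanceCounters call seen in the log.
perf_counters = lambda args: {"node": args["vrouter_id"], "counters": {}}

result = execute(
    perf_counters,
    {"vrouter_id": "myvpool02vVZrpKNCRNhCUt6x",
     "vrouter_cluster_id": "96943cb9-8eb8-4b95-a078-ac372ef78880"},
    local_vrouter_id="myvpool02vVZrpKNCRNhCUt6x",
    forward=lambda target, call, args: None,  # never reached in this example
)
```

Here the target matches the local node, so the call executes locally, as in the logged request.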
/var/log/upstart/ovs-volumedriver_myvpool01.log
2016-10-18 14:40:39 809732 +0200 - ovs-node2 - 7561/0x00007f274b7fe700 - volumedriverfs/XMLRPCTimingWrapper - 000000000000017b - info - execute: Arguments for markNodeOffline are {[vrouter_cluster_id:f12fde62-2c3e-43b0-9fff-c2737e5dd3d4,vrouter_id:myvpool01xp3PGW3kCmcnn6Oe]}
2016-10-18 14:40:39 810220 +0200 - ovs-node2 - 7561/0x00007f274b7fe700 - volumedriverfs/ClusterRegistry - 000000000000017c - info - operator(): Updating state of node myvpool01xp3PGW3kCmcnn6Oe from Online to Offline
2016-10-18 14:40:39 812822 +0200 - ovs-node2 - 7561/0x00007f274b7fe700 - volumedriverfs/LockedArakoon - 000000000000017d - info - run_sequence: set node state succeeded after 1 attempt(s)
2016-10-18 14:40:39 812886 +0200 - ovs-node2 - 7561/0x00007f274b7fe700 - volumedriverfs/XMLRPCTimingWrapper - 000000000000017e - info - execute: Call markNodeOffline took 0.003088 seconds
Both SDMs were informed that they had been marked offline, and the logging was found in the appropriate files. Test passed.
Error from ovs-workers:
Apparently the markNodeOffline calls for the different StorageDrivers end up on the same vPool even though they belong to different ones, resulting in the following logs and error in the volumedriver.
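The failure mode can be illustrated with a small sketch: each vPool has its own vrouter_cluster_id with its own set of registered node IDs, so a markNodeOffline call sent to the wrong vPool's cluster cannot find the node. This is an assumed simplification, not the actual OVS or volumedriver code:

```python
# Illustrative sketch (assumed structure, not real OVS code) of why routing a
# markNodeOffline call to the wrong vPool's cluster fails: node IDs are only
# registered in their own vPool's cluster registry.

clusters = {
    # vrouter_cluster_id -> vrouter_ids registered in that vPool's ClusterRegistry
    "f12fde62-2c3e-43b0-9fff-c2737e5dd3d4": {"myvpool01xp3PGW3kCmcnn6Oe"},
    "96943cb9-8eb8-4b95-a078-ac372ef78880": {"myvpool02vVZrpKNCRNhCUt6x"},
}

def mark_node_offline(vrouter_cluster_id, vrouter_id):
    nodes = clusters[vrouter_cluster_id]
    if vrouter_id not in nodes:
        # The bug described above: the node belongs to a different vPool's
        # cluster, so the volumedriver rejects the call.
        raise ValueError("unknown vrouter_id for this cluster")
    return f"{vrouter_id} marked Offline"

# Correct pairing (as in the myvpool01 log above) succeeds:
mark_node_offline("f12fde62-2c3e-43b0-9fff-c2737e5dd3d4",
                  "myvpool01xp3PGW3kCmcnn6Oe")

# Mixing vPools, as the workers apparently did, raises an error:
try:
    mark_node_offline("f12fde62-2c3e-43b0-9fff-c2737e5dd3d4",
                      "myvpool02vVZrpKNCRNhCUt6x")
except ValueError:
    pass
```

The fix referenced above (#962) would then amount to dispatching each call with the cluster ID of the vPool the StorageDriver actually belongs to.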