Closed: BlackZork closed this issue 1 year ago.
Hi @BlackZork,
can you please confirm that you are using the latest and greatest fence_virtd from fence-agents.git and not the old one from fence-virt.git?
Thanks
If you could also share the full configuration, that would be ideal.
@BlackZork I was able to reproduce your problem and I do have a solution that will allow you to move forward.
@oalbrigt we will need to fix at least the fence_virt.conf documentation here, and possibly expand the ACL parser a bit to make it easier to digest.
In my setup I have:
# fence_xvm -o list
rhel8-node1 473cf1cd-6ac4-4eda-9b37-910e2e34ba77 on
rhel8-node2 8f925322-151e-4666-a5fb-68acc57ff4e6 on
My fence_virt.conf looks like this:
fence_virtd {
    listener = "multicast";
    backend = "libvirt";
}

listeners {
    multicast {
        key_file = "/home/fabbione/work/authkey";
        address = "225.0.0.12";
        # Needed on Fedora systems
        interface = "br0";
    }
}

backends {
    libvirt {
        uri = "qemu:///system";
    }
}

groups {
    group {
        ip = "192.168.8.41";
        ip = "192.168.8.42";
        uuid = "rhel8-node1";
        uuid = "rhel8-node2";
        uuid = "473cf1cd-6ac4-4eda-9b37-910e2e34ba77";
        uuid = "8f925322-151e-4666-a5fb-68acc57ff4e6";
    }
}
You can have multiple source IP addresses for each group, so you don't have to repeat the same uuid entries over and over for each host.
UUID can be either the libvirt VM UUID or the VM name as known to libvirt. This last bit is completely missing from the man page.
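Put differently, here is a trimmed, annotated version of the group above (the comments are my reading of the semantics):

groups {
    group {
        # source addresses allowed to send fencing requests
        ip = "192.168.8.41";
        # targets those sources may act on; "uuid" accepts either
        # the libvirt domain name or the domain UUID
        uuid = "rhel8-node1";
        uuid = "473cf1cd-6ac4-4eda-9b37-910e2e34ba77";
    }
}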
In my example I set both the name and the UUID in the config, and I can execute any command from within the VMs:
[root@rhel8-node1 cluster]# fence_xvm -o status -H rhel8-node3
Permission denied
[root@rhel8-node1 cluster]# fence_xvm -o status -H rhel8-node2
Status: ON
[root@rhel8-node1 cluster]# fence_xvm -o status -H rhel8-node1
Status: ON
[root@rhel8-node1 cluster]# fence_xvm -o status -H 473cf1cd-6ac4-4eda-9b37-910e2e34ba77
Status: ON
[root@rhel8-node2 ~]# fence_xvm -o list
rhel8-node1 473cf1cd-6ac4-4eda-9b37-910e2e34ba77 on
rhel8-node2 8f925322-151e-4666-a5fb-68acc57ff4e6 on
[root@rhel8-node2 ~]# fence_xvm -o status -H rhel8-node1
Status: ON
[root@rhel8-node2 ~]# fence_xvm -o status -H 473cf1cd-6ac4-4eda-9b37-910e2e34ba77
Status: ON
[root@rhel8-node2 ~]# fence_xvm -o status -H rhel8-node3
Permission denied
AFAICT the code also allows for named groups, and it should be possible to have a 'vm = "$vmname"' option to be more explicit. Internally, the code can just map it to uuid.
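Just to illustrate that suggestion (this is not existing syntax, only a sketch of what the proposed option could look like if it mapped to uuid internally):

groups {
    group {
        ip = "192.168.8.41";
        ip = "192.168.8.42";
        # hypothetical explicit spelling of the name-based entries;
        # internally this would resolve to the libvirt domain UUID
        vm = "rhel8-node1";
        vm = "rhel8-node2";
    }
}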
PR to update manpage: https://github.com/ClusterLabs/fence-agents/pull/516
Many thanks for your reply. I was using the old fence-virt.git repo. I need to fix the Arch package first; it will take some time.
@BlackZork what repo exactly?
https://github.com/ClusterLabs/fence-virt was moved to fence-agents over 2 years ago, and archived. Is there another repo out there that we don't know about?
I used fence-virt-git. It is an outdated clone. There are two AUR packages that build fence-agents for use with Pacemaker on the nodes, one for stable and one for master; fence_xvm is not included.
I plan to talk to the maintainer about including fence_xvm and creating a separate package for fence_virtd.
Sorry for the noise; it looks like this is an Arch Linux-related issue, not a fence-agents bug.
The code between the old and new fence_virtd is very similar. Not much has changed, so that part doesn't worry me. The issue you found is real and also reproducible in the latest and greatest, hence we are updating the man page to reflect the correct config.
I compiled the latest git version (v4.11.0.r84.geec1042b), changed the configuration as suggested, and fencing works.
BUT I only added the VM names to the group, and from a cluster node fence_xvm -o list returns an empty list.
status, on and off work as expected.
@BlackZork thanks for the report. I will take a look at it next week. For now, just use both the node names and the UUIDs as a workaround.
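Concretely, the workaround means listing every VM in the group both by libvirt name and by UUID, e.g. with the example domains from earlier in this thread:

groups {
    group {
        ip = "192.168.8.41";
        ip = "192.168.8.42";
        # each domain by name...
        uuid = "rhel8-node1";
        uuid = "rhel8-node2";
        # ...and by UUID, so that -o list is populated in addition
        # to status/on/off working
        uuid = "473cf1cd-6ac4-4eda-9b37-910e2e34ba77";
        uuid = "8f925322-151e-4666-a5fb-68acc57ff4e6";
    }
}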
Hi,
I've configured fence_virtd 0.4 (built on Arch Linux from git) on the host (KVM+QEMU+libvirt) with the libvirt backend and the multicast listener.
I would like to restrict which machine can fence which, but I cannot figure out how to do it. I added a "groups" section this way:
If I comment out the entire groups section, then host1 and host2 see all machines and can fence any of them.
If I uncomment the groups section, then both host1 and host2 can issue
fence_xvm -o list
and it returns only the two hosts defined in the group, but any attempt to control a VM (off, reboot, status) gives a "Permission denied" message. I ran fence_virtd with -d 9 and in the logs I see:
fence_xvm -o list from host2 (outputs the list, OK).
fence_xvm -o status -H host2 (outputs "Permission denied")
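For reference, I started the daemon roughly like this (-d 9 as mentioned above; -F and -f are, if I remember the flags correctly, for staying in the foreground and pointing at the config file):

# run fence_virtd in the foreground with maximum debug output
fence_virtd -F -d 9 -f /etc/fence_virt.conf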
What am I missing to successfully configure groups where members can fence each other?