Add a "Pinning" tab with an option to "pin the guest to a NUMA node", providing a pull-down box that lets the user choose a single host NUMA node. If a node is selected, pinning statements would be added to the guest XML based on the host NUMA topology.
1) The user would get a pull-down box with the options 0, 1, 16, or 17 (defaulting to "no pinning"). If the user chooses a host node, say 16 for this example, add pinning statements to the XML:
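The dropdown contents can be derived from sysfs. A minimal sketch (the helper name `expand_nodes` is made up) that expands the range string in /sys/devices/system/node/online into one node id per line:

```shell
# /sys/devices/system/node/online holds a range string such as
# "0-1,16-17" on the host described here; expand it into individual ids.
expand_nodes() {
    echo "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
        # single entries like "5" have no "hi" part
        seq "$lo" "${hi:-$lo}"
    done
}

# On a real host:
# expand_nodes "$(cat /sys/devices/system/node/online)"
```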
2) Add a vcpu cpuset based on the host topology:
cat /sys/devices/system/node/node16/cpulist
40,48,56,64,72
Add a "cpuset" attribute to the vcpu definition, e.g.:
X
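Based on the cpulist above, the element would presumably look something like this (the vcpu count of 5 matches the example in step 3):

```xml
<vcpu cpuset='40,48,56,64,72'>5</vcpu>
```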
3) Create a guest NUMA cell:
[...]
In this case, the cell id will always be set to 0, since this feature only requires a single guest node.
The "cpus" list uses the guest's logical vcpu numbering; for instance, if the user asks for 5 vcpus, this would be cpus='0-4'.
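Putting that together, the guest XML fragment might look like the following (the memory size is illustrative, not from this report):

```xml
<cpu>
  <numa>
    <!-- single guest node, cell id fixed at 0; memory is in KiB (4 GiB here) -->
    <cell id='0' cpus='0-4' memory='4194304'/>
  </numa>
</cpu>
```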
5) Include a warning/error if the number of cores or the amount of memory requested exceeds what is available in that node on the host (this would have to account for SMT mode if that is requested for the guest). For example, one could check:
cat /sys/devices/system/node/node16/meminfo | grep MemFree
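A minimal sketch of that memory check in shell (the function name `check_node_mem` is hypothetical; the awk field position follows the per-node meminfo layout, e.g. "Node 16 MemFree: 12345678 kB"):

```shell
# check_node_mem MEMINFO_FILE REQUESTED_KB
# Warn (and return nonzero) if the requested memory exceeds the node's MemFree.
check_node_mem() {
    # Per-node meminfo lines look like "Node 16 MemFree:  12345678 kB",
    # so the value is the 4th field.
    free_kb=$(awk '/MemFree/ {print $4}' "$1")
    if [ "$2" -gt "$free_kb" ]; then
        echo "warning: only ${free_kb} kB free on node, $2 kB requested"
        return 1
    fi
}

# On a real host:
# check_node_mem /sys/devices/system/node/node16/meminfo 4194304
```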