
Report when nova-compute charm vcpu-pin-set is configured and doesn't match isolcpus kernel cmdline #207

Open esunar opened 1 year ago

esunar commented 1 year ago

As identified during a recent failure of newly configured SR-IOV nodes, system and VM network/storage performance can be severely impacted by misconfigured CPU pinning.

If a nova-compute charmed application has its vcpu-pin-set configuration set to anything other than the default or blank, juju-lint should:

  1. Connect to each host for that nova-compute application and check that /proc/cmdline contains an "isolcpus" entry and that its CPU list matches the vcpu-pin-set (a sketch of this comparison appears after this list).

  2. Connect to each host for that nova-compute application and check that the vcpu-pin-set ranges are valid for the number of CPUs reported by lscpu (meaning you shouldn't have vcpu-pin-set=10-20 on a 10-CPU machine).

  3. Check that a minimum number of CPUs is left available (not isolated) so that our converged architecture can function properly; roughly 10-20% of CPUs should NOT be isolated (see the second sketch below).

  4. Check that the pinning respects the NUMA layout: either all CPUs of a NUMA node are isolated, or the number of CPUs pinned per NUMA node is equal across all NUMA nodes.

  5. If vcpu-pin-set is configured on nova-compute charms, we should expect a sysconfig charm with an isolcpus setting, and warn if isolcpus is instead being set via MAAS tag kernel cmdline options, since MAAS tags are not a viable day-2 configuration method for kernel command line management (see the last sketch below).
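
A minimal sketch of what checks 1 and 2 could look like, assuming the linter already has the charm's vcpu-pin-set value, the host's /proc/cmdline contents, and the CPU count from lscpu available (how those are collected is left out here), and assuming the plain CPU-list form of isolcpus (e.g. isolcpus=4-11,16-23) rather than the newer flag-prefixed forms:

```python
import re


def parse_cpu_list(spec):
    """Expand a CPU list such as '4-11,16-23' into a set of CPU indices."""
    cpus = set()
    for part in spec.split(","):
        part = part.strip()
        if not part:
            continue
        if "-" in part:
            start, end = part.split("-", 1)
            cpus.update(range(int(start), int(end) + 1))
        else:
            cpus.add(int(part))
    return cpus


def check_pinning(vcpu_pin_set, cmdline, cpu_count):
    """Return lint warnings for checks 1 and 2.

    vcpu_pin_set -- the charm's vcpu-pin-set value, e.g. '4-11,16-23'
    cmdline      -- contents of /proc/cmdline on the host
    cpu_count    -- the 'CPU(s):' value reported by lscpu on the host
    """
    warnings = []
    pinned = parse_cpu_list(vcpu_pin_set)

    # Check 1: isolcpus must be present on the kernel cmdline and match.
    match = re.search(r"\bisolcpus=(\S+)", cmdline)
    if not match:
        warnings.append("vcpu-pin-set is configured but isolcpus is missing "
                        "from /proc/cmdline")
    elif parse_cpu_list(match.group(1)) != pinned:
        warnings.append("isolcpus=%s does not match vcpu-pin-set=%s"
                        % (match.group(1), vcpu_pin_set))

    # Check 2: every pinned CPU index must exist on the machine.
    out_of_range = sorted(cpu for cpu in pinned if cpu >= cpu_count)
    if out_of_range:
        warnings.append("vcpu-pin-set references CPUs %s but lscpu reports "
                        "only %d CPUs" % (out_of_range, cpu_count))
    return warnings
```

For example, check_pinning("10-20", "root=/dev/sda1 isolcpus=10-20", 10) would flag check 2, since CPUs 10-20 do not exist on a 10-CPU machine.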

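A similar sketch for checks 3 and 4, assuming the NUMA topology has been collected as a mapping of NUMA node id to the set of CPU indices in that node (for example from `lscpu -p=CPU,NODE`); the 20% floor is only a placeholder for whatever minimum we agree on:

```python
def check_capacity_and_numa(pinned, numa_topology, min_free_fraction=0.2):
    """Return lint warnings for checks 3 and 4.

    pinned            -- set of isolated/pinned CPU indices
    numa_topology     -- dict mapping NUMA node id -> set of CPU indices
    min_free_fraction -- fraction of CPUs that must stay unisolated
    """
    warnings = []
    all_cpus = set().union(*numa_topology.values())

    # Check 3: enough CPUs must remain unisolated for the host itself.
    free = len(all_cpus - pinned)
    if free < min_free_fraction * len(all_cpus):
        warnings.append("only %d of %d CPUs are left unisolated; expected at "
                        "least %d%%" % (free, len(all_cpus),
                                        int(min_free_fraction * 100)))

    # Check 4 (one interpretation): a NUMA node is either fully isolated,
    # or every partially pinned node pins the same number of CPUs.
    partial_counts = set()
    for node, cpus in numa_topology.items():
        pinned_here = len(cpus & pinned)
        if 0 < pinned_here < len(cpus):
            partial_counts.add(pinned_here)
    if len(partial_counts) > 1:
        warnings.append("pinned CPUs are spread unevenly across NUMA nodes: "
                        "per-node counts %s" % sorted(partial_counts))
    return warnings
```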

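Check 5 can be evaluated from the exported model/bundle alone, without host access. A rough sketch, assuming the linter sees something shaped like a bundle's `applications` section (the exact structure juju-lint works with may differ):

```python
def check_sysconfig_present(applications):
    """Warn when vcpu-pin-set is used without a charmed isolcpus setting.

    applications -- dict of application name -> {'charm': ..., 'options': {...}},
                    shaped like the 'applications' section of an exported bundle.
    """
    warnings = []
    pinning_used = any(
        "nova-compute" in app.get("charm", "")
        and app.get("options", {}).get("vcpu-pin-set")
        for app in applications.values()
    )
    charmed_isolcpus = any(
        "sysconfig" in app.get("charm", "")
        and app.get("options", {}).get("isolcpus")
        for app in applications.values()
    )
    if pinning_used and not charmed_isolcpus:
        warnings.append("vcpu-pin-set is set on nova-compute but no sysconfig "
                        "application sets isolcpus; if isolcpus comes from a "
                        "MAAS tag's kernel options, that should be flagged as "
                        "a day-2 maintenance risk")
    return warnings
```
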
Imported from Launchpad using lp2gh.