olopez32 / ganeti

Automatically exported from code.google.com/p/ganeti

instance-wide kvm cpu pinning doesn't work #1024

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
What software version are you running? Please provide the output of "gnt-cluster --version", "gnt-cluster version", and "hspace --version".

gnt-cluster --version:
gnt-cluster (ganeti v2.12.0) 2.12.0

gnt-cluster version:
Software version: 2.12.0
Internode protocol: 2120000
Configuration format: 2120000
OS api version: 20
Export interface: 0
VCS version: (ganeti) version v2.12.0

hspace --version: 
hspace (ganeti) version v2.12.0
compiled with ghc 7.6
running on linux x86_64

What distribution are you using?
Ubuntu 12.04 (precise)

What steps will reproduce the problem?
1. gnt-instance modify -H cpu_mask=0 my-kvm-instance
2. gnt-instance reboot my-kvm-instance

What is the expected output? What do you see instead?
Expected output: in htop, all kvm vcpu threads running on cpu 0
Actual output: in htop, kvm helper threads are running on cpu 0, but vcpu 
threads are running on all cpus

Please provide any additional information below.
For each instance, kvm spawns a single process with n vcpu threads (where n is the number of vcpus the instance has) and some number of additional worker threads. The worker threads inherit the cpu affinity of the parent process, but the vcpu threads do not. Thus, when an entire instance is pinned to a single core (or set of cores), the helper threads run on those cores, but the actual vcpus still run on all cores.
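As an aside, this inheritance behaviour is easy to observe from /proc. A minimal sketch (Python; the pid 12345 is a placeholder for the instance's kvm process id):

# List the CPU affinity of every thread in a kvm process, using the
# Cpus_allowed_list field that Linux exposes per thread under /proc.
import os

def thread_affinities(pid):
  for tid in os.listdir("/proc/%d/task" % pid):
    with open("/proc/%d/task/%s/status" % (pid, tid)) as status:
      for line in status:
        if line.startswith("Cpus_allowed_list:"):
          yield int(tid), line.split(":", 1)[1].strip()

for tid, cpus in thread_affinities(12345):  # placeholder kvm pid
  print(tid, cpus)

On an instance pinned with cpu_mask=0, the worker threads report "0" here while the vcpu threads report the full cpu list.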

To fix this, it is necessary to pin each individual vcpu thread. For example, this will result in my-kvm-instance's vcpus actually running on cpu 0 (assuming my-kvm-instance has 4 vcpus):
gnt-instance modify -H cpu_mask=0:0:0:0 my-kvm-instance
However, the parent process and the worker threads will still run on all cpus.
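In other words, per-vcpu pinning boils down to one sched_setaffinity(2) call per vcpu thread id rather than one for the whole process. A minimal sketch, using os.sched_setaffinity (Python 3.3+) in place of ganeti's own _SetProcessAffinity helper, with placeholder thread ids:

import os

def pin_vcpu_threads(thread_dict, cpu_set):
  # thread_dict maps vcpu index -> kernel thread id, as in the patch below.
  for vcpu, tid in thread_dict.items():
    os.sched_setaffinity(tid, cpu_set)

pin_vcpu_threads({0: 4321, 1: 4322, 2: 4323, 3: 4324}, {0})  # pin 4 vcpus to cpu 0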

The patch below (against master) will cause the vcpu threads (along with the parent process and worker threads) to be pinned to the correct cores when the entire instance is pinned to a single core or set of cores. However, allowing individual vcpu-to-core mappings, plus a separate mapping for the parent process and worker threads, would require changing the cpu_mask syntax or adding a separate hypervisor parameter (kvm_worker_cpu_mask or something like that).

diff --git a/lib/hypervisor/hv_kvm/__init__.py b/lib/hypervisor/hv_kvm/__init__.py
index 340ddb1..13de7f0 100644
--- a/lib/hypervisor/hv_kvm/__init__.py
+++ b/lib/hypervisor/hv_kvm/__init__.py
@@ -750,6 +750,8 @@ class KVMHypervisor(hv_base.BaseHypervisor):
         # If CPU pinning has one non-all entry, map the entire VM to
         # one set of physical CPUs
         cls._SetProcessAffinity(process_id, all_cpu_mapping)
+        for vcpu in thread_dict:
+          cls._SetProcessAffinity(thread_dict[vcpu], all_cpu_mapping)
     else:
       # The number of vCPUs mapped should match the number of vCPUs
       # reported by KVM. This was already verified earlier, so

Original issue reported on code.google.com by raspu...@google.com on 29 Jan 2015 at 11:12

GoogleCodeExporter commented 9 years ago

Original comment by hel...@google.com on 30 Jan 2015 at 8:54

GoogleCodeExporter commented 9 years ago

Original comment by aeh...@google.com on 29 Apr 2015 at 1:44