evanfoster closed this issue 3 years ago.
/cc @c3d
I should also mention that the thread fairness seems pretty wacky. It seems like worker threads are given too much precedence. The worker threads seem to do more work in Kata, but creating new threads takes far, far longer, like the main thread isn't getting cycles.
EDIT: The above is just conjecture, however.
How many CPUs does the pod see -- I'm not referring to CPU requests or limits -- inside the Kata vs. runc container? The sysctl output from the runc container shows 32 CPUs under kernel.sched_domain, but the Kata sysctl output shows nothing.
In Kata, the pod sees whatever the limit is (so 8 in one test, 1 in the other). In runc, the pod sees the actual number of CPU cores on the host (32 in Azure, I think 128 in AWS). Not sure why kernel.sched_domain isn't showing anything inside of Kata. That's quite weird...
Interesting that there are differing values in ulimit, i.e.:

runc:
process 1048576
nofiles 1048576

kata:
process 7947
nofiles 1073741816

I'm wondering if it'd make sense to update your crio configuration to set this so it's consistent?
Hmm. Both tests were run sequentially on the same node with the same CRI-O config, just with a different runtimeClassName. Not sure what's causing the discrepancy there.
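One quick way to double-check the ulimit discrepancy from inside each pod is to have the JVM process dump its own limits. A minimal sketch, assuming a Linux guest with procfs mounted (the class name and approach are illustrative, not from the benchmark repo):

```java
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical probe (not part of the benchmark repo): print the limits the
// container's JVM process actually runs under, for a kata vs. runc comparison.
public class LimitsProbe {
    public static void main(String[] args) throws Exception {
        Files.lines(Paths.get("/proc/self/limits"))
             .filter(line -> line.startsWith("Max processes")
                          || line.startsWith("Max open files"))
             .forEach(System.out::println);
    }
}
```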
It would be really interesting to see the output of lscpu in both containers, in addition to the contents of /sys/fs/cgroup/cpu/cpu.shares.
EDIT: This is from the test that only exposes 1 CPU core (test-thread-creation-teardown). I will note that OpenJDK checks the cgroup before checking the number of processors. If there's a limit applied at the cgroup level, it ignores the number of procs.
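This is easy to confirm with a small probe run in both containers; a minimal sketch (a hypothetical helper, not from the benchmark repo). On container-aware JVMs (the default since JDK 8u191 / JDK 10), availableProcessors() reflects the cgroup CPU settings rather than the raw core count:

```java
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical probe: compare what the JVM reports with the raw cgroup v1
// CPU controller values it derives that answer from.
public class CpuProbe {
    public static void main(String[] args) {
        System.out.println("availableProcessors = "
                + Runtime.getRuntime().availableProcessors());
        String[] files = {
            "/sys/fs/cgroup/cpu/cpu.shares",
            "/sys/fs/cgroup/cpu/cpu.cfs_quota_us",
            "/sys/fs/cgroup/cpu/cpu.cfs_period_us"
        };
        for (String f : files) {
            try {
                System.out.println(f + " = "
                        + new String(Files.readAllBytes(Paths.get(f))).trim());
            } catch (Exception e) {
                System.out.println(f + " = <unreadable>");
            }
        }
    }
}
```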
Kata:
root@thread-creation-teardown-test-kata:/# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2294.824
BogoMIPS: 4589.64
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti tpr_shadow vnmi ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat umip md_clear arch_capabilities
root@thread-creation-teardown-test-kata:/# cat /sys/fs/cgroup/cpu/cpu.shares
1024
runc:
root@thread-creation-teardown-test-runc:/# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2294.825
BogoMIPS: 4589.65
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 51200K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti tpr_shadow vnmi ept vpid fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt md_clear
root@thread-creation-teardown-test-runc:/# cat /sys/fs/cgroup/cpu/cpu.shares
1024
Would you like the results from the 8 core test (thread-yield-test)?
EDIT: Here's the same data from the 8 core test (thread-yield-test):
Kata:
root@thread-yield-test-kata:/# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 9
On-line CPU(s) list: 0-8
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 9
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2294.824
BogoMIPS: 4589.64
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti tpr_shadow vnmi ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat umip md_clear arch_capabilities
root@thread-yield-test-kata:/# cat /sys/fs/cgroup/cpu/cpu.shares
8192
runc:
root@thread-yield-test-runc:/# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2294.825
BogoMIPS: 4589.65
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 51200K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti tpr_shadow vnmi ept vpid fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt md_clear
root@thread-yield-test-runc:/# cat /sys/fs/cgroup/cpu/cpu.shares
8192
Please make the "run" instance variable in ThreadTest (https://github.com/evanfoster/kata-java-benchmarks/blob/main/ThreadTest.java#L11) volatile and update the numbers, otherwise there will be unpredictable delays in getting your threads to terminate.
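For reference, the change being requested is the standard volatile stop-flag pattern. A minimal sketch (not the repo's exact code): without `volatile`, the JIT may hoist the read of `run` out of the loop, so workers can keep spinning long after the main thread clears the flag.

```java
// Minimal sketch of a volatile stop flag (not the repo's exact code).
public class StopFlagSketch {
    // volatile guarantees the worker sees the write from the main thread.
    private static volatile boolean run = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (run) {
                Thread.yield(); // stand-in for the benchmark's busy work
            }
        });
        worker.start();
        Thread.sleep(100);
        run = false;   // observed promptly because the field is volatile
        worker.join(); // terminates without an unpredictable delay
    }
}
```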
@mnmehta I've updated the test. I'm spinning up my Azure and AWS test clusters now. I'll provide updated results once I have them.
I remember running into the ulimit discrepancy in the past, and having to correct that in manifests, e.g. for Jenkins. I will try to remember what I traced that back to and put a link here. Assigning to myself for now.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-kata
  labels:
    app: jenkins
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  replicas: 8
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      runtimeClassName: kata
      containers:
      - name: jenkins
        image: jenkins/jenkins
        command: [ "bash" ]
        args: [ "-c", "ulimit -n 5000; ulimit -a; /usr/local/bin/jenkins.sh" ]
        resources:
          limits:
            memory: "3000Mi"
          requests:
            memory: "2000Mi"
        env:
        - name: JAVA_OPTS
          value: -Djenkins.install.runSetupWizard=false
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
```
Hey @mnmehta, apologies that this took so long to generate! Had a couple of rough days.
Here's the raw data (which includes things like ulimit and sysctl output): https://gist.github.com/evanfoster/6deab087f8c18f208c460daf715ddc10
Here are the results after pulling them out and cleaning them up (thread creation and teardown only includes the worst-case result):
AWS Kata (thread creation and teardown):
{
"threadCreationTest2": "start",
"repeats" : 2,
"threads" : 1024,
"sleep" : 2,
"priority" : 10,
"setupTotal" : 905,
"teardownTotal" : 72,
"overallTotal" : 977
}
AWS runc (thread creation and teardown):
{
"threadCreationTest2": "start",
"repeats" : 2,
"threads" : 1024,
"sleep" : 2,
"priority" : 10,
"setupTotal" : 1693,
"teardownTotal" : 614,
"overallTotal" : 2307
}
Azure Kata (thread creation and teardown):
{
"threadCreationTest2": "start",
"repeats" : 2,
"threads" : 1024,
"sleep" : 2,
"priority" : 10,
"setupTotal" : 503381,
"teardownTotal" : 120,
"overallTotal" : 503501
}
Azure runc (thread creation and teardown):
{
"threadCreationTest2": "start",
"repeats" : 2,
"threads" : 1024,
"sleep" : 2,
"priority" : 10,
"setupTotal" : 2508,
"teardownTotal" : 199,
"overallTotal" : 2707
}
AWS Kata (thread yield test):
threads total_ms perthread_ms total_iterations
1 1.0 1.0 0
2 0.0 0.0 2
4 0.0 0.0 5
8 1.0 0.125 23
16 49.0 3.0625 24584
32 161.0 5.03125 83143
64 1495.0 23.359375 753366
128 6027.0 47.0859375 3030165
256 19410.0 75.8203125 9701221
512 54141.0 105.744140625 27024516
1024 207794.0 202.923828125 103460392
2048 422438.0 206.2685546875 211768041
AWS runc (thread yield test):
threads total_ms perthread_ms total_iterations
1 1.0 1.0 0
2 0.0 0.0 2
4 0.0 0.0 5
8 1.0 0.125 26
16 1.0 0.0625 844
32 3.0 0.09375 4471
64 6.0 0.09375 20267
128 1356.0 10.59375 759518
256 5298.0 20.6953125 2571878
512 31896.0 62.296875 15690242
1024 121082.0 118.244140625 56947795
2048 611689.0 298.67626953125 319274922
Azure Kata (thread yield test):
threads total_ms perthread_ms total_iterations
1 3.0 3.0 0
2 1.0 0.5 4
4 3.0 0.75 200
8 11.0 1.375 1740
16 39.0 2.4375 20267
32 322.0 10.0625 158514
64 1990.0 31.09375 945239
128 4957.0 38.7265625 2446591
256 21184.0 82.75 10028896
512 45128.0 88.140625 21688878
1024 182810.0 178.525390625 86612100
2048 784528.0 383.0703125 370664012
Azure runc (thread yield test):
threads total_ms perthread_ms total_iterations
1 1.0 1.0 0
2 0.0 0.0 2
4 1.0 0.25 4
8 2.0 0.25 27
16 3.0 0.1875 877
32 13.0 0.40625 22037
64 560.0 8.75 274929
128 2986.0 23.328125 1407261
256 10999.0 42.96484375 5223520
512 32288.0 63.0625 15509476
1024 88801.0 86.7197265625 43347168
2048 270199.0 131.93310546875 128721491
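For anyone who wants to reproduce the shape of these numbers without pulling the repo, a generic yield-contention loop along the following lines exercises the scheduler the same way. This is a guess at the test's structure, not the repo's code:

```java
import java.util.concurrent.atomic.AtomicLong;

// Generic yield-contention sketch (not the repo's exact test): N threads spin
// on Thread.yield() for a fixed window while counting iterations.
public class YieldSketch {
    public static void main(String[] args) throws InterruptedException {
        int threads = args.length > 0 ? Integer.parseInt(args[0]) : 64;
        AtomicLong iterations = new AtomicLong();
        Thread[] pool = new Thread[threads];
        long start = System.currentTimeMillis();
        for (int i = 0; i < threads; i++) {
            pool[i] = new Thread(() -> {
                long deadline = System.currentTimeMillis() + 1_000; // 1s window
                while (System.currentTimeMillis() < deadline) {
                    Thread.yield();
                    iterations.incrementAndGet();
                }
            });
            pool[i].start();
        }
        for (Thread t : pool) {
            t.join();
        }
        System.out.println("threads=" + threads
                + " total_ms=" + (System.currentTimeMillis() - start)
                + " total_iterations=" + iterations.get());
    }
}
```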
It looks like that did make a difference when running on AWS bare metal at high thread counts. Azure looks to be relatively unchanged.
@c3d @RobertKrawitz Any other ideas on this issue?
@c3d your pod definition doesn't have a CPU request/limit?
@evanfoster could you provide lscpu output for all four combinations (AWS kata, AWS runc, Azure kata, Azure runc)? There are two other things I'm thinking about as well.
Hey @RobertKrawitz,
I now include lscpu output in every test I run. The test output is super verbose (https://gist.github.com/evanfoster/6deab087f8c18f208c460daf715ddc10), so let me extract it for easier reading:
AWS bare metal -- Kata
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.990
BogoMIPS: 4999.98
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku avx512_vnni md_clear arch_capabilities
AWS bare metal -- runc
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2417.827
CPU max MHz: 3500.0000
CPU min MHz: 1200.0000
BogoMIPS: 5000.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Azure nested -- Kata
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2294.686
BogoMIPS: 4589.37
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
L3 cache: 16384K
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti tpr_shadow vnmi ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat umip md_clear arch_capabilities
Azure nested -- runc
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2294.688
BogoMIPS: 4589.37
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 51200K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti tpr_shadow vnmi ept vpid fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt md_clear
So those are big differences in the machine sizes (virtual or otherwise). The Kata pods are getting 2 CPUs, which is what I expect for a Kata pod with no CPU request (one core for the pod, one core for the rest of the VM). Can you use a much higher CPU request on the Kata pods to see if that makes a difference?
Sure thing! I just need to spin up my clusters again. It'll be a bit, but I'll do that and post the full results.
This issue is being automatically closed as Kata Containers 1.x has now reached EOL (End of Life). This means it is no longer being maintained.
Important:
All users should switch to the latest Kata Containers 2.x release to ensure they are using a maintained release that contains the latest security fixes, performance improvements and new features.
This decision was discussed by the @kata-containers/architecture-committee and has been announced via the Kata Containers mailing list.
If you believe this issue still applies to Kata Containers 2.x, please open an issue against the Kata Containers 2.x repository, pointing to this one, providing details to allow us to migrate it.
Description of problem
A team I've been helping has been trying to port their application over to Kata containers, but ran into severe performance issues. The application is a large Java monolith which runs very large thread pools (300-400 threads at idle, 800-1200 under load), and also creates ad-hoc threads that are latency sensitive.
After an extensive testing effort, this team found that thread creation can be orders of magnitude slower inside of a Kata container when there are already threads causing load on the CPU. The team simplified their benchmarks and made them available. I have packaged those tests up here: https://github.com/evanfoster/kata-java-benchmarks
The tests have been pushed to Dockerhub, so anyone that wishes to can run these tests against a live cluster using the Make targets documented in the repo.
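Conceptually, the creation/teardown measurement boils down to timing loops like the following. This is a simplified sketch under my own naming; the real tests live in the repo above:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of a thread creation/teardown timing run
// (illustrative only; the actual benchmark is in the linked repo).
public class ThreadChurnSketch {
    public static void main(String[] args) throws InterruptedException {
        int threads = 1024;

        long setupStart = System.currentTimeMillis();
        List<Thread> pool = new ArrayList<>(threads);
        for (int i = 0; i < threads; i++) {
            Thread t = new Thread(() -> {
                try {
                    Thread.sleep(2_000); // park briefly so all threads coexist
                } catch (InterruptedException ignored) {
                    // interrupted during teardown
                }
            });
            t.start();
            pool.add(t);
        }
        long setupTotal = System.currentTimeMillis() - setupStart;

        long teardownStart = System.currentTimeMillis();
        for (Thread t : pool) {
            t.interrupt();
            t.join();
        }
        long teardownTotal = System.currentTimeMillis() - teardownStart;

        System.out.println("setupTotal=" + setupTotal
                + " teardownTotal=" + teardownTotal
                + " overallTotal=" + (setupTotal + teardownTotal));
    }
}
```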
Expected result
Thread creation time inside of a Kata container should be within an order of magnitude of a runc container.
Actual result
Thread creation time inside of a Kata container can be at least 2 orders of magnitude slower than in a runc container under the right circumstances.
Further information
The output of `sysctl -a` and `ulimit -a` have been added to the output produced by `kata-collect-data.sh`. That data can be found below:

Show kata-collect-data.sh details
# Meta details

Running `kata-collect-data.sh` version `1.11.5 (commit 0b7413f5cf6296ecde3a1dae0ced9993ad3cd800)` at `2021-03-19.17:20:22.612876024+0000`.

---

Runtime is `/opt/kata/bin/kata-runtime`.

# `kata-env`

Output of "`/opt/kata/bin/kata-runtime kata-env`":

```toml
[Meta]
Version = "1.0.24"
[Runtime]
Debug = false
Trace = false
DisableGuestSeccomp = true
DisableNewNetNs = false
SandboxCgroupOnly = false
Path = "/opt/kata/bin/kata-runtime"
[Runtime.Version]
OCI = "1.0.1-dev"
[Runtime.Version.Version]
Semver = "1.11.5"
Major = 1
Minor = 11
Patch = 5
Commit = "0b7413f5cf6296ecde3a1dae0ced9993ad3cd800"
[Runtime.Config]
Path = "/etc/kata-containers/configuration.toml"
[Hypervisor]
MachineType = "pc"
Version = "QEMU emulator version 5.0.0 (kata-static)\nCopyright (c) 2003-2020 Fabrice Bellard and the QEMU Project developers"
Path = "/opt/kata/bin/qemu-virtiofs-system-x86_64"
BlockDeviceDriver = "virtio-scsi"
EntropySource = "/dev/urandom"
SharedFS = "virtio-fs"
VirtioFSDaemon = "/opt/kata/bin/virtiofsd"
Msize9p = 8192
MemorySlots = 50
PCIeRootPort = 0
HotplugVFIOOnRootBus = false
Debug = false
UseVSock = true
[Image]
Path = "/opt/kata/share/kata-containers/kata-containers-image_clearlinux_1.11.5_agent_bdbea6619a.img"
[Kernel]
Path = "/opt/kata/share/kata-containers/vmlinuz-kata-v5.6-april-09-2020-76-virtiofs"
Parameters = "systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none"
[Initrd]
Path = ""
[Proxy]
Type = "noProxy"
Path = ""
Debug = false
[Proxy.Version]
Semver = ""
Major = 0
Minor = 0
Patch = 0
Commit = ""
[Shim]
Type = "kataShim"
Path = "/opt/kata/libexec/kata-containers/kata-shim"
Debug = false
[Shim.Version]
Semver = "1.11.5-0dee5597440a2f42c854f2cfbdfaaec02b4db3f4"
Major = 1
Minor = 11
Patch = 5
Commit = "<>"
[Agent]
Type = "kata"
Debug = false
Trace = false
TraceMode = ""
TraceType = ""
[Host]
Kernel = "5.4.92-flatcar"
Architecture = "amd64"
VMContainerCapable = true
SupportVSocks = true
[Host.Distro]
Name = "Flatcar Container Linux by Kinvolk"
Version = "2605.12.0"
[Host.CPU]
Vendor = "GenuineIntel"
Model = "Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz"
[Netmon]
Path = "/opt/kata/libexec/kata-containers/kata-netmon"
Debug = false
Enable = false
[Netmon.Version]
Semver = "1.11.5"
Major = 1
Minor = 11
Patch = 5
Commit = "<>"
```
---
# Runtime config files
## Runtime default config files
```
/etc/kata-containers/configuration.toml
/opt/kata/share/defaults/kata-containers/configuration.toml
```
## Runtime config file contents
Output of "`cat "/etc/kata-containers/configuration.toml"`":
```toml
# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-qemu-virtiofs.toml.in"
# XXX: Project:
# XXX: Name: Kata Containers
# XXX: Type: kata
[hypervisor.qemu]
path = "/opt/kata/bin/qemu-virtiofs-system-x86_64"
kernel = "/opt/kata/share/kata-containers/vmlinuz-virtiofs.container"
image = "/opt/kata/share/kata-containers/kata-containers.img"
machine_type = "pc"
# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""
# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = ""
# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""
# Default number of vCPUs per SB/VM:
# unspecified or 0 --> will be set to 1
# < 0 --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores
default_vcpus = 1
# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what are you doing.
default_maxvcpus = 0
# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Until 30 devices per bridge can be hot plugged.
# * Until 5 PCI bridges can be cold plugged per VM.
# This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0 --> will be set to 1
# > 1 <= 5 --> will be set to the specified number
# > 5 --> will be set to 5
default_bridges = 1
# Default memory size in MiB for SB/VM.
# If unspecified then it will be set 2048 MiB.
default_memory = 2048
#
# Default memory slots per SB/VM.
# If unspecified then it will be set 10.
# This is will determine the times that memory will be hotadded to sandbox/VM.
# Ethos: Some applications with a high number of containers per pod break when the default memory slot value is used (10)
memory_slots = 50
# The size in MiB will be plused to max memory of hypervisor.
# It is the memory address space for the NVDIMM devie.
# If set block storage driver (block_device_driver) to "nvdimm",
# should set memory_offset to the size of block device.
# Default 0
#memory_offset = 0
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false
# Shared file system type:
# - virtio-fs (default)
# - virtio-9p
shared_fs = "virtio-fs"
# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/opt/kata/bin/virtiofsd"
# Default size of DAX cache in MiB
# Ethos: Benchmarking has shown that 4Gi is probably the smallest cache we'll want to use.
# TODO: Find out if this is using memory.
virtio_fs_cache_size = 4096
# Extra args for virtiofsd daemon
#
# Format example:
# ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = []
# Cache mode:
#
# - none
# Metadata, data, and pathname lookup are not cached in guest. They are
# always fetched from host and any changes are immediately pushed to host.
#
# - auto
# Metadata and pathname lookup cache expires after a configured amount of
# time (default is 1 second). Data is cached while the file is open (close
# to open consistency).
#
# - always
# Metadata, data, and pathname lookup are cached in guest and never expire.
# Ethos: auto is probably the best option for us since we're passing data from the host to
# the guest. There are some data consistency issues when using "always" mode, but performance
# is too low when using "none". We do take a decent hit to random write performance (about 32%),
# but data consistency and filesystem correctness take precedent.
virtio_fs_cache = "auto"
# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"
# Specifies cache-related options will be set to block devices or not.
# Default false
#block_device_cache_set = true
# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true
# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true
# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false
# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true
# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory pre allocation
#enable_hugepages = true
# Enable vhost-user storage device, default false
# Enabling this will result in some Linux reserved block type
# major range 240-254 being chosen to represent vhost-user devices.
enable_vhost_user_store = false
# The base directory specifically used for vhost-user devices.
# Its sub-path "block" is used for block devices; "block/sockets" is
# where we expect vhost-user sockets to live; "block/devices" is where
# simulated block device nodes for vhost-user devices to live.
vhost_user_store_path = "/var/run/kata-containers/vhost-user"
# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = ""
# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true
# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
#enable_debug = true
# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true
# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = 8192
# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
# Ethos: vsock cannot be enabled on VMware guests that are running VMware tools. We need to run VMware tools, so disable vsock.
# See https://github.com/kata-containers/documentation/blob/master/design/VSocks.md#with-vmware-guest
use_vsock = true
# If false and nvdimm is supported, use nvdimm device to plug guest image.
# Otherwise virtio-block device is used.
# Default false
#disable_image_nvdimm = true
# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true
# If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
# security (vhost-net runs ring0) for network I/O performance.
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy. If the host
# runs out of entropy, the VMs boot time will increase leading to get startup
# timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"
# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,postart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered will scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"
[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speeding up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true
# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"
# The number of caches of VMCache:
# unspecified or == 0 --> VMCache is disabled
# > 0 --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before using it.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through Unix socket. The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert the VM to gRPC format and transport it when gets
# requestion from clients.
# Factory grpccache is the VMCache client. It will request gRPC format
# VM and convert it back to a VM. If VMCache function is enabled,
# kata-runtime will request VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0
# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"
[proxy.kata]
path = "/opt/kata/libexec/kata-containers/kata-proxy"
# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true
[shim.kata]
path = "/opt/kata/libexec/kata-containers/kata-shim"
# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true
# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true
[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true
# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicity with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
# setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
# will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
# full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"
# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
# The following example can be used to load two kernel modules with parameters
# - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
# The first word is considered as the module name and the rest as its parameters.
# Container will not be started when:
# * A kernel module is specified and the modprobe command is not installed in the guest
# or it fails loading the module.
# * The module is not available in the guest or it doesn't met the guest kernel
# requirements, like architecture and version.
#
kernel_modules=[]
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional
# network being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true
# Specify the path to the netmon binary.
path = "/opt/kata/libexec/kata-containers/kata-netmon"
# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true
[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
# - bridged (Deprecated)
# Uses a linux bridge to interconnect the container interface to
# the VM. Works for most cases except macvlan and ipvlan.
# ***NOTE: This feature has been deprecated with plans to remove this
# feature in the future. Please use other network models listed below.
#
# - macvtap
# Used when the Container network interface can be bridged using
# macvtap.
#
# - none
# Used when customize network. Only creates a tap device. No veth pair.
#
# - tcfilter
# Uses tc filter rules to redirect traffic from the network interface
# provided by plugin to a tap interface connected to the VM.
#
internetworking_model="tcfilter"
# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true
# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true
# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true
# if enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=false
# If enabled, the runtime will not create Kubernetes emptyDir mounts on the guest filesystem. Instead, emptyDir mounts will
# be created on the host and shared via 9p. This is far slower, but allows sharing of files from host to guest.
disable_guest_empty_dir = true
# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# (default: [])
experimental=[]
```
Output of "`cat "/opt/kata/share/defaults/kata-containers/configuration.toml"`" elided.
Config file `/usr/share/defaults/kata-containers/configuration.toml` not found
---
# KSM throttler
## version
Output of "` --version`":
```
/opt/kata/bin/kata-collect-data.sh: line 178: --version: command not found
```
## systemd service
# Image details
```yaml
---
osbuilder:
url: "https://github.com/kata-containers/osbuilder"
version: "unknown"
rootfs-creation-time: "2020-11-12T07:21:07.055389604+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
name: "Clear"
version: "33940"
packages:
default:
- "chrony"
- "iptables-bin"
- "kmod-bin"
- "libudev0-shim"
- "systemd"
- "util-linux-bin"
extra:
agent:
url: "https://github.com/kata-containers/agent"
name: "kata-agent"
version: "1.11.5-bdbea6619adfd6374986228ab09d219b93018ba5"
agent-is-init-daemon: "no"
```
---
# Initrd details
No initrd
---
# Logfiles
## Runtime logs
No recent runtime problems found in system journal.
## Proxy logs
No recent proxy problems found in system journal.
## Shim logs
No recent shim problems found in system journal.
## Throttler logs
No recent throttler problems found in system journal.
---
# Container manager details
Have `docker`
Eliding Docker output, since only CRI-O is used.
Have `kubectl`
## Kubernetes
Output of "`kubectl version`":
```
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Error from server (NotFound): the server could not find the requested resource
```
Output of "`kubectl config view`":
```
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
```
Output of "`systemctl show kubelet`":
```
Type=simple
Restart=on-failure
NotifyAccess=none
RestartUSec=5s
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
TimeoutAbortUSec=1min 30s
TimeoutStartFailureMode=terminate
TimeoutStopFailureMode=terminate
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestampMonotonic=0
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=2608
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
ReloadResult=success
CleanResult=success
UID=[not set]
GID=[not set]
NRestarts=0
OOMPolicy=stop
ExecMainStartTimestamp=Mon 2021-03-15 15:27:04 UTC
ExecMainStartTimestampMonotonic=118199857
ExecMainExitTimestampMonotonic=0
ExecMainPID=2608
ExecMainCode=0
ExecMainStatus=0
Slice=system.slice
ControlGroup=/system.slice/kubelet.service
MemoryCurrent=281808896
CPUUsageNSec=56783387388355
EffectiveCPUs=
EffectiveMemoryNodes=
TasksCurrent=44
IPIngressBytes=[no data]
IPIngressPackets=[no data]
IPEgressBytes=[no data]
IPEgressPackets=[no data]
IOReadBytes=18446744073709551615
IOReadOperations=18446744073709551615
IOWriteBytes=18446744073709551615
IOWriteOperations=18446744073709551615
Delegate=no
CPUAccounting=yes
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
CPUQuotaPeriodUSec=infinity
AllowedCPUs=
AllowedMemoryNodes=
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=yes
DefaultMemoryLow=0
DefaultMemoryMin=0
MemoryMin=0
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=154538
IPAccounting=no
EnvironmentFiles=/run/ethos/kubelet-args (ignore_errors=no)
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=524288
LimitNOFILESoft=1024
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=515129
LimitNPROCSoft=515129
LimitMEMLOCK=65536
LimitMEMLOCKSoft=65536
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=515129
LimitSIGPENDINGSoft=515129
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
RootHashSignature=
OOMScoreAdjust=0
CoredumpFilter=0x33
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
CPUAffinity=
CPUAffinityFromNUMA=no
NUMAPolicy=n/a
NUMAMask=
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
LogRateLimitIntervalUSec=0
LogRateLimitBurst=0
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectClock=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectKernelLogs=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
PrivateMounts=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
TimeoutCleanUSec=infinity
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictSUIDSGID=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
ProtectHostname=no
KillMode=process
KillSignal=15
RestartKillSignal=15
FinalKillSignal=9
SendSIGKILL=yes
SendSIGHUP=no
WatchdogSignal=6
Id=kubelet.service
Names=kubelet.service
Requires=docker.service crio.service configure-kubelet.service download-certificates.service sysinit.target configure-docker.service system.slice coreos-metadata.service
Wants=configure-kubelet.service
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=systemd-journald.socket nvidia-driver.service crio.service configure-docker.service configure-kubelet.service basic.target system.slice mnt-nvme.mount download-certificates.service sysinit.target docker.service coreos-metadata.service
Description=Kubernetes Kubelet
LoadState=loaded
ActiveState=active
FreezerState=running
SubState=running
FragmentPath=/etc/systemd/system/kubelet.service
DropInPaths=/etc/systemd/system/kubelet.service.d/11-ecr-credentials.conf
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Mon 2021-03-15 15:27:04 UTC
StateChangeTimestampMonotonic=118199946
InactiveExitTimestamp=Mon 2021-03-15 15:27:04 UTC
InactiveExitTimestampMonotonic=118167646
ActiveEnterTimestamp=Mon 2021-03-15 15:27:04 UTC
ActiveEnterTimestampMonotonic=118199946
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
CanFreeze=yes
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Mon 2021-03-15 15:27:04 UTC
ConditionTimestampMonotonic=118165852
AssertTimestamp=Mon 2021-03-15 15:27:04 UTC
AssertTimestampMonotonic=118165854
Transient=no
Perpetual=no
StartLimitIntervalUSec=10s
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=1ab4d59a883343e58483065af2307149
CollectMode=inactive
```
Have `crio`
## crio
Output of "`crio --version`":
```
crio version 1.17.5
commit: "6b97f815cfbdf680a7ddf2435291fb7e49776ef1"
```
Output of "`systemctl show crio`":
```
Type=notify
Restart=always
NotifyAccess=main
RestartUSec=100ms
TimeoutStartUSec=infinity
TimeoutStopUSec=1min 30s
TimeoutAbortUSec=1min 30s
TimeoutStartFailureMode=terminate
TimeoutStopFailureMode=terminate
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestampMonotonic=0
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=1835
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
ReloadResult=success
CleanResult=success
UID=[not set]
GID=[not set]
NRestarts=0
OOMPolicy=stop
ExecMainStartTimestamp=Mon 2021-03-15 15:25:50 UTC
ExecMainStartTimestampMonotonic=44618996
ExecMainExitTimestampMonotonic=0
ExecMainPID=1835
ExecMainCode=0
ExecMainStatus=0
Slice=system.slice
ControlGroup=/system.slice/crio.service
MemoryCurrent=5699645440
CPUUsageNSec=19470893215951
EffectiveCPUs=
EffectiveMemoryNodes=
TasksCurrent=91
IPIngressBytes=[no data]
IPIngressPackets=[no data]
IPEgressBytes=[no data]
IPEgressPackets=[no data]
IOReadBytes=18446744073709551615
IOReadOperations=18446744073709551615
IOWriteBytes=18446744073709551615
IOWriteOperations=18446744073709551615
Delegate=no
CPUAccounting=yes
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
CPUQuotaPeriodUSec=infinity
AllowedCPUs=
AllowedMemoryNodes=
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=yes
DefaultMemoryLow=0
DefaultMemoryMin=0
MemoryMin=0
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=infinity
IPAccounting=no
EnvironmentFiles=/etc/crio/crio.env (ignore_errors=no)
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=1048576
LimitNOFILESoft=1048576
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=1048576
LimitNPROCSoft=1048576
LimitMEMLOCK=65536
LimitMEMLOCKSoft=65536
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=515129
LimitSIGPENDINGSoft=515129
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
RootHashSignature=
OOMScoreAdjust=-999
CoredumpFilter=0x33
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
CPUAffinity=
CPUAffinityFromNUMA=no
NUMAPolicy=n/a
NUMAMask=
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
LogRateLimitIntervalUSec=0
LogRateLimitBurst=0
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectClock=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectKernelLogs=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
PrivateMounts=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
TimeoutCleanUSec=infinity
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictSUIDSGID=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
ProtectHostname=no
KillMode=control-group
KillSignal=15
RestartKillSignal=15
FinalKillSignal=9
SendSIGKILL=yes
SendSIGHUP=no
WatchdogSignal=6
Id=crio.service
Names=crio.service
Requires=network-online.target sysinit.target system.slice lvm2-lvmetad.service cri-logging-driver-watch.service
RequiredBy=kubelet.service
WantedBy=crio-shutdown.service multi-user.target
Conflicts=shutdown.target
Before=multi-user.target kubelet.service nvidia-driver.service shutdown.target crio-shutdown.service
After=network-online.target systemd-journald.socket sysinit.target basic.target cri-logging-driver-watch.service system.slice lvm2-lvmetad.service
Documentation=https://github.com/kubernetes-sigs/cri-o/blob/master/contrib/systemd/crio.service
Description=Open Container Initiative Daemon
LoadState=loaded
ActiveState=active
FreezerState=running
SubState=running
FragmentPath=/etc/systemd/system/crio.service
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Mon 2021-03-15 15:25:50 UTC
StateChangeTimestampMonotonic=44870579
InactiveExitTimestamp=Mon 2021-03-15 15:25:39 UTC
InactiveExitTimestampMonotonic=33025689
ActiveEnterTimestamp=Mon 2021-03-15 15:25:50 UTC
ActiveEnterTimestampMonotonic=44870579
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=yes
CanIsolate=no
CanFreeze=yes
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Mon 2021-03-15 15:25:39 UTC
ConditionTimestampMonotonic=33023905
AssertTimestamp=Mon 2021-03-15 15:25:39 UTC
AssertTimestampMonotonic=33023907
Transient=no
Perpetual=no
StartLimitIntervalUSec=10s
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=f809555a043b401888efb40c4e4c6dbc
CollectMode=inactive
```
Output of "`cat /etc/crio/crio.conf`":
```
# The CRI-O configuration file specifies all of the available configuration
# options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
# daemon, but in a TOML format that can be more easily modified and versioned.
#
# Please refer to crio.conf(5) for details of all configuration options.
# CRI-O supports partial configuration reload during runtime, which can be
# done by sending SIGHUP to the running process. Currently supported options
# are explicitly mentioned with: 'This option supports live configuration
# reload'.
# CRI-O reads its storage defaults from the containers-storage.conf(5) file
# located at /etc/containers/storage.conf. Modify this storage configuration if
# you want to change the system's defaults. If you want to modify storage just
# for CRI-O, you can change the storage configuration options here.
[crio]
# Path to the "root directory". CRI-O stores all of its data, including
# containers images, in this directory.
#root = "/home/sascha/.local/share/containers/storage"
# Path to the "run directory". CRI-O stores all of its state in this directory.
#runroot = "/tmp/1000"
# Storage driver used to manage the storage of images and containers. Please
# refer to containers-storage.conf(5) to see all available storage drivers.
#storage_driver = "vfs"
# List to pass options to the storage driver. Please refer to
# containers-storage.conf(5) to see all available storage options.
#storage_option = [
#]
# If set to false, in-memory locking will be used instead of file-based locking.
# **Deprecated** this option will be removed in the future.
file_locking = false
# Path to the lock file.
# **Deprecated** this option will be removed in the future.
file_locking_path = "/run/crio.lock"
# The crio.api table contains settings for the kubelet/gRPC interface.
[crio.api]
# Path to AF_LOCAL socket on which CRI-O will listen.
listen = "/var/run/crio/crio.sock"
# IP address on which the stream server will listen.
stream_address = "127.0.0.1"
# The port on which the stream server will listen.
stream_port = "0"
# Enable encrypted TLS transport of the stream server.
stream_enable_tls = false
# Path to the x509 certificate file used to serve the encrypted stream. This
# file can change, and CRI-O will automatically pick up the changes within 5
# minutes.
stream_tls_cert = ""
# Path to the key file used to serve the encrypted stream. This file can
# change, and CRI-O will automatically pick up the changes within 5 minutes.
stream_tls_key = ""
# Path to the x509 CA(s) file used to verify and authenticate client
# communication with the encrypted stream. This file can change, and CRI-O will
# automatically pick up the changes within 5 minutes.
stream_tls_ca = ""
# Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
grpc_max_send_msg_size = 16777216
# Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
grpc_max_recv_msg_size = 16777216
# The crio.runtime table contains settings pertaining to the OCI runtime used
# and options for how to set up and manage the OCI runtime.
[crio.runtime]
# A list of ulimits to be set in containers by default, specified as
# "=:", for example:
# "nofile=1024:2048"
# If nothing is set here, settings will be inherited from the CRI-O daemon
#default_ulimits = [
#]
# default_runtime is the _name_ of the OCI runtime to be used as the default.
# The name is matched against the runtimes map below.
default_runtime = "runc"
# If true, the runtime will not use pivot_root, but instead use MS_MOVE.
no_pivot = false
# Path to the conmon binary, used for monitoring the OCI runtime.
# Ethos: The default value of `/usr/local/libexec/crio/conmon` is on the read-only
# filesystem. This binary is provided at the new value, `/opt/bin/conmon`
conmon = "/opt/bin/conmon"
# Cgroup setting for conmon
conmon_cgroup = "pod"
# Environment variable list for the conmon process, used for passing necessary
# environment variables to conmon or the runtime.
conmon_env = [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
]
# If true, SELinux will be used for pod separation on the host.
# Ethos: selinux must be disabled for kata to currently function.
selinux = false
# Path to the seccomp.json profile which is used as the default seccomp profile
# for the runtime. If not specified, then the internal default seccomp profile
# will be used.
seccomp_profile = "/etc/crio/seccomp.json"
# Used to change the name of the default AppArmor profile of CRI-O. The default
# profile name is "crio-default-" followed by the version string of CRI-O.
apparmor_profile = "crio-default"
# Cgroup management implementation used for the runtime.
cgroup_manager = "cgroupfs"
# List of default capabilities for containers. If it is empty or commented out,
# only the capabilities defined in the containers json file by the user/kube
# will be added.
# default_capabilities = [
# "CHOWN",
# "DAC_OVERRIDE",
# "FSETID",
# "FOWNER",
# "NET_RAW",
# "SETGID",
# "SETUID",
# "SETPCAP",
# "NET_BIND_SERVICE",
# "SYS_CHROOT",
# "KILL",
# ]
# List of default sysctls. If it is empty or commented out, only the sysctls
# defined in the container json file by the user/kube will be added.
default_sysctls = [
]
# List of additional devices. specified as
# "::", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
#If it is empty or commented out, only the devices
# defined in the container json file by the user/kube will be added.
additional_devices = [
]
# Path to OCI hooks directories for automatically executed hooks.
hooks_dir = [
"/etc/containers/oci/hooks.d"
]
# List of default mounts for each container. **Deprecated:** this option will
# be removed in future versions in favor of default_mounts_file.
default_mounts = [
]
# Path to the file specifying the defaults mounts for each container. The
# format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
# its default mounts from the following two files:
#
# 1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
# override file, where users can either add in their own default mounts, or
# override the default mounts shipped with the package.
#
# 2) /usr/share/containers/mounts.conf: This is the default file read for
# mounts. If you want CRI-O to read from a different, specific mounts file,
# you can change the default_mounts_file. Note, if this is done, CRI-O will
# only add mounts it finds in this file.
#
#default_mounts_file = ""
# Maximum number of processes allowed in a container.
# Changed from a default of 1024, which was too low for compute heavy workloads.
pids_limit = 32768
# Maximum size allowed for the container log file. Negative numbers indicate
# that no size limit is imposed. If it is positive, it must be >= 8192 to
# match/exceed conmon's read buffer. The file is truncated and re-opened so the
# limit is never exceeded.
log_size_max = -1
# Whether container output should be logged to journald in addition to the kubernetes log file
log_to_journald = false
# Path to directory in which container exit files are written to by conmon.
container_exits_dir = "/var/run/crio/exits"
# Path to directory for container attach sockets.
container_attach_socket_dir = "/var/run/crio"
# If set to true, all containers will run in read-only mode.
read_only = false
# Changes the verbosity of the logs based on the level it is set to. Options
# are fatal, panic, error, warn, info, and debug. This option supports live
# configuration reload.
log_level = "error"
# The default log directory where all logs will go unless directly specified by the kubelet
log_dir = "/var/log/crio/pods"
# The UID mappings for the user namespace of each container. A range is
# specified in the form containerUID:HostUID:Size. Multiple ranges must be
# separated by comma.
uid_mappings = ""
# The GID mappings for the user namespace of each container. A range is
# specified in the form containerGID:HostGID:Size. Multiple ranges must be
# separated by comma.
gid_mappings = ""
# The minimal amount of time in seconds to wait before issuing a timeout
# regarding the proper termination of the container.
ctr_stop_timeout = 0
# The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
# The runtime to use is picked based on the runtime_handler provided by the CRI.
# If no runtime_handler is provided, the runtime will be picked based on the level
# of trust of the workload.
# ManageNetworkNSLifecycle determines whether we pin and remove network namespace
# and manage its lifecycle.
# Ethos: `manage_network_ns_lifecycle` is added according to Kata docs
# https://github.com/kata-containers/packaging/blob/master/kata-deploy/scripts/kata-deploy.sh#L53-L72
manage_network_ns_lifecycle = true
[crio.runtime.runtimes.runc]
runtime_path = "/usr/bin/runc"
runtime_type = "oci"
# Ethos: runtime_type = "vm" is designed to support the shimv2 API designed by Kata.
# More information on that API is available here:
# https://github.com/kata-containers/runtime/issues/485
[crio.runtime.runtimes.kata-qemu]
runtime_path = "/opt/kata/bin/containerd-shim-kata-v2"
runtime_type = "vm"
# The crio.image table contains settings pertaining to the management of OCI images.
#
# CRI-O reads its configured registries defaults from the system wide
# containers-registries.conf(5) located in /etc/containers/registries.conf. If
# you want to modify just CRI-O, you can change the registries configuration in
# this file. Otherwise, leave insecure_registries and registries commented out to
# use the system's defaults from /etc/containers/registries.conf.
[crio.image]
# Default transport for pulling images from a remote container storage.
default_transport = "docker://"
# The path to a file containing credentials necessary for pulling images from
# secure registries. The file is similar to that of /var/lib/kubelet/config.json
global_auth_file = ""
# The image used to instantiate infra containers.
# This option supports live configuration reload.
pause_image = "k8s.gcr.io/pause:3.2"
# The path to a file containing credentials specific for pulling the pause_image from
# above. The file is similar to that of /var/lib/kubelet/config.json
# This option supports live configuration reload.
pause_image_auth_file = ""
# The command to run to have a container stay in the paused state.
# This option supports live configuration reload.
pause_command = "/pause"
# Path to the file which decides what sort of policy we use when deciding
# whether or not to trust an image that we've pulled. It is not recommended that
# this option be used, as the default behavior of using the system-wide default
# policy (i.e., /etc/containers/policy.json) is most often preferred. Please
# refer to containers-policy.json(5) for more details.
signature_policy = ""
# Controls how image volumes are handled. The valid values are mkdir, bind and
# ignore; the latter will ignore volumes entirely.
image_volumes = "mkdir"
# List of registries to be used when pulling an unqualified image (e.g.,
# "alpine:latest"). By default, registries is set to "docker.io" for
# compatibility reasons. Depending on your workload and usecase you may add more
# registries (e.g., "quay.io", "registry.fedoraproject.org",
# "registry.opensuse.org", etc.).
registries = [
"docker.io"
]
# The crio.network table contains settings pertaining to the management of
# CNI plugins.
[crio.network]
# Path to the directory where CNI configuration files are located.
network_dir = "/etc/cni/net.d/"
# Paths to directories where CNI plugin binaries are located.
plugin_dirs = [
"/opt/cni/bin/",
]
```
Have `containerd`
Eliding `containerd` output, since only CRI-O is used.
---
# Packages
No `dpkg`
No `rpm`
---
`ulimit -a` output in a runc container:
```
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) unlimited
memory(kbytes) unlimited
locked memory(kbytes) 64
process 1048576
nofiles 1048576
vmemory(kbytes) unlimited
locks unlimited
rtprio 0
```
`ulimit -a` output in a Kata container:
```
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) unlimited
memory(kbytes) unlimited
locked memory(kbytes) 64
process 7947
nofiles 1073741816
vmemory(kbytes) unlimited
locks unlimited
rtprio 0
```
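Since both pods ran on the same node with the same CRI-O config, one way to force consistency would be to pin the limits via the `default_ulimits` option shown (commented out) in the crio.conf above. A minimal sketch, mirroring the runc values and assuming the Kata agent actually propagates these limits into the guest (unverified):
```
# Hypothetical sketch (untested): in the [crio.runtime] table of
# /etc/crio/crio.conf, set default_ulimits so runc and Kata pods start from
# the same limits, then restart CRI-O to pick up the change.
#
#   default_ulimits = [
#       "nproc=1048576:1048576",
#       "nofile=1048576:1048576",
#   ]
#
systemctl restart crio
```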
I've removed `net.*` from the `sysctl -a` output for the sake of brevity.
Abridged `sysctl -a` output in a runc container:
```
abi.vsyscall32 = 1
debug.exception-trace = 1
debug.kprobes-optimization = 1
dev.hpet.max-user-freq = 64
dev.scsi.logging_level = 0
dev.tty.ldisc_autoload = 1
fs.aio-max-nr = 65536
fs.aio-nr = 96
fs.dentry-state = 400690 366018 45 0 150960 0
fs.dir-notify-enable = 1
fs.epoll.max_user_watches = 27007590
fs.file-max = 9223372036854775807
fs.file-nr = 7616 0 9223372036854775807
fs.inode-nr = 247523 1
fs.inode-state = 247523 1 0 0 0 0 0
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 128000
fs.lease-break-time = 45
fs.leases-enable = 1
fs.mount-max = 100000
fs.mqueue.msg_default = 10
fs.mqueue.msg_max = 10
fs.mqueue.msgsize_default = 8192
fs.mqueue.msgsize_max = 8192
fs.mqueue.queues_max = 256
fs.nr_open = 1073741816
fs.overflowgid = 65534
fs.overflowuid = 65534
fs.pipe-max-size = 1048576
fs.pipe-user-pages-hard = 0
fs.pipe-user-pages-soft = 16384
fs.protected_fifos = 0
fs.protected_hardlinks = 1
fs.protected_regular = 0
fs.protected_symlinks = 1
fs.quota.allocated_dquots = 0
fs.quota.cache_hits = 0
fs.quota.drops = 0
fs.quota.free_dquots = 0
fs.quota.lookups = 0
fs.quota.reads = 0
fs.quota.syncs = 2792
fs.quota.writes = 0
fs.suid_dumpable = 2
kernel.acct = 4 2 30
kernel.acpi_video_flags = 0
kernel.auto_msgmni = 0
kernel.bootloader_type = 114
kernel.bootloader_version = 2
kernel.bpf_stats_enabled = 0
kernel.cad_pid = 0
kernel.cap_last_cap = 37
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
kernel.core_pipe_limit = 0
kernel.core_uses_pid = 1
kernel.ctrl-alt-del = 0
kernel.dmesg_restrict = 1
kernel.domainname = (none)
kernel.firmware_config.force_sysfs_fallback = 0
kernel.firmware_config.ignore_sysfs_fallback = 0
kernel.ftrace_dump_on_oops = 0
kernel.ftrace_enabled = 1
kernel.hardlockup_all_cpu_backtrace = 0
kernel.hardlockup_panic = 1
kernel.hostname = thread-creation-teardown-test-runc
kernel.hung_task_check_count = 4194304
kernel.hung_task_check_interval_secs = 0
kernel.hung_task_panic = 0
kernel.hung_task_timeout_secs = 120
kernel.hung_task_warnings = 10
kernel.hyperv_record_panic_msg = 1
kernel.io_delay_type = 0
kernel.kexec_load_disabled = 0
kernel.keys.gc_delay = 300
kernel.keys.maxbytes = 20000
kernel.keys.maxkeys = 200
kernel.keys.root_maxbytes = 25000000
kernel.keys.root_maxkeys = 1000000
kernel.kptr_restrict = 1
kernel.latencytop = 0
kernel.max_lock_depth = 1024
kernel.modprobe = /sbin/modprobe
kernel.modules_disabled = 0
kernel.msg_next_id = -1
kernel.msgmax = 8192
kernel.msgmnb = 16384
kernel.msgmni = 32000
kernel.ngroups_max = 65536
kernel.nmi_watchdog = 0
kernel.ns_last_pid = 32
kernel.numa_balancing = 0
kernel.numa_balancing_scan_delay_ms = 1000
kernel.numa_balancing_scan_period_max_ms = 60000
kernel.numa_balancing_scan_period_min_ms = 1000
kernel.numa_balancing_scan_size_mb = 256
kernel.osrelease = 5.4.92-flatcar
kernel.ostype = Linux
kernel.overflowgid = 65534
kernel.overflowuid = 65534
kernel.panic = 10
kernel.panic_on_io_nmi = 0
kernel.panic_on_oops = 1
kernel.panic_on_rcu_stall = 0
kernel.panic_on_unrecovered_nmi = 0
kernel.panic_on_warn = 0
kernel.panic_print = 0
kernel.perf_cpu_time_max_percent = 25
kernel.perf_event_max_contexts_per_stack = 8
kernel.perf_event_max_sample_rate = 100000
kernel.perf_event_max_stack = 127
kernel.perf_event_mlock_kb = 516
kernel.perf_event_paranoid = 2
kernel.pid_max = 4194304
kernel.poweroff_cmd = /sbin/poweroff
kernel.print-fatal-signals = 0
kernel.printk = 7 4 1 7
kernel.printk_delay = 0
kernel.printk_devkmsg = on
kernel.printk_ratelimit = 5
kernel.printk_ratelimit_burst = 10
kernel.pty.max = 4096
kernel.pty.nr = 0
kernel.pty.reserve = 1024
kernel.random.boot_id = d9c1486f-3058-4102-9f45-aff17c0d527e
kernel.random.entropy_avail = 3928
kernel.random.poolsize = 4096
kernel.random.read_wakeup_threshold = 64
kernel.random.urandom_min_reseed_secs = 60
kernel.random.uuid = a394fa62-3704-4ff7-99e6-9e2e3c69e881
kernel.random.write_wakeup_threshold = 3072
kernel.randomize_va_space = 2
kernel.real-root-dev = 0
kernel.sched_autogroup_enabled = 1
kernel.sched_cfs_bandwidth_slice_us = 5000
kernel.sched_child_runs_first = 0
kernel.sched_domain.cpu0.domain0.busy_factor = 32
kernel.sched_domain.cpu0.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu0.domain0.flags = 4783
kernel.sched_domain.cpu0.domain0.imbalance_pct = 110
kernel.sched_domain.cpu0.domain0.max_interval = 4
kernel.sched_domain.cpu0.domain0.max_newidle_lb_cost = 810525
kernel.sched_domain.cpu0.domain0.min_interval = 2
kernel.sched_domain.cpu0.domain0.name = SMT
kernel.sched_domain.cpu0.domain1.busy_factor = 32
kernel.sched_domain.cpu0.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu0.domain1.flags = 4655
kernel.sched_domain.cpu0.domain1.imbalance_pct = 117
kernel.sched_domain.cpu0.domain1.max_interval = 64
kernel.sched_domain.cpu0.domain1.max_newidle_lb_cost = 1420630
kernel.sched_domain.cpu0.domain1.min_interval = 32
kernel.sched_domain.cpu0.domain1.name = MC
kernel.sched_domain.cpu1.domain0.busy_factor = 32
kernel.sched_domain.cpu1.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu1.domain0.flags = 4783
kernel.sched_domain.cpu1.domain0.imbalance_pct = 110
kernel.sched_domain.cpu1.domain0.max_interval = 4
kernel.sched_domain.cpu1.domain0.max_newidle_lb_cost = 424325
kernel.sched_domain.cpu1.domain0.min_interval = 2
kernel.sched_domain.cpu1.domain0.name = SMT
kernel.sched_domain.cpu1.domain1.busy_factor = 32
kernel.sched_domain.cpu1.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu1.domain1.flags = 4655
kernel.sched_domain.cpu1.domain1.imbalance_pct = 117
kernel.sched_domain.cpu1.domain1.max_interval = 64
kernel.sched_domain.cpu1.domain1.max_newidle_lb_cost = 1209866
kernel.sched_domain.cpu1.domain1.min_interval = 32
kernel.sched_domain.cpu1.domain1.name = MC
kernel.sched_domain.cpu10.domain0.busy_factor = 32
kernel.sched_domain.cpu10.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu10.domain0.flags = 4783
kernel.sched_domain.cpu10.domain0.imbalance_pct = 110
kernel.sched_domain.cpu10.domain0.max_interval = 4
kernel.sched_domain.cpu10.domain0.max_newidle_lb_cost = 28656
kernel.sched_domain.cpu10.domain0.min_interval = 2
kernel.sched_domain.cpu10.domain0.name = SMT
kernel.sched_domain.cpu10.domain1.busy_factor = 32
kernel.sched_domain.cpu10.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu10.domain1.flags = 4655
kernel.sched_domain.cpu10.domain1.imbalance_pct = 117
kernel.sched_domain.cpu10.domain1.max_interval = 64
kernel.sched_domain.cpu10.domain1.max_newidle_lb_cost = 1161003
kernel.sched_domain.cpu10.domain1.min_interval = 32
kernel.sched_domain.cpu10.domain1.name = MC
kernel.sched_domain.cpu11.domain0.busy_factor = 32
kernel.sched_domain.cpu11.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu11.domain0.flags = 4783
kernel.sched_domain.cpu11.domain0.imbalance_pct = 110
kernel.sched_domain.cpu11.domain0.max_interval = 4
kernel.sched_domain.cpu11.domain0.max_newidle_lb_cost = 46900
kernel.sched_domain.cpu11.domain0.min_interval = 2
kernel.sched_domain.cpu11.domain0.name = SMT
kernel.sched_domain.cpu11.domain1.busy_factor = 32
kernel.sched_domain.cpu11.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu11.domain1.flags = 4655
kernel.sched_domain.cpu11.domain1.imbalance_pct = 117
kernel.sched_domain.cpu11.domain1.max_interval = 64
kernel.sched_domain.cpu11.domain1.max_newidle_lb_cost = 1164333
kernel.sched_domain.cpu11.domain1.min_interval = 32
kernel.sched_domain.cpu11.domain1.name = MC
kernel.sched_domain.cpu12.domain0.busy_factor = 32
kernel.sched_domain.cpu12.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu12.domain0.flags = 4783
kernel.sched_domain.cpu12.domain0.imbalance_pct = 110
kernel.sched_domain.cpu12.domain0.max_interval = 4
kernel.sched_domain.cpu12.domain0.max_newidle_lb_cost = 363278
kernel.sched_domain.cpu12.domain0.min_interval = 2
kernel.sched_domain.cpu12.domain0.name = SMT
kernel.sched_domain.cpu12.domain1.busy_factor = 32
kernel.sched_domain.cpu12.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu12.domain1.flags = 4655
kernel.sched_domain.cpu12.domain1.imbalance_pct = 117
kernel.sched_domain.cpu12.domain1.max_interval = 64
kernel.sched_domain.cpu12.domain1.max_newidle_lb_cost = 3187221
kernel.sched_domain.cpu12.domain1.min_interval = 32
kernel.sched_domain.cpu12.domain1.name = MC
kernel.sched_domain.cpu13.domain0.busy_factor = 32
kernel.sched_domain.cpu13.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu13.domain0.flags = 4783
kernel.sched_domain.cpu13.domain0.imbalance_pct = 110
kernel.sched_domain.cpu13.domain0.max_interval = 4
kernel.sched_domain.cpu13.domain0.max_newidle_lb_cost = 189693
kernel.sched_domain.cpu13.domain0.min_interval = 2
kernel.sched_domain.cpu13.domain0.name = SMT
kernel.sched_domain.cpu13.domain1.busy_factor = 32
kernel.sched_domain.cpu13.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu13.domain1.flags = 4655
kernel.sched_domain.cpu13.domain1.imbalance_pct = 117
kernel.sched_domain.cpu13.domain1.max_interval = 64
kernel.sched_domain.cpu13.domain1.max_newidle_lb_cost = 3057571
kernel.sched_domain.cpu13.domain1.min_interval = 32
kernel.sched_domain.cpu13.domain1.name = MC
kernel.sched_domain.cpu14.domain0.busy_factor = 32
kernel.sched_domain.cpu14.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu14.domain0.flags = 4783
kernel.sched_domain.cpu14.domain0.imbalance_pct = 110
kernel.sched_domain.cpu14.domain0.max_interval = 4
kernel.sched_domain.cpu14.domain0.max_newidle_lb_cost = 82026
kernel.sched_domain.cpu14.domain0.min_interval = 2
kernel.sched_domain.cpu14.domain0.name = SMT
kernel.sched_domain.cpu14.domain1.busy_factor = 32
kernel.sched_domain.cpu14.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu14.domain1.flags = 4655
kernel.sched_domain.cpu14.domain1.imbalance_pct = 117
kernel.sched_domain.cpu14.domain1.max_interval = 64
kernel.sched_domain.cpu14.domain1.max_newidle_lb_cost = 3576524
kernel.sched_domain.cpu14.domain1.min_interval = 32
kernel.sched_domain.cpu14.domain1.name = MC
kernel.sched_domain.cpu15.domain0.busy_factor = 32
kernel.sched_domain.cpu15.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu15.domain0.flags = 4783
kernel.sched_domain.cpu15.domain0.imbalance_pct = 110
kernel.sched_domain.cpu15.domain0.max_interval = 4
kernel.sched_domain.cpu15.domain0.max_newidle_lb_cost = 562416
kernel.sched_domain.cpu15.domain0.min_interval = 2
kernel.sched_domain.cpu15.domain0.name = SMT
kernel.sched_domain.cpu15.domain1.busy_factor = 32
kernel.sched_domain.cpu15.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu15.domain1.flags = 4655
kernel.sched_domain.cpu15.domain1.imbalance_pct = 117
kernel.sched_domain.cpu15.domain1.max_interval = 64
kernel.sched_domain.cpu15.domain1.max_newidle_lb_cost = 1164780
kernel.sched_domain.cpu15.domain1.min_interval = 32
kernel.sched_domain.cpu15.domain1.name = MC
kernel.sched_domain.cpu16.domain0.busy_factor = 32
kernel.sched_domain.cpu16.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu16.domain0.flags = 4783
kernel.sched_domain.cpu16.domain0.imbalance_pct = 110
kernel.sched_domain.cpu16.domain0.max_interval = 4
kernel.sched_domain.cpu16.domain0.max_newidle_lb_cost = 41205
kernel.sched_domain.cpu16.domain0.min_interval = 2
kernel.sched_domain.cpu16.domain0.name = SMT
kernel.sched_domain.cpu16.domain1.busy_factor = 32
kernel.sched_domain.cpu16.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu16.domain1.flags = 4655
kernel.sched_domain.cpu16.domain1.imbalance_pct = 117
kernel.sched_domain.cpu16.domain1.max_interval = 64
kernel.sched_domain.cpu16.domain1.max_newidle_lb_cost = 1300425
kernel.sched_domain.cpu16.domain1.min_interval = 32
kernel.sched_domain.cpu16.domain1.name = MC
kernel.sched_domain.cpu17.domain0.busy_factor = 32
kernel.sched_domain.cpu17.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu17.domain0.flags = 4783
kernel.sched_domain.cpu17.domain0.imbalance_pct = 110
kernel.sched_domain.cpu17.domain0.max_interval = 4
kernel.sched_domain.cpu17.domain0.max_newidle_lb_cost = 22749
kernel.sched_domain.cpu17.domain0.min_interval = 2
kernel.sched_domain.cpu17.domain0.name = SMT
kernel.sched_domain.cpu17.domain1.busy_factor = 32
kernel.sched_domain.cpu17.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu17.domain1.flags = 4655
kernel.sched_domain.cpu17.domain1.imbalance_pct = 117
kernel.sched_domain.cpu17.domain1.max_interval = 64
kernel.sched_domain.cpu17.domain1.max_newidle_lb_cost = 3331621
kernel.sched_domain.cpu17.domain1.min_interval = 32
kernel.sched_domain.cpu17.domain1.name = MC
kernel.sched_domain.cpu18.domain0.busy_factor = 32
kernel.sched_domain.cpu18.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu18.domain0.flags = 4783
kernel.sched_domain.cpu18.domain0.imbalance_pct = 110
kernel.sched_domain.cpu18.domain0.max_interval = 4
kernel.sched_domain.cpu18.domain0.max_newidle_lb_cost = 75224
kernel.sched_domain.cpu18.domain0.min_interval = 2
kernel.sched_domain.cpu18.domain0.name = SMT
kernel.sched_domain.cpu18.domain1.busy_factor = 32
kernel.sched_domain.cpu18.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu18.domain1.flags = 4655
kernel.sched_domain.cpu18.domain1.imbalance_pct = 117
kernel.sched_domain.cpu18.domain1.max_interval = 64
kernel.sched_domain.cpu18.domain1.max_newidle_lb_cost = 2507134
kernel.sched_domain.cpu18.domain1.min_interval = 32
kernel.sched_domain.cpu18.domain1.name = MC
kernel.sched_domain.cpu19.domain0.busy_factor = 32
kernel.sched_domain.cpu19.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu19.domain0.flags = 4783
kernel.sched_domain.cpu19.domain0.imbalance_pct = 110
kernel.sched_domain.cpu19.domain0.max_interval = 4
kernel.sched_domain.cpu19.domain0.max_newidle_lb_cost = 639722
kernel.sched_domain.cpu19.domain0.min_interval = 2
kernel.sched_domain.cpu19.domain0.name = SMT
kernel.sched_domain.cpu19.domain1.busy_factor = 32
kernel.sched_domain.cpu19.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu19.domain1.flags = 4655
kernel.sched_domain.cpu19.domain1.imbalance_pct = 117
kernel.sched_domain.cpu19.domain1.max_interval = 64
kernel.sched_domain.cpu19.domain1.max_newidle_lb_cost = 3582545
kernel.sched_domain.cpu19.domain1.min_interval = 32
kernel.sched_domain.cpu19.domain1.name = MC
kernel.sched_domain.cpu2.domain0.busy_factor = 32
kernel.sched_domain.cpu2.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu2.domain0.flags = 4783
kernel.sched_domain.cpu2.domain0.imbalance_pct = 110
kernel.sched_domain.cpu2.domain0.max_interval = 4
kernel.sched_domain.cpu2.domain0.max_newidle_lb_cost = 40827
kernel.sched_domain.cpu2.domain0.min_interval = 2
kernel.sched_domain.cpu2.domain0.name = SMT
kernel.sched_domain.cpu2.domain1.busy_factor = 32
kernel.sched_domain.cpu2.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu2.domain1.flags = 4655
kernel.sched_domain.cpu2.domain1.imbalance_pct = 117
kernel.sched_domain.cpu2.domain1.max_interval = 64
kernel.sched_domain.cpu2.domain1.max_newidle_lb_cost = 1936836
kernel.sched_domain.cpu2.domain1.min_interval = 32
kernel.sched_domain.cpu2.domain1.name = MC
kernel.sched_domain.cpu20.domain0.busy_factor = 32
kernel.sched_domain.cpu20.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu20.domain0.flags = 4783
kernel.sched_domain.cpu20.domain0.imbalance_pct = 110
kernel.sched_domain.cpu20.domain0.max_interval = 4
kernel.sched_domain.cpu20.domain0.max_newidle_lb_cost = 1407664
kernel.sched_domain.cpu20.domain0.min_interval = 2
kernel.sched_domain.cpu20.domain0.name = SMT
kernel.sched_domain.cpu20.domain1.busy_factor = 32
kernel.sched_domain.cpu20.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu20.domain1.flags = 4655
kernel.sched_domain.cpu20.domain1.imbalance_pct = 117
kernel.sched_domain.cpu20.domain1.max_interval = 64
kernel.sched_domain.cpu20.domain1.max_newidle_lb_cost = 1259411
kernel.sched_domain.cpu20.domain1.min_interval = 32
kernel.sched_domain.cpu20.domain1.name = MC
kernel.sched_domain.cpu21.domain0.busy_factor = 32
kernel.sched_domain.cpu21.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu21.domain0.flags = 4783
kernel.sched_domain.cpu21.domain0.imbalance_pct = 110
kernel.sched_domain.cpu21.domain0.max_interval = 4
kernel.sched_domain.cpu21.domain0.max_newidle_lb_cost = 228269
kernel.sched_domain.cpu21.domain0.min_interval = 2
kernel.sched_domain.cpu21.domain0.name = SMT
kernel.sched_domain.cpu21.domain1.busy_factor = 32
kernel.sched_domain.cpu21.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu21.domain1.flags = 4655
kernel.sched_domain.cpu21.domain1.imbalance_pct = 117
kernel.sched_domain.cpu21.domain1.max_interval = 64
kernel.sched_domain.cpu21.domain1.max_newidle_lb_cost = 1178419
kernel.sched_domain.cpu21.domain1.min_interval = 32
kernel.sched_domain.cpu21.domain1.name = MC
kernel.sched_domain.cpu22.domain0.busy_factor = 32
kernel.sched_domain.cpu22.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu22.domain0.flags = 4783
kernel.sched_domain.cpu22.domain0.imbalance_pct = 110
kernel.sched_domain.cpu22.domain0.max_interval = 4
kernel.sched_domain.cpu22.domain0.max_newidle_lb_cost = 38932
kernel.sched_domain.cpu22.domain0.min_interval = 2
kernel.sched_domain.cpu22.domain0.name = SMT
kernel.sched_domain.cpu22.domain1.busy_factor = 32
kernel.sched_domain.cpu22.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu22.domain1.flags = 4655
kernel.sched_domain.cpu22.domain1.imbalance_pct = 117
kernel.sched_domain.cpu22.domain1.max_interval = 64
kernel.sched_domain.cpu22.domain1.max_newidle_lb_cost = 3532426
kernel.sched_domain.cpu22.domain1.min_interval = 32
kernel.sched_domain.cpu22.domain1.name = MC
kernel.sched_domain.cpu23.domain0.busy_factor = 32
kernel.sched_domain.cpu23.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu23.domain0.flags = 4783
kernel.sched_domain.cpu23.domain0.imbalance_pct = 110
kernel.sched_domain.cpu23.domain0.max_interval = 4
kernel.sched_domain.cpu23.domain0.max_newidle_lb_cost = 1633777
kernel.sched_domain.cpu23.domain0.min_interval = 2
kernel.sched_domain.cpu23.domain0.name = SMT
kernel.sched_domain.cpu23.domain1.busy_factor = 32
kernel.sched_domain.cpu23.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu23.domain1.flags = 4655
kernel.sched_domain.cpu23.domain1.imbalance_pct = 117
kernel.sched_domain.cpu23.domain1.max_interval = 64
kernel.sched_domain.cpu23.domain1.max_newidle_lb_cost = 3077655
kernel.sched_domain.cpu23.domain1.min_interval = 32
kernel.sched_domain.cpu23.domain1.name = MC
kernel.sched_domain.cpu24.domain0.busy_factor = 32
kernel.sched_domain.cpu24.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu24.domain0.flags = 4783
kernel.sched_domain.cpu24.domain0.imbalance_pct = 110
kernel.sched_domain.cpu24.domain0.max_interval = 4
kernel.sched_domain.cpu24.domain0.max_newidle_lb_cost = 243766
kernel.sched_domain.cpu24.domain0.min_interval = 2
kernel.sched_domain.cpu24.domain0.name = SMT
kernel.sched_domain.cpu24.domain1.busy_factor = 32
kernel.sched_domain.cpu24.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu24.domain1.flags = 4655
kernel.sched_domain.cpu24.domain1.imbalance_pct = 117
kernel.sched_domain.cpu24.domain1.max_interval = 64
kernel.sched_domain.cpu24.domain1.max_newidle_lb_cost = 1213791
kernel.sched_domain.cpu24.domain1.min_interval = 32
kernel.sched_domain.cpu24.domain1.name = MC
kernel.sched_domain.cpu25.domain0.busy_factor = 32
kernel.sched_domain.cpu25.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu25.domain0.flags = 4783
kernel.sched_domain.cpu25.domain0.imbalance_pct = 110
kernel.sched_domain.cpu25.domain0.max_interval = 4
kernel.sched_domain.cpu25.domain0.max_newidle_lb_cost = 867909
kernel.sched_domain.cpu25.domain0.min_interval = 2
kernel.sched_domain.cpu25.domain0.name = SMT
kernel.sched_domain.cpu25.domain1.busy_factor = 32
kernel.sched_domain.cpu25.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu25.domain1.flags = 4655
kernel.sched_domain.cpu25.domain1.imbalance_pct = 117
kernel.sched_domain.cpu25.domain1.max_interval = 64
kernel.sched_domain.cpu25.domain1.max_newidle_lb_cost = 2251731
kernel.sched_domain.cpu25.domain1.min_interval = 32
kernel.sched_domain.cpu25.domain1.name = MC
kernel.sched_domain.cpu26.domain0.busy_factor = 32
kernel.sched_domain.cpu26.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu26.domain0.flags = 4783
kernel.sched_domain.cpu26.domain0.imbalance_pct = 110
kernel.sched_domain.cpu26.domain0.max_interval = 4
kernel.sched_domain.cpu26.domain0.max_newidle_lb_cost = 77515
kernel.sched_domain.cpu26.domain0.min_interval = 2
kernel.sched_domain.cpu26.domain0.name = SMT
kernel.sched_domain.cpu26.domain1.busy_factor = 32
kernel.sched_domain.cpu26.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu26.domain1.flags = 4655
kernel.sched_domain.cpu26.domain1.imbalance_pct = 117
kernel.sched_domain.cpu26.domain1.max_interval = 64
kernel.sched_domain.cpu26.domain1.max_newidle_lb_cost = 2568230
kernel.sched_domain.cpu26.domain1.min_interval = 32
kernel.sched_domain.cpu26.domain1.name = MC
kernel.sched_domain.cpu27.domain0.busy_factor = 32
kernel.sched_domain.cpu27.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu27.domain0.flags = 4783
kernel.sched_domain.cpu27.domain0.imbalance_pct = 110
kernel.sched_domain.cpu27.domain0.max_interval = 4
kernel.sched_domain.cpu27.domain0.max_newidle_lb_cost = 86016
kernel.sched_domain.cpu27.domain0.min_interval = 2
kernel.sched_domain.cpu27.domain0.name = SMT
kernel.sched_domain.cpu27.domain1.busy_factor = 32
kernel.sched_domain.cpu27.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu27.domain1.flags = 4655
kernel.sched_domain.cpu27.domain1.imbalance_pct = 117
kernel.sched_domain.cpu27.domain1.max_interval = 64
kernel.sched_domain.cpu27.domain1.max_newidle_lb_cost = 1370965
kernel.sched_domain.cpu27.domain1.min_interval = 32
kernel.sched_domain.cpu27.domain1.name = MC
kernel.sched_domain.cpu28.domain0.busy_factor = 32
kernel.sched_domain.cpu28.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu28.domain0.flags = 4783
kernel.sched_domain.cpu28.domain0.imbalance_pct = 110
kernel.sched_domain.cpu28.domain0.max_interval = 4
kernel.sched_domain.cpu28.domain0.max_newidle_lb_cost = 104349
kernel.sched_domain.cpu28.domain0.min_interval = 2
kernel.sched_domain.cpu28.domain0.name = SMT
kernel.sched_domain.cpu28.domain1.busy_factor = 32
kernel.sched_domain.cpu28.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu28.domain1.flags = 4655
kernel.sched_domain.cpu28.domain1.imbalance_pct = 117
kernel.sched_domain.cpu28.domain1.max_interval = 64
kernel.sched_domain.cpu28.domain1.max_newidle_lb_cost = 2565108
kernel.sched_domain.cpu28.domain1.min_interval = 32
kernel.sched_domain.cpu28.domain1.name = MC
kernel.sched_domain.cpu29.domain0.busy_factor = 32
kernel.sched_domain.cpu29.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu29.domain0.flags = 4783
kernel.sched_domain.cpu29.domain0.imbalance_pct = 110
kernel.sched_domain.cpu29.domain0.max_interval = 4
kernel.sched_domain.cpu29.domain0.max_newidle_lb_cost = 92169
kernel.sched_domain.cpu29.domain0.min_interval = 2
kernel.sched_domain.cpu29.domain0.name = SMT
kernel.sched_domain.cpu29.domain1.busy_factor = 32
kernel.sched_domain.cpu29.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu29.domain1.flags = 4655
kernel.sched_domain.cpu29.domain1.imbalance_pct = 117
kernel.sched_domain.cpu29.domain1.max_interval = 64
kernel.sched_domain.cpu29.domain1.max_newidle_lb_cost = 2689887
kernel.sched_domain.cpu29.domain1.min_interval = 32
kernel.sched_domain.cpu29.domain1.name = MC
kernel.sched_domain.cpu3.domain0.busy_factor = 32
kernel.sched_domain.cpu3.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu3.domain0.flags = 4783
kernel.sched_domain.cpu3.domain0.imbalance_pct = 110
kernel.sched_domain.cpu3.domain0.max_interval = 4
kernel.sched_domain.cpu3.domain0.max_newidle_lb_cost = 133683
kernel.sched_domain.cpu3.domain0.min_interval = 2
kernel.sched_domain.cpu3.domain0.name = SMT
kernel.sched_domain.cpu3.domain1.busy_factor = 32
kernel.sched_domain.cpu3.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu3.domain1.flags = 4655
kernel.sched_domain.cpu3.domain1.imbalance_pct = 117
kernel.sched_domain.cpu3.domain1.max_interval = 64
kernel.sched_domain.cpu3.domain1.max_newidle_lb_cost = 3189715
kernel.sched_domain.cpu3.domain1.min_interval = 32
kernel.sched_domain.cpu3.domain1.name = MC
kernel.sched_domain.cpu30.domain0.busy_factor = 32
kernel.sched_domain.cpu30.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu30.domain0.flags = 4783
kernel.sched_domain.cpu30.domain0.imbalance_pct = 110
kernel.sched_domain.cpu30.domain0.max_interval = 4
kernel.sched_domain.cpu30.domain0.max_newidle_lb_cost = 1033630
kernel.sched_domain.cpu30.domain0.min_interval = 2
kernel.sched_domain.cpu30.domain0.name = SMT
kernel.sched_domain.cpu30.domain1.busy_factor = 32
kernel.sched_domain.cpu30.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu30.domain1.flags = 4655
kernel.sched_domain.cpu30.domain1.imbalance_pct = 117
kernel.sched_domain.cpu30.domain1.max_interval = 64
kernel.sched_domain.cpu30.domain1.max_newidle_lb_cost = 1169757
kernel.sched_domain.cpu30.domain1.min_interval = 32
kernel.sched_domain.cpu30.domain1.name = MC
kernel.sched_domain.cpu31.domain0.busy_factor = 32
kernel.sched_domain.cpu31.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu31.domain0.flags = 4783
kernel.sched_domain.cpu31.domain0.imbalance_pct = 110
kernel.sched_domain.cpu31.domain0.max_interval = 4
kernel.sched_domain.cpu31.domain0.max_newidle_lb_cost = 125462
kernel.sched_domain.cpu31.domain0.min_interval = 2
kernel.sched_domain.cpu31.domain0.name = SMT
kernel.sched_domain.cpu31.domain1.busy_factor = 32
kernel.sched_domain.cpu31.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu31.domain1.flags = 4655
kernel.sched_domain.cpu31.domain1.imbalance_pct = 117
kernel.sched_domain.cpu31.domain1.max_interval = 64
kernel.sched_domain.cpu31.domain1.max_newidle_lb_cost = 2553007
kernel.sched_domain.cpu31.domain1.min_interval = 32
kernel.sched_domain.cpu31.domain1.name = MC
kernel.sched_domain.cpu4.domain0.busy_factor = 32
kernel.sched_domain.cpu4.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu4.domain0.flags = 4783
kernel.sched_domain.cpu4.domain0.imbalance_pct = 110
kernel.sched_domain.cpu4.domain0.max_interval = 4
kernel.sched_domain.cpu4.domain0.max_newidle_lb_cost = 24448
kernel.sched_domain.cpu4.domain0.min_interval = 2
kernel.sched_domain.cpu4.domain0.name = SMT
kernel.sched_domain.cpu4.domain1.busy_factor = 32
kernel.sched_domain.cpu4.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu4.domain1.flags = 4655
kernel.sched_domain.cpu4.domain1.imbalance_pct = 117
kernel.sched_domain.cpu4.domain1.max_interval = 64
kernel.sched_domain.cpu4.domain1.max_newidle_lb_cost = 1178139
kernel.sched_domain.cpu4.domain1.min_interval = 32
kernel.sched_domain.cpu4.domain1.name = MC
kernel.sched_domain.cpu5.domain0.busy_factor = 32
kernel.sched_domain.cpu5.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu5.domain0.flags = 4783
kernel.sched_domain.cpu5.domain0.imbalance_pct = 110
kernel.sched_domain.cpu5.domain0.max_interval = 4
kernel.sched_domain.cpu5.domain0.max_newidle_lb_cost = 197808
kernel.sched_domain.cpu5.domain0.min_interval = 2
kernel.sched_domain.cpu5.domain0.name = SMT
kernel.sched_domain.cpu5.domain1.busy_factor = 32
kernel.sched_domain.cpu5.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu5.domain1.flags = 4655
kernel.sched_domain.cpu5.domain1.imbalance_pct = 117
kernel.sched_domain.cpu5.domain1.max_interval = 64
kernel.sched_domain.cpu5.domain1.max_newidle_lb_cost = 1403675
kernel.sched_domain.cpu5.domain1.min_interval = 32
kernel.sched_domain.cpu5.domain1.name = MC
kernel.sched_domain.cpu6.domain0.busy_factor = 32
kernel.sched_domain.cpu6.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu6.domain0.flags = 4783
kernel.sched_domain.cpu6.domain0.imbalance_pct = 110
kernel.sched_domain.cpu6.domain0.max_interval = 4
kernel.sched_domain.cpu6.domain0.max_newidle_lb_cost = 38016
kernel.sched_domain.cpu6.domain0.min_interval = 2
kernel.sched_domain.cpu6.domain0.name = SMT
kernel.sched_domain.cpu6.domain1.busy_factor = 32
kernel.sched_domain.cpu6.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu6.domain1.flags = 4655
kernel.sched_domain.cpu6.domain1.imbalance_pct = 117
kernel.sched_domain.cpu6.domain1.max_interval = 64
kernel.sched_domain.cpu6.domain1.max_newidle_lb_cost = 1180325
kernel.sched_domain.cpu6.domain1.min_interval = 32
kernel.sched_domain.cpu6.domain1.name = MC
kernel.sched_domain.cpu7.domain0.busy_factor = 32
kernel.sched_domain.cpu7.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu7.domain0.flags = 4783
kernel.sched_domain.cpu7.domain0.imbalance_pct = 110
kernel.sched_domain.cpu7.domain0.max_interval = 4
kernel.sched_domain.cpu7.domain0.max_newidle_lb_cost = 129279
kernel.sched_domain.cpu7.domain0.min_interval = 2
kernel.sched_domain.cpu7.domain0.name = SMT
kernel.sched_domain.cpu7.domain1.busy_factor = 32
kernel.sched_domain.cpu7.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu7.domain1.flags = 4655
kernel.sched_domain.cpu7.domain1.imbalance_pct = 117
kernel.sched_domain.cpu7.domain1.max_interval = 64
kernel.sched_domain.cpu7.domain1.max_newidle_lb_cost = 3525148
kernel.sched_domain.cpu7.domain1.min_interval = 32
kernel.sched_domain.cpu7.domain1.name = MC
kernel.sched_domain.cpu8.domain0.busy_factor = 32
kernel.sched_domain.cpu8.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu8.domain0.flags = 4783
kernel.sched_domain.cpu8.domain0.imbalance_pct = 110
kernel.sched_domain.cpu8.domain0.max_interval = 4
kernel.sched_domain.cpu8.domain0.max_newidle_lb_cost = 103920
kernel.sched_domain.cpu8.domain0.min_interval = 2
kernel.sched_domain.cpu8.domain0.name = SMT
kernel.sched_domain.cpu8.domain1.busy_factor = 32
kernel.sched_domain.cpu8.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu8.domain1.flags = 4655
kernel.sched_domain.cpu8.domain1.imbalance_pct = 117
kernel.sched_domain.cpu8.domain1.max_interval = 64
kernel.sched_domain.cpu8.domain1.max_newidle_lb_cost = 1093607
kernel.sched_domain.cpu8.domain1.min_interval = 32
kernel.sched_domain.cpu8.domain1.name = MC
kernel.sched_domain.cpu9.domain0.busy_factor = 32
kernel.sched_domain.cpu9.domain0.cache_nice_tries = 0
kernel.sched_domain.cpu9.domain0.flags = 4783
kernel.sched_domain.cpu9.domain0.imbalance_pct = 110
kernel.sched_domain.cpu9.domain0.max_interval = 4
kernel.sched_domain.cpu9.domain0.max_newidle_lb_cost = 1063082
kernel.sched_domain.cpu9.domain0.min_interval = 2
kernel.sched_domain.cpu9.domain0.name = SMT
kernel.sched_domain.cpu9.domain1.busy_factor = 32
kernel.sched_domain.cpu9.domain1.cache_nice_tries = 1
kernel.sched_domain.cpu9.domain1.flags = 4655
kernel.sched_domain.cpu9.domain1.imbalance_pct = 117
kernel.sched_domain.cpu9.domain1.max_interval = 64
kernel.sched_domain.cpu9.domain1.max_newidle_lb_cost = 1227805
kernel.sched_domain.cpu9.domain1.min_interval = 32
kernel.sched_domain.cpu9.domain1.name = MC
kernel.sched_latency_ns = 24000000
kernel.sched_migration_cost_ns = 500000
kernel.sched_min_granularity_ns = 3000000
kernel.sched_nr_migrate = 32
kernel.sched_rr_timeslice_ms = 100
kernel.sched_rt_period_us = 1000000
kernel.sched_rt_runtime_us = 950000
kernel.sched_schedstats = 0
kernel.sched_tunable_scaling = 1
kernel.sched_wakeup_granularity_ns = 4000000
kernel.seccomp.actions_avail = kill_process kill_thread trap errno user_notif trace log allow
kernel.seccomp.actions_logged = kill_process kill_thread trap errno user_notif trace log
kernel.sem = 32000 1024000000 500 32000
kernel.sem_next_id = -1
kernel.shm_next_id = -1
kernel.shm_rmid_forced = 0
kernel.shmall = 18446744073692774399
kernel.shmmax = 18446744073692774399
kernel.shmmni = 4096
kernel.soft_watchdog = 1
kernel.softlockup_all_cpu_backtrace = 0
kernel.softlockup_panic = 1
kernel.stack_tracer_enabled = 0
kernel.sysctl_writes_strict = 1
kernel.sysrq = 16
kernel.tainted = 0
kernel.threads-max = 1030258
kernel.timer_migration = 1
kernel.traceoff_on_warning = 0
kernel.tracepoint_printk = 0
kernel.unknown_nmi_panic = 0
kernel.unprivileged_bpf_disabled = 1
kernel.usermodehelper.bset = 4294967295 63
kernel.usermodehelper.inheritable = 4294967295 63
kernel.version = #1 SMP Wed Jan 27 16:53:10 -00 2021
kernel.watchdog = 1
kernel.watchdog_cpumask = 0-31
kernel.watchdog_thresh = 10
user.max_cgroup_namespaces = 515129
user.max_inotify_instances = 8192
user.max_inotify_watches = 128000
user.max_ipc_namespaces = 515129
user.max_mnt_namespaces = 515129
user.max_net_namespaces = 515129
user.max_pid_namespaces = 515129
user.max_user_namespaces = 515129
user.max_uts_namespaces = 515129
vm.admin_reserve_kbytes = 8192
vm.block_dump = 0
vm.compact_unevictable_allowed = 1
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.dirtytime_expire_seconds = 43200
vm.extfrag_threshold = 500
vm.hugetlb_shm_group = 0
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 256 256 32 0
vm.max_map_count = 262144
vm.memory_failure_early_kill = 0
vm.memory_failure_recovery = 1
vm.min_free_kbytes = 67584
vm.min_slab_ratio = 5
vm.min_unmapped_ratio = 1
vm.mmap_min_addr = 4096
vm.mmap_rnd_bits = 28
vm.mmap_rnd_compat_bits = 8
vm.nr_hugepages = 0
vm.nr_hugepages_mempolicy = 0
vm.nr_overcommit_hugepages = 0
vm.numa_stat = 1
vm.numa_zonelist_order = Node
vm.oom_dump_tasks = 1
vm.oom_kill_allocating_task = 0
vm.overcommit_kbytes = 0
vm.overcommit_memory = 1
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.panic_on_oom = 0
vm.percpu_pagelist_fraction = 0
vm.stat_interval = 1
vm.swappiness = 60
vm.user_reserve_kbytes = 131072
vm.vfs_cache_pressure = 100
vm.watermark_boost_factor = 15000
vm.watermark_scale_factor = 10
vm.zone_reclaim_mode = 0
```
Abridged `sysctl -a` output in a Kata container:
```
crypto.fips_enabled = 0
debug.exception-trace = 1
dev.scsi.logging_level = 0
dev.tty.ldisc_autoload = 0
fs.aio-max-nr = 65536
fs.aio-nr = 0
fs.dentry-state = 2074 1456 45 0 118 0
fs.dir-notify-enable = 1
fs.epoll.max_user_watches = 416665
fs.file-max = 9223372036854775807
fs.file-nr = 128 0 9223372036854775807
fs.inode-nr = 2060 0
fs.inode-state = 2060 0 0 0 0 0 0
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 8192
fs.lease-break-time = 45
fs.leases-enable = 1
fs.mount-max = 100000
fs.mqueue.msg_default = 10
fs.mqueue.msg_max = 10
fs.mqueue.msgsize_default = 8192
fs.mqueue.msgsize_max = 8192
fs.mqueue.queues_max = 256
fs.nr_open = 1073741816
fs.overflowgid = 65534
fs.overflowuid = 65534
fs.pipe-max-size = 1048576
fs.pipe-user-pages-hard = 0
fs.pipe-user-pages-soft = 16384
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 1
fs.protected_symlinks = 1
fs.suid_dumpable = 2
fs.xfs.error_level = 3
fs.xfs.filestream_centisecs = 3000
fs.xfs.inherit_noatime = 1
fs.xfs.inherit_nodefrag = 1
fs.xfs.inherit_nodump = 1
fs.xfs.inherit_nosymlinks = 0
fs.xfs.inherit_sync = 1
fs.xfs.irix_sgid_inherit = 0
fs.xfs.irix_symlink_mode = 0
fs.xfs.panic_mask = 0
fs.xfs.rotorstep = 1
fs.xfs.speculative_cow_prealloc_lifetime = 1800
fs.xfs.speculative_prealloc_lifetime = 300
fs.xfs.stats_clear = 0
fs.xfs.xfssyncd_centisecs = 3000
kernel.auto_msgmni = 0
kernel.bootloader_type = 176
kernel.bootloader_version = 0
kernel.bpf_stats_enabled = 0
kernel.cad_pid = 0
kernel.cap_last_cap = 37
kernel.core_pattern = |/usr/lib/systemd/coredump-wrapper %E %P %u %g %s %t %c %h %e
kernel.core_pipe_limit = 0
kernel.core_uses_pid = 1
kernel.ctrl-alt-del = 0
kernel.dmesg_restrict = 0
kernel.domainname = (none)
kernel.hostname = thread-creation-teardown-test-kata
kernel.io_delay_type = 0
kernel.kptr_restrict = 0
kernel.max_lock_depth = 1024
kernel.msgmax = 8192
kernel.msgmnb = 16384
kernel.msgmni = 32000
kernel.ngroups_max = 65536
kernel.osrelease = 5.6.0
kernel.ostype = Linux
kernel.overflowgid = 65534
kernel.overflowuid = 65534
kernel.panic = 1
kernel.panic_on_io_nmi = 0
kernel.panic_on_oops = 0
kernel.panic_on_rcu_stall = 0
kernel.panic_on_unrecovered_nmi = 0
kernel.panic_on_warn = 0
kernel.panic_print = 0
kernel.perf_cpu_time_max_percent = 25
kernel.perf_event_max_contexts_per_stack = 8
kernel.perf_event_max_sample_rate = 100000
kernel.perf_event_max_stack = 127
kernel.perf_event_mlock_kb = 516
kernel.perf_event_paranoid = 2
kernel.pid_max = 4194304
kernel.poweroff_cmd = /sbin/poweroff
kernel.print-fatal-signals = 0
kernel.printk = 4 4 1 7
kernel.printk_delay = 0
kernel.printk_devkmsg = on
kernel.printk_ratelimit = 5
kernel.printk_ratelimit_burst = 10
kernel.pty.max = 4096
kernel.pty.nr = 0
kernel.pty.reserve = 1024
kernel.random.boot_id = 514ef3ef-45ae-4ee5-a4a0-fec834aa78f9
kernel.random.entropy_avail = 1332
kernel.random.poolsize = 4096
kernel.random.urandom_min_reseed_secs = 60
kernel.random.uuid = dd253ae3-956a-4ce9-8afe-d43b5a5f82b5
kernel.random.write_wakeup_threshold = 896
kernel.randomize_va_space = 2
kernel.real-root-dev = 0
kernel.sched_cfs_bandwidth_slice_us = 5000
kernel.sched_child_runs_first = 0
kernel.sched_rr_timeslice_ms = 100
kernel.sched_rt_period_us = 1000000
kernel.sched_rt_runtime_us = 950000
kernel.seccomp.actions_avail = kill_process kill_thread trap errno user_notif trace log allow
kernel.seccomp.actions_logged = kill_process kill_thread trap errno user_notif trace log
kernel.sem = 32000 1024000000 500 32000
kernel.shm_rmid_forced = 0
kernel.shmall = 18446744073692774399
kernel.shmmax = 18446744073692774399
kernel.shmmni = 4096
kernel.sysctl_writes_strict = 1
kernel.tainted = 0
kernel.threads-max = 15894
kernel.timer_migration = 1
kernel.unknown_nmi_panic = 0
kernel.unprivileged_bpf_disabled = 0
kernel.usermodehelper.bset = 4294967295 63
kernel.usermodehelper.inheritable = 4294967295 63
kernel.version = #1 SMP Thu Nov 12 07:13:17 UTC 2020
user.max_cgroup_namespaces = 7947
user.max_inotify_instances = 7947
user.max_inotify_watches = 128
user.max_ipc_namespaces = 7947
user.max_mnt_namespaces = 7947
user.max_net_namespaces = 7947
user.max_pid_namespaces = 7947
user.max_user_namespaces = 7947
user.max_uts_namespaces = 7947
vm.admin_reserve_kbytes = 8192
vm.block_dump = 0
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.dirtytime_expire_seconds = 43200
vm.hugetlb_shm_group = 0
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 256 256 32 0 0
vm.max_map_count = 65530
vm.min_free_kbytes = 12895
vm.mmap_min_addr = 4096
vm.mmap_rnd_bits = 28
vm.nr_hugepages = 0
vm.nr_overcommit_hugepages = 0
vm.oom_dump_tasks = 1
vm.oom_kill_allocating_task = 0
vm.overcommit_kbytes = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.panic_on_oom = 0
vm.percpu_pagelist_fraction = 0
vm.stat_interval = 1
vm.swappiness = 60
vm.user_reserve_kbytes = 131072
vm.vfs_cache_pressure = 100
vm.watermark_boost_factor = 15000
vm.watermark_scale_factor = 10
```
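A sketch of how the two dumps can be diffed directly, assuming the test pods are still running; the pod names are taken from the `kernel.hostname` values above, so adjust them to your deployment:
```
# Diff the runc and Kata sysctl state directly (net.* dropped, as above).
kubectl exec thread-creation-teardown-test-runc -- sysctl -a 2>/dev/null \
  | grep -v '^net\.' | sort > sysctl-runc.txt
kubectl exec thread-creation-teardown-test-kata -- sysctl -a 2>/dev/null \
  | grep -v '^net\.' | sort > sysctl-kata.txt
diff -u sysctl-runc.txt sysctl-kata.txt
```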
Additionally, here's the raw data from a full test run against an AWS `m5.metal` host (minus the `sysctl -a` and `ulimit -a` output):
## Thread creation and teardown

### Kata

```json
{ "tests" : [
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 100, "priority" : 1, "setupTotal" : 240, "teardownTotal" : 260, "overallTotal" : 500 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 100, "priority" : 5, "setupTotal" : 131, "teardownTotal" : 200, "overallTotal" : 331 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 100, "priority" : 10, "setupTotal" : 121, "teardownTotal" : 200, "overallTotal" : 321 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 20, "priority" : 1, "setupTotal" : 169, "teardownTotal" : 65, "overallTotal" : 234 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 20, "priority" : 5, "setupTotal" : 211, "teardownTotal" : 47, "overallTotal" : 258 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 20, "priority" : 10, "setupTotal" : 195, "teardownTotal" : 72, "overallTotal" : 267 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 5, "priority" : 1, "setupTotal" : 264, "teardownTotal" : 28, "overallTotal" : 292 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 5, "priority" : 5, "setupTotal" : 286, "teardownTotal" : 32, "overallTotal" : 318 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 5, "priority" : 10, "setupTotal" : 247, "teardownTotal" : 57, "overallTotal" : 304 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 3, "priority" : 1, "setupTotal" : 352, "teardownTotal" : 24, "overallTotal" : 376 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 3, "priority" : 5, "setupTotal" : 254, "teardownTotal" : 125, "overallTotal" : 379 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 3, "priority" : 10, "setupTotal" : 282, "teardownTotal" : 111, "overallTotal" : 393 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 2, "priority" : 1, "setupTotal" : 625, "teardownTotal" : 84, "overallTotal" : 709 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 2, "priority" : 5, "setupTotal" : 723, "teardownTotal" : 75, "overallTotal" : 798 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 2, "priority" : 10, "setupTotal" : 785, "teardownTotal" : 46, "overallTotal" : 831 }
] }
```

### runc

```json
{ "tests" : [
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 100, "priority" : 1, "setupTotal" : 303, "teardownTotal" : 273, "overallTotal" : 576 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 100, "priority" : 5, "setupTotal" : 233, "teardownTotal" : 201, "overallTotal" : 434 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 100, "priority" : 10, "setupTotal" : 316, "teardownTotal" : 234, "overallTotal" : 550 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 20, "priority" : 1, "setupTotal" : 360, "teardownTotal" : 146, "overallTotal" : 506 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 20, "priority" : 5, "setupTotal" : 436, "teardownTotal" : 158, "overallTotal" : 594 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 20, "priority" : 10, "setupTotal" : 437, "teardownTotal" : 262, "overallTotal" : 699 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 5, "priority" : 1, "setupTotal" : 901, "teardownTotal" : 407, "overallTotal" : 1308 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 5, "priority" : 5, "setupTotal" : 721, "teardownTotal" : 283, "overallTotal" : 1004 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 5, "priority" : 10, "setupTotal" : 699, "teardownTotal" : 391, "overallTotal" : 1090 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 3, "priority" : 1, "setupTotal" : 916, "teardownTotal" : 387, "overallTotal" : 1303 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 3, "priority" : 5, "setupTotal" : 1100, "teardownTotal" : 398, "overallTotal" : 1498 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 3, "priority" : 10, "setupTotal" : 817, "teardownTotal" : 389, "overallTotal" : 1206 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 2, "priority" : 1, "setupTotal" : 906, "teardownTotal" : 398, "overallTotal" : 1304 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 2, "priority" : 5, "setupTotal" : 1298, "teardownTotal" : 496, "overallTotal" : 1794 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 2, "priority" : 10, "setupTotal" : 1301, "teardownTotal" : 394, "overallTotal" : 1695 }
] }
```

## Thread yield test

### Kata

```csv
threads,total_ms,perthread_ms,total_iterations
1,1.0,1.0,0
2,1.0,0.5,1
4,1.0,0.25,5
8,1.0,0.125,31
16,34.0,2.125,17559
32,234.0,7.3125,122846
64,1735.0,27.109375,881922
128,6174.0,48.234375,3101978
256,18460.0,72.109375,9201969
512,55507.0,108.412109375,27718984
1024,183445.0,179.1455078125,91315102
2048,416338.0,203.2900390625,208236490
```

### runc

```csv
threads,total_ms,perthread_ms,total_iterations
1,0.0,0.0,0
2,1.0,0.5,0
4,0.0,0.0,4
8,1.0,0.125,29
16,2.0,0.125,834
32,3.0,0.09375,4564
64,5.0,0.078125,18609
128,1548.0,12.09375,767103
256,9798.0,38.2734375,4598115
512,35005.0,68.369140625,17005509
1024,125795.0,122.8466796875,59040260
2048,976892.0,476.998046875,451175271
```
And from an Azure D32s_v3 (nested virtualization) test:
## Thread creation and teardown

### Kata

```json
{ "tests" : [
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 100, "priority" : 1, "setupTotal" : 796, "teardownTotal" : 602, "overallTotal" : 1398 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 100, "priority" : 5, "setupTotal" : 1064, "teardownTotal" : 472, "overallTotal" : 1536 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 100, "priority" : 10, "setupTotal" : 614, "teardownTotal" : 247, "overallTotal" : 861 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 20, "priority" : 1, "setupTotal" : 745, "teardownTotal" : 86, "overallTotal" : 831 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 20, "priority" : 5, "setupTotal" : 747, "teardownTotal" : 58, "overallTotal" : 805 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 20, "priority" : 10, "setupTotal" : 745, "teardownTotal" : 96, "overallTotal" : 841 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 5, "priority" : 1, "setupTotal" : 2266, "teardownTotal" : 30, "overallTotal" : 2296 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 5, "priority" : 5, "setupTotal" : 1989, "teardownTotal" : 199, "overallTotal" : 2188 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 5, "priority" : 10, "setupTotal" : 1925, "teardownTotal" : 97, "overallTotal" : 2022 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 3, "priority" : 1, "setupTotal" : 143069, "teardownTotal" : 86, "overallTotal" : 143155 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 3, "priority" : 5, "setupTotal" : 212396, "teardownTotal" : 117, "overallTotal" : 212513 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 3, "priority" : 10, "setupTotal" : 189006, "teardownTotal" : 94, "overallTotal" : 189100 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 2, "priority" : 1, "setupTotal" : 1047869, "teardownTotal" : 92, "overallTotal" : 1047961 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 2, "priority" : 5, "setupTotal" : 921905, "teardownTotal" : 110, "overallTotal" : 922015 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 2, "priority" : 10, "setupTotal" : 684913, "teardownTotal" : 164, "overallTotal" : 685077 }
] }
```

### runc

```json
{ "tests" : [
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 100, "priority" : 1, "setupTotal" : 777, "teardownTotal" : 905, "overallTotal" : 1682 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 100, "priority" : 5, "setupTotal" : 767, "teardownTotal" : 309, "overallTotal" : 1076 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 100, "priority" : 10, "setupTotal" : 848, "teardownTotal" : 249, "overallTotal" : 1097 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 20, "priority" : 1, "setupTotal" : 1038, "teardownTotal" : 263, "overallTotal" : 1301 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 20, "priority" : 5, "setupTotal" : 1232, "teardownTotal" : 167, "overallTotal" : 1399 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 20, "priority" : 10, "setupTotal" : 1013, "teardownTotal" : 198, "overallTotal" : 1211 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 5, "priority" : 1, "setupTotal" : 1812, "teardownTotal" : 181, "overallTotal" : 1993 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 5, "priority" : 5, "setupTotal" : 1924, "teardownTotal" : 172, "overallTotal" : 2096 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 5, "priority" : 10, "setupTotal" : 2015, "teardownTotal" : 201, "overallTotal" : 2216 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 3, "priority" : 1, "setupTotal" : 2793, "teardownTotal" : 416, "overallTotal" : 3209 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 3, "priority" : 5, "setupTotal" : 2780, "teardownTotal" : 193, "overallTotal" : 2973 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 3, "priority" : 10, "setupTotal" : 2912, "teardownTotal" : 387, "overallTotal" : 3299 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 2, "priority" : 1, "setupTotal" : 3601, "teardownTotal" : 197, "overallTotal" : 3798 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 2, "priority" : 5, "setupTotal" : 3608, "teardownTotal" : 192, "overallTotal" : 3800 },
  { "threadCreationTest2": "start", "repeats" : 2, "threads" : 1024, "sleep" : 2, "priority" : 10, "setupTotal" : 3507, "teardownTotal" : 407, "overallTotal" : 3914 }
] }
```

## Thread yield test

### Kata

```csv
threads,total_ms,perthread_ms,total_iterations
1,3.0,3.0,0
2,1.0,0.5,1
4,2.0,0.5,19
8,11.0,1.375,821
16,36.0,2.25,17610
32,236.0,7.375,119209
64,2084.0,32.5625,988165
128,8761.0,68.4453125,4193456
256,18233.0,71.22265625,8739934
512,61560.0,120.234375,28767234
1024,163144.0,159.3203125,77141876
2048,798474.0,389.8798828125,368229556
```

### runc

```csv
threads,total_ms,perthread_ms,total_iterations
1,1.0,1.0,0
2,1.0,0.5,2
4,1.0,0.25,8
8,1.0,0.125,47
16,3.0,0.1875,1040
32,11.0,0.34375,15044
64,516.0,8.0625,278815
128,2602.0,20.328125,1310917
256,8303.0,32.43359375,3949831
512,32494.0,63.46484375,15817178
1024,91897.0,89.7431640625,54415658
2048,270204.0,131.935546875,129978285
```