kata-containers / runtime

Kata Containers version 1.x runtime (for version 2.x see https://github.com/kata-containers/kata-containers).
https://katacontainers.io/
Apache License 2.0

/dev/ptp0 is missing in Kata 1.11.3 VMs, leading to clock skew #3014

Closed: evanfoster closed this issue 3 years ago

evanfoster commented 3 years ago

Description of problem

/dev/ptp0 is missing when launching a Kata pod using Kata 1.11.3. Because this device is absent, chronyd can't start, and the guest accumulates clock skew.
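For anyone hitting the same symptom, a quick way to check from a guest debug shell is below. This is a sketch: /dev/ptp0 is normally exposed by the kernel's kvm-ptp driver (`ptp_kvm`), which may be built in rather than modular, so a failed `modinfo` alone isn't conclusive.

```shell
# Check whether the PTP clock device chronyd depends on exists.
if [ -e /dev/ptp0 ]; then
  echo "ptp0 present"
else
  echo "ptp0 missing"   # matches the failure described in this issue
fi
# If missing, see whether the guest kernel ships the driver as a module
# (it may also be built in, in which case modinfo fails on a good kernel too).
modinfo ptp_kvm >/dev/null 2>&1 && echo "ptp_kvm module available" \
                                || echo "ptp_kvm module not found"
```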

I built a debug image and consoled into a running Kata VM. Chronyd wasn't running:

root@clr-6b26754e3b114ec0993809a092cbd421 / # systemctl status chronyd
● chronyd.service - NTP client/server
     Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled)
     Active: inactive (dead)
  Condition: start condition failed at Wed 2020-10-14 18:52:32 UTC; 13min ago
             └─ ConditionPathExists=/dev/ptp0 was not met
       Docs: man:chronyd(8)
             man:chrony.conf(5)
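The `ConditionPathExists=/dev/ptp0` gate exists because the guest image configures chrony to sync from the KVM PTP hardware clock rather than from network NTP. The relevant chrony.conf directive typically looks like the fragment below (the exact file path and polling options used in the Kata guest image are assumptions; `refclock PHC` is standard chrony syntax):

```
# chrony.conf fragment (illustrative): use the host clock exposed
# through the KVM PTP device as a reference clock.
refclock PHC /dev/ptp0 poll 3
```

With no /dev/ptp0, the unit's start condition fails, chronyd stays inactive, and nothing corrects the guest clock.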

Here's the output of dmesg:

Show dmesg output

``` [ 0.000000] Linux version 5.6.0 (runner@fv-az8) (gcc version 5.5.0 20171010 (Ubuntu 5.5.0-12ubuntu1~16.04)) #1 SMP Thu Sep 3 19:42:17 UTC 2020 [ 0.000000] Command line: tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k console=hvc0 console=hvc1 iommu=off cryptomgr.notests net.ifnames=0 pci=lastbus=0 root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro ro rootfstype=ext4 debug systemd.show_status=true systemd.log_level=debug panic=1 nr_cpus=16 agent.use_vsock=true systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none agent.debug_console [ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' [ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 [ 0.000000] x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 [ 0.000000] x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 [ 0.000000] x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 [ 0.000000] x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 [ 0.000000] x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 [ 0.000000] x86/fpu: Enabled xstate features 0xff, context size is 2560 bytes, using 'compacted' format. 
[ 0.000000] BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000007ffdefff] usable [ 0.000000] BIOS-e820: [mem 0x000000007ffdf000-0x000000007fffffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] SMBIOS 2.8 present. [ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 [ 0.000000] Hypervisor detected: KVM [ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00 [ 0.000000] kvm-clock: cpu 0, msr 21f9001, primary cpu clock [ 0.000000] kvm-clock: using sched offset of 299242051 cycles [ 0.000003] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns [ 0.000005] tsc: Detected 2095.198 MHz processor [ 0.000734] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved [ 0.000736] e820: remove [mem 0x000a0000-0x000fffff] usable [ 0.000740] last_pfn = 0x7ffdf max_arch_pfn = 0x400000000 [ 0.000883] MTRR default type: write-back [ 0.000884] MTRR fixed ranges enabled: [ 0.000885] 00000-9FFFF write-back [ 0.000886] A0000-BFFFF uncachable [ 0.000887] C0000-FFFFF write-protect [ 0.000887] MTRR variable ranges enabled: [ 0.000889] 0 base 0080000000 mask FF80000000 uncachable [ 0.000889] 1 disabled [ 0.000890] 2 disabled [ 0.000890] 3 disabled [ 0.000891] 4 disabled [ 0.000891] 5 disabled [ 0.000892] 6 disabled [ 0.000892] 7 disabled [ 0.000920] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT [ 0.010465] found SMP MP-table at [mem 0x000f5950-0x000f595f] [ 0.010860] BRK [0x02401000, 0x02401fff] PGTABLE [ 0.010939] BRK 
[0x02402000, 0x02402fff] PGTABLE [ 0.010941] BRK [0x02403000, 0x02403fff] PGTABLE [ 0.011001] BRK [0x02404000, 0x02404fff] PGTABLE [ 0.011130] BRK [0x02405000, 0x02405fff] PGTABLE [ 0.011229] ACPI: Early table checksum verification disabled [ 0.011275] ACPI: RSDP 0x00000000000F5770 000014 (v00 BOCHS ) [ 0.011296] ACPI: RSDT 0x000000007FFE399B 00003C (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001) [ 0.011310] ACPI: FACP 0x000000007FFE2CC2 000074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001) [ 0.011324] ACPI: DSDT 0x000000007FFDF040 003C82 (v01 BOCHS BXPCDSDT 00000001 BXPC 00000001) [ 0.011347] ACPI: FACS 0x000000007FFDF000 000040 [ 0.011350] ACPI: APIC 0x000000007FFE2D36 0000F0 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001) [ 0.011353] ACPI: HPET 0x000000007FFE2E26 000038 (v01 BOCHS BXPCHPET 00000001 BXPC 00000001) [ 0.011356] ACPI: SRAT 0x000000007FFE2E5E 0001D0 (v01 BOCHS BXPCSRAT 00000001 BXPC 00000001) [ 0.011360] ACPI: SSDT 0x000000007FFE302E 00088D (v01 BOCHS NVDIMM 00000001 BXPC 00000001) [ 0.011363] ACPI: NFIT 0x000000007FFE38BB 0000E0 (v01 BOCHS BXPCNFIT 00000001 BXPC 00000001) [ 0.011371] ACPI: Local APIC address 0xfee00000 [ 0.011512] Zone ranges: [ 0.011513] DMA [mem 0x0000000000001000-0x0000000000ffffff] [ 0.011514] DMA32 [mem 0x0000000001000000-0x000000007ffdefff] [ 0.011515] Normal empty [ 0.011516] Device empty [ 0.011517] Movable zone start for each node [ 0.011518] Early memory node ranges [ 0.011519] node 0: [mem 0x0000000000001000-0x000000000009efff] [ 0.011520] node 0: [mem 0x0000000000100000-0x000000007ffdefff] [ 0.011572] Zeroed struct page in unavailable ranges: 131 pages [ 0.011573] Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdefff] [ 0.011575] On node 0 totalpages: 524157 [ 0.011576] DMA zone: 64 pages used for memmap [ 0.011577] DMA zone: 21 pages reserved [ 0.011578] DMA zone: 3998 pages, LIFO batch:0 [ 0.012115] DMA32 zone: 8128 pages used for memmap [ 0.012116] DMA32 zone: 520159 pages, LIFO batch:63 [ 0.089851] ACPI: PM-Timer IO 
Port: 0x608 [ 0.089856] ACPI: Local APIC address 0xfee00000 [ 0.089877] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) [ 0.089981] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 [ 0.089984] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) [ 0.089986] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) [ 0.089987] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.089988] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) [ 0.089989] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) [ 0.089991] ACPI: IRQ0 used by override. [ 0.089992] ACPI: IRQ5 used by override. [ 0.089993] ACPI: IRQ9 used by override. [ 0.089994] ACPI: IRQ10 used by override. [ 0.089994] ACPI: IRQ11 used by override. [ 0.089997] Using ACPI (MADT) for SMP configuration information [ 0.089999] ACPI: HPET id: 0x8086a201 base: 0xfed00000 [ 0.090030] smpboot: Allowing 16 CPUs, 15 hotplug CPUs [ 0.090062] KVM setup pv remote TLB flush [ 0.090080] [mem 0x80000000-0xfeffbfff] available for PCI devices [ 0.090081] Booting paravirtualized kernel on KVM [ 0.090083] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns [ 0.093764] setup_percpu: NR_CPUS:240 nr_cpumask_bits:240 nr_cpu_ids:16 nr_node_ids:1 [ 0.105252] percpu: Embedded 42 pages/cpu s140120 r0 d31912 u262144 [ 0.105264] pcpu-alloc: s140120 r0 d31912 u262144 alloc=1*2097152 [ 0.105266] pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 [ 0.105356] KVM setup async PF for cpu 0 [ 0.105380] kvm-stealtime: cpu 0, msr 7da17e40 [ 0.105386] Built 1 zonelists, mobility grouping on. 
Total pages: 515944 [ 0.105389] Kernel command line: tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k console=hvc0 console=hvc1 iommu=off cryptomgr.notests net.ifnames=0 pci=lastbus=0 root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro ro rootfstype=ext4 debug systemd.show_status=true systemd.log_level=debug panic=1 nr_cpus=16 agent.use_vsock=true systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none agent.debug_console [ 0.112249] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) [ 0.115418] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) [ 0.115843] mem auto-init: stack:off, heap alloc:off, heap free:off [ 0.120828] Memory: 2037452K/2096628K available (10242K kernel code, 481K rwdata, 1344K rodata, 828K init, 2152K bss, 59176K reserved, 0K cma-reserved) [ 0.120992] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 [ 0.121042] Kernel/User page tables isolation: enabled [ 0.121485] rcu: Hierarchical RCU implementation. [ 0.121487] rcu: RCU restricting CPUs from NR_CPUS=240 to nr_cpu_ids=16. [ 0.121489] All grace periods are expedited (rcu_expedited). [ 0.121490] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies. [ 0.121491] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 [ 0.121536] NR_IRQS: 15616, nr_irqs: 552, preallocated irqs: 16 [ 0.121864] rcu: Offload RCU callbacks from CPUs: (none). [ 0.121935] random: get_random_bytes called from start_kernel+0x292/0x44a with crng_init=0 [ 0.122304] Console: colour *CGA 80x25 [ 0.122313] ACPI: Core revision 20200110 [ 0.122766] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns [ 0.123023] APIC: Switch to symmetric I/O mode setup [ 0.123939] x2apic enabled [ 0.124848] Switched APIC routing to physical x2apic. 
[ 0.124855] KVM setup pv IPIs [ 0.128866] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 [ 0.128944] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x1e337849509, max_idle_ns: 440795281251 ns [ 0.128947] Calibrating delay loop (skipped) preset value.. 4190.39 BogoMIPS (lpj=8380792) [ 0.128949] pid_max: default: 32768 minimum: 301 [ 0.128972] LSM: Security Framework initializing [ 0.129068] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) [ 0.129141] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) [ 0.130523] x86/cpu: User Mode Instruction Prevention (UMIP) activated [ 0.130540] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 [ 0.130541] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 [ 0.130542] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization [ 0.130546] Spectre V2 : Mitigation: Full generic retpoline [ 0.130547] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch [ 0.130548] Speculative Store Bypass: Vulnerable [ 0.130549] TAA: Mitigation: Clear CPU buffers [ 0.130550] MDS: Mitigation: Clear CPU buffers [ 0.131032] Freeing SMP alternatives memory: 28K [ 0.133534] TSC deadline timer enabled [ 0.133582] smpboot: CPU0: Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x4) [ 0.133945] Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. [ 0.134146] rcu: Hierarchical SRCU implementation. [ 0.135218] smp: Bringing up secondary CPUs ... 
[ 0.135220] smp: Brought up 1 node, 1 CPU [ 0.135221] smpboot: Max logical packages: 16 [ 0.135222] smpboot: Total of 1 processors activated (4190.39 BogoMIPS) [ 0.136147] devtmpfs: initialized [ 0.136186] x86/mm: Memory block size: 128MB [ 0.136657] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns [ 0.136661] futex hash table entries: 4096 (order: 6, 262144 bytes, linear) [ 0.136946] thermal_sys: Registered thermal governor 'step_wise' [ 0.136977] NET: Registered protocol family 16 [ 0.137149] cpuidle: using governor menu [ 0.137261] ACPI: bus type PCI registered [ 0.137262] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 [ 0.137349] PCI: Using configuration type 1 for base access [ 0.138451] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages [ 0.138734] ACPI: Added _OSI(Module Device) [ 0.138736] ACPI: Added _OSI(Processor Device) [ 0.138737] ACPI: Added _OSI(3.0 _SCP Extensions) [ 0.138739] ACPI: Added _OSI(Processor Aggregator Device) [ 0.138741] ACPI: Added _OSI(Linux-Dell-Video) [ 0.138742] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) [ 0.138744] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) [ 0.140896] ACPI: 2 ACPI AML tables successfully acquired and loaded [ 0.143167] ACPI: Interpreter enabled [ 0.143172] ACPI: (supports S0 S5) [ 0.143173] ACPI: Using IOAPIC for interrupt routing [ 0.143184] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [ 0.143545] ACPI: Enabled 4 GPEs in block 00 to 0F [ 0.161164] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) [ 0.161169] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] [ 0.161643] acpiphp: Slot [3] registered [ 0.161683] acpiphp: Slot [4] registered [ 0.161720] acpiphp: Slot [5] registered [ 0.161758] acpiphp: Slot [6] registered [ 0.161808] acpiphp: Slot [7] registered [ 0.161846] acpiphp: Slot [8] registered [ 0.161884] acpiphp: Slot [9] registered [ 0.161932] acpiphp: Slot [10] 
registered [ 0.161970] acpiphp: Slot [11] registered [ 0.162007] acpiphp: Slot [12] registered [ 0.162044] acpiphp: Slot [13] registered [ 0.162093] acpiphp: Slot [14] registered [ 0.162130] acpiphp: Slot [15] registered [ 0.162167] acpiphp: Slot [16] registered [ 0.162204] acpiphp: Slot [17] registered [ 0.162241] acpiphp: Slot [18] registered [ 0.162279] acpiphp: Slot [19] registered [ 0.162327] acpiphp: Slot [20] registered [ 0.162364] acpiphp: Slot [21] registered [ 0.162411] acpiphp: Slot [22] registered [ 0.162448] acpiphp: Slot [23] registered [ 0.162485] acpiphp: Slot [24] registered [ 0.162522] acpiphp: Slot [25] registered [ 0.162571] acpiphp: Slot [26] registered [ 0.162608] acpiphp: Slot [27] registered [ 0.162645] acpiphp: Slot [28] registered [ 0.162682] acpiphp: Slot [29] registered [ 0.162718] acpiphp: Slot [30] registered [ 0.162755] acpiphp: Slot [31] registered [ 0.162789] PCI host bridge to bus 0000:00 [ 0.162791] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] [ 0.162793] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] [ 0.162794] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] [ 0.162796] pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] [ 0.162797] pci_bus 0000:00: root bus resource [mem 0x1d00000000-0x1f00207fff window] [ 0.162798] pci_bus 0000:00: root bus resource [bus 00-ff] [ 0.162920] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 [ 0.164367] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 [ 0.166507] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 [ 0.170646] pci 0000:00:01.1: reg 0x20: [io 0xd100-0xd10f] [ 0.172941] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] [ 0.172943] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] [ 0.172944] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] [ 0.172945] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] [ 0.173434] pci 0000:00:01.3: [8086:7113] type 00 class 
0x068000 [ 0.175170] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI [ 0.175201] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB [ 0.175984] pci 0000:00:02.0: [1b36:0001] type 01 class 0x060400 [ 0.177682] pci 0000:00:02.0: reg 0x10: [mem 0x1d00000000-0x1d000000ff 64bit] [ 0.180958] pci 0000:00:03.0: [1af4:1003] type 00 class 0x078000 [ 0.182354] pci 0000:00:03.0: reg 0x10: [io 0xd000-0xd03f] [ 0.184048] pci 0000:00:03.0: reg 0x14: [mem 0xfea00000-0xfea00fff] [ 0.190842] pci 0000:00:04.0: [1af4:1004] type 00 class 0x010000 [ 0.192690] pci 0000:00:04.0: reg 0x10: [io 0xd040-0xd07f] [ 0.194444] pci 0000:00:04.0: reg 0x14: [mem 0xfea01000-0xfea01fff] [ 0.200542] pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 [ 0.201960] pci 0000:00:05.0: reg 0x10: [io 0xd0c0-0xd0df] [ 0.206932] pci 0000:00:05.0: reg 0x20: [mem 0x1f00200000-0x1f00203fff 64bit pref] [ 0.209777] pci 0000:00:06.0: [1af4:1012] type 00 class 0x078000 [ 0.210946] pci 0000:00:06.0: reg 0x10: [io 0xd0e0-0xd0ff] [ 0.211891] pci 0000:00:06.0: reg 0x14: [mem 0xfea02000-0xfea02fff] [ 0.218689] pci 0000:00:07.0: [1af4:105a] type 00 class 0x018000 [ 0.229333] pci 0000:00:07.0: reg 0x14: [mem 0xfea03000-0xfea03fff] [ 0.233382] pci 0000:00:07.0: reg 0x18: [mem 0x1e00000000-0x1effffffff 64bit pref] [ 0.237347] pci 0000:00:07.0: reg 0x20: [mem 0x1f00204000-0x1f00207fff 64bit pref] [ 0.244946] pci 0000:00:08.0: [1af4:1000] type 00 class 0x020000 [ 0.246173] pci 0000:00:08.0: reg 0x10: [io 0xd080-0xd0bf] [ 0.247072] pci 0000:00:08.0: reg 0x14: [mem 0xfea04000-0xfea04fff] [ 0.254843] pci_bus 0000:01: extended config space not accessible [ 0.255276] acpiphp: Slot [0] registered [ 0.255342] acpiphp: Slot [1] registered [ 0.255380] acpiphp: Slot [2] registered [ 0.255429] acpiphp: Slot [3-2] registered [ 0.255491] acpiphp: Slot [4-2] registered [ 0.255539] acpiphp: Slot [5-2] registered [ 0.255578] acpiphp: Slot [6-2] registered [ 0.255625] acpiphp: Slot [7-2] registered [ 0.255665] 
acpiphp: Slot [8-2] registered [ 0.255723] acpiphp: Slot [9-2] registered [ 0.255775] acpiphp: Slot [10-2] registered [ 0.255823] acpiphp: Slot [11-2] registered [ 0.255863] acpiphp: Slot [12-2] registered [ 0.255910] acpiphp: Slot [13-2] registered [ 0.255950] acpiphp: Slot [14-2] registered [ 0.255997] acpiphp: Slot [15-2] registered [ 0.256048] acpiphp: Slot [16-2] registered [ 0.256095] acpiphp: Slot [17-2] registered [ 0.256135] acpiphp: Slot [18-2] registered [ 0.256182] acpiphp: Slot [19-2] registered [ 0.256222] acpiphp: Slot [20-2] registered [ 0.256268] acpiphp: Slot [21-2] registered [ 0.256309] acpiphp: Slot [22-2] registered [ 0.256366] acpiphp: Slot [23-2] registered [ 0.256406] acpiphp: Slot [24-2] registered [ 0.256452] acpiphp: Slot [25-2] registered [ 0.256492] acpiphp: Slot [26-2] registered [ 0.256539] acpiphp: Slot [27-2] registered [ 0.256588] acpiphp: Slot [28-2] registered [ 0.256646] acpiphp: Slot [29-2] registered [ 0.256697] acpiphp: Slot [30-2] registered [ 0.256738] acpiphp: Slot [31-2] registered [ 0.257686] pci 0000:00:02.0: PCI bridge to [bus 01] [ 0.257745] pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] [ 0.257804] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] [ 0.257919] pci 0000:00:02.0: bridge window [mem 0x1f00000000-0x1f001fffff 64bit pref] [ 0.258736] pci_bus 0000:00: on NUMA node 0 [ 0.259696] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11) [ 0.259931] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11) [ 0.260118] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11) [ 0.260309] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11) [ 0.260423] ACPI: PCI Interrupt Link [LNKS] (IRQs *9) [ 0.265753] vgaarb: loaded [ 0.266481] SCSI subsystem initialized [ 0.266637] PCI: Using ACPI for IRQ routing [ 0.266645] PCI: pci_cache_line_size set to 64 bytes [ 0.267323] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] [ 0.267332] e820: reserve RAM buffer [mem 0x7ffdf000-0x7fffffff] [ 0.268485] clocksource: Switched to clocksource 
kvm-clock [ 0.268642] pnp: PnP ACPI init [ 0.268642] pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active) [ 0.268642] pnp 00:01: Plug and Play ACPI device, IDs PNP0303 (active) [ 0.268642] pnp 00:02: Plug and Play ACPI device, IDs PNP0f13 (active) [ 0.268642] pnp 00:03: [dma 2] [ 0.268642] pnp 00:03: Plug and Play ACPI device, IDs PNP0700 (active) [ 0.269120] pnp: PnP ACPI: found 4 devices [ 0.275970] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns [ 0.275998] pci 0000:00:02.0: PCI bridge to [bus 01] [ 0.276032] pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] [ 0.276821] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] [ 0.277332] pci 0000:00:02.0: bridge window [mem 0x1f00000000-0x1f001fffff 64bit pref] [ 0.278271] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] [ 0.278273] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] [ 0.278274] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] [ 0.278275] pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] [ 0.278277] pci_bus 0000:00: resource 8 [mem 0x1d00000000-0x1f00207fff window] [ 0.278278] pci_bus 0000:01: resource 0 [io 0xc000-0xcfff] [ 0.278280] pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] [ 0.278281] pci_bus 0000:01: resource 2 [mem 0x1f00000000-0x1f001fffff 64bit pref] [ 0.278455] NET: Registered protocol family 2 [ 0.279102] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) [ 0.279142] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) [ 0.279450] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) [ 0.280002] TCP: Hash tables configured (established 16384 bind 16384) [ 0.280525] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) [ 0.280596] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) [ 0.280805] NET: Registered protocol family 1 [ 0.280886] pci 0000:00:01.0: PIIX3: Enabling Passive Release [ 0.280927] pci 
0000:00:00.0: Limiting direct PCI/PCI transfers [ 0.281001] pci 0000:00:01.0: Activating ISA DMA hang workarounds [ 0.281346] PCI: CLS 0 bytes, default 64 [ 0.281746] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1e337849509, max_idle_ns: 440795281251 ns [ 0.282663] workingset: timestamp_bits=46 max_order=19 bucket_order=0 [ 0.285882] fuse: init (API version 7.31) [ 0.286026] SGI XFS with security attributes, no debug enabled [ 0.286726] 9p: Installing v9fs 9p2000 file system support [ 0.286905] NET: Registered protocol family 38 [ 0.286917] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) [ 0.287646] shpchp 0000:00:02.0: Requesting control of SHPC hotplug via OSHP (\_SB_.PCI0.S10_) [ 0.287651] shpchp 0000:00:02.0: Requesting control of SHPC hotplug via OSHP (\_SB_.PCI0) [ 0.287654] shpchp 0000:00:02.0: Cannot get control of SHPC hotplug [ 0.287660] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 [ 0.287783] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 [ 0.287800] ACPI: Power Button [PWRF] [ 0.310904] PCI Interrupt Link [LNKC] enabled at IRQ 11 [ 0.311059] virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver [ 0.330161] PCI Interrupt Link [LNKD] enabled at IRQ 10 [ 0.330323] virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver [ 0.348766] PCI Interrupt Link [LNKA] enabled at IRQ 10 [ 0.369955] PCI Interrupt Link [LNKB] enabled at IRQ 11 [ 0.370109] virtio-pci 0000:00:06.0: virtio_pci: leaving for legacy driver [ 0.396164] virtiofs virtio4: Cache len: 0x100000000 @ 0x1e00000000 [ 0.549296] memmap_init_zone_device initialised 1048576 pages in 152ms [ 0.569012] virtio-pci 0000:00:08.0: virtio_pci: leaving for legacy driver [ 0.569818] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled [ 0.600263] printk: console [hvc0] enabled [ 0.602728] random: fast init done [ 0.603022] random: crng init done [ 0.604428] brd: module loaded [ 0.608042] loop: module loaded [ 0.619313] scsi 
host0: Virtio SCSI HBA [ 0.623443] intel_pstate: CPU model not supported [ 0.624807] xt_time: kernel timezone is -0000 [ 0.625027] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 0.626432] memmap_init_zone_device initialised 65536 pages in 16ms [ 0.626779] pmem0: p1 [ 0.626956] pmem0: detected capacity change from 0 to 266338304 [ 0.627098] IPVS: Connection hash table configured (size=4096, memory=64Kbytes) [ 0.627365] IPVS: ipvs loaded. [ 0.627522] IPVS: [rr] scheduler registered. [ 0.627627] IPVS: [wrr] scheduler registered. [ 0.627706] IPVS: [lc] scheduler registered. [ 0.627775] IPVS: [wlc] scheduler registered. [ 0.627851] IPVS: [fo] scheduler registered. [ 0.627933] IPVS: [ovf] scheduler registered. [ 0.628022] IPVS: [lblc] scheduler registered. [ 0.628142] IPVS: [lblcr] scheduler registered. [ 0.628344] IPVS: [dh] scheduler registered. [ 0.628426] IPVS: [sh] scheduler registered. [ 0.628503] IPVS: [sed] scheduler registered. [ 0.628587] IPVS: [nq] scheduler registered. [ 0.628687] IPVS: ftp: loaded support on port[0] = 21 [ 0.628784] IPVS: [sip] pe registered. [ 0.629740] ipt_CLUSTERIP: ClusterIP Version 0.8 loaded successfully [ 0.630059] Initializing XFRM netlink socket [ 0.630362] NET: Registered protocol family 10 [ 0.631269] Segment Routing with IPv6 [ 0.631418] NET: Registered protocol family 17 [ 0.631630] 9pnet: Installing 9P2000 support [ 0.631814] NET: Registered protocol family 40 [ 0.635261] IPI shorthand broadcast: enabled [ 0.635461] sched_clock: Marking stable (625968261, 7069285)->(663162659, -30125113) [ 0.636263] EXT4-fs (pmem0p1): DAX enabled. Warning: EXPERIMENTAL, use at your own risk [ 0.637890] EXT4-fs (pmem0p1): mounted filesystem with ordered data mode. Opts: dax,data=ordered,errors=remount-ro [ 0.638066] VFS: Mounted root (ext4 filesystem) readonly on device 259:1. 
[ 0.638359] devtmpfs: mounted [ 0.639528] Freeing unused kernel image (initmem) memory: 828K [ 0.641031] Write protecting the kernel read-only data: 14336k [ 0.645210] Freeing unused kernel image (text/rodata gap) memory: 2044K [ 0.646708] Freeing unused kernel image (rodata/data gap) memory: 704K [ 0.646851] Run /sbin/init as init process [ 0.646922] with arguments: [ 0.646992] /sbin/init [ 0.647039] with environment: [ 0.647101] HOME=/ [ 0.647148] TERM=linux [ 0.716482] systemd[1]: systemd 246 running in system mode. (+PAM +AUDIT -SELINUX +IMA -APPARMOR -SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +ZSTD +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid) [ 0.717217] systemd[1]: No virtualization found in DMI [ 0.717613] systemd[1]: Virtualization found, CPUID=KVMKVMKVM [ 0.717806] systemd[1]: Found VM virtualization kvm [ 0.717965] systemd[1]: Detected virtualization kvm. [ 0.718169] systemd[1]: Detected architecture x86-64. [ 0.718828] systemd[1]: Mounting cgroup to /sys/fs/cgroup/memory of type cgroup with options memory. [ 0.719347] systemd[1]: Mounting cgroup to /sys/fs/cgroup/devices of type cgroup with options devices. [ 0.719848] systemd[1]: Mounting cgroup to /sys/fs/cgroup/freezer of type cgroup with options freezer. [ 0.720230] systemd[1]: Mounting cgroup to /sys/fs/cgroup/cpu,cpuacct of type cgroup with options cpu,cpuacct. [ 0.720895] systemd[1]: Mounting cgroup to /sys/fs/cgroup/pids of type cgroup with options pids. [ 0.721336] systemd[1]: Mounting cgroup to /sys/fs/cgroup/cpuset of type cgroup with options cpuset. [ 0.721955] systemd[1]: Mounting cgroup to /sys/fs/cgroup/blkio of type cgroup with options blkio. [ 0.722278] systemd[1]: Mounting cgroup to /sys/fs/cgroup/net_cls,net_prio of type cgroup with options net_cls,net_prio. [ 0.722642] systemd[1]: Mounting cgroup to /sys/fs/cgroup/perf_event of type cgroup with options perf_event. [ 0.724003] systemd[1]: No hostname configured. 
[ 0.724182] systemd[1]: Set hostname to . [ 0.724566] systemd[1]: Initializing machine ID from random generator. [ 0.724877] systemd[1]: Installed transient /etc/machine-id file. [ 0.726039] systemd[1]: Successfully added address 127.0.0.1 to loopback interface [ 0.726982] systemd[1]: Successfully added address ::1 to loopback interface [ 0.727176] systemd[1]: Successfully brought loopback interface up [ 0.727608] systemd[1]: Setting 'fs/file-max' to '9223372036854775807'. [ 0.727833] systemd[1]: Setting 'fs/nr_open' to '2147483640'. [ 0.727988] systemd[1]: Couldn't write fs.nr_open as 2147483640, halving it. [ 0.728185] systemd[1]: Setting 'fs/nr_open' to '1073741816'. [ 0.728327] systemd[1]: Successfully bumped fs.nr_open to 1073741816 [ 0.728664] systemd[1]: Found cgroup2 on /sys/fs/cgroup/unified, unified hierarchy for systemd controller [ 0.730432] systemd[1]: Found cgroup2 on /sys/fs/cgroup/unified, unified hierarchy for systemd controller [ 0.730670] systemd[1]: Unified cgroup hierarchy is located at /sys/fs/cgroup/unified. Controllers are on legacy hierarchies. [ 0.734107] systemd[1]: Got EBADF when using BPF_F_ALLOW_MULTI, which indicates it is supported. Yay! [ 0.734472] systemd[1]: Controller 'cpu' supported: yes [ 0.734608] systemd[1]: Controller 'cpuacct' supported: yes [ 0.734742] systemd[1]: Controller 'cpuset' supported: no [ 0.734862] systemd[1]: Controller 'io' supported: no [ 0.734976] systemd[1]: Controller 'blkio' supported: yes [ 0.735114] systemd[1]: Controller 'memory' supported: yes [ 0.735258] systemd[1]: Controller 'devices' supported: yes [ 0.735390] systemd[1]: Controller 'pids' supported: yes [ 0.735527] systemd[1]: Controller 'bpf-firewall' supported: yes [ 0.735657] systemd[1]: Controller 'bpf-devices' supported: yes [ 0.735867] systemd[1]: Set up TFD_TIMER_CANCEL_ON_SET timerfd. 
[ 0.736100] systemd[1]: Failed to stat /etc/localtime, ignoring: No such file or directory [ 0.736383] systemd[1]: /etc/localtime doesn't exist yet, watching /etc instead. [ 0.737001] systemd[1]: Enabling (yes) showing of status (commandline). [ 0.738346] systemd[1]: Successfully forked off '(sd-executor)' as PID 34. [ 0.740743] systemd[34]: Successfully forked off '(direxec)' as PID 35. [ 0.741919] systemd[34]: Successfully forked off '(direxec)' as PID 36. [ 0.743038] systemd[34]: Successfully forked off '(direxec)' as PID 37. [ 0.783060] systemd[34]: /usr/lib/systemd/system-generators/systemd-debug-generator succeeded. [ 0.785912] systemd[34]: /usr/lib/systemd/system-generators/systemd-run-generator succeeded. [ 0.786127] systemd[34]: /usr/lib/systemd/system-generators/systemd-cryptsetup-generator succeeded. [ 0.786432] systemd[1]: (sd-executor) succeeded. [ 0.786568] systemd[1]: Looking for unit files in (higher priority first): [ 0.786692] systemd[1]: /etc/systemd/system.control [ 0.786782] systemd[1]: /run/systemd/system.control [ 0.786858] systemd[1]: /run/systemd/transient [ 0.786913] systemd[1]: /run/systemd/generator.early [ 0.786983] systemd[1]: /etc/systemd/system [ 0.787081] systemd[1]: /etc/systemd/system.attached [ 0.787194] systemd[1]: /run/systemd/system [ 0.787299] systemd[1]: /run/systemd/system.attached [ 0.787400] systemd[1]: /run/systemd/generator [ 0.787487] systemd[1]: /usr/local/lib/systemd/system [ 0.787578] systemd[1]: /usr/lib/systemd/system [ 0.787659] systemd[1]: /run/systemd/generator.late [ 0.790075] systemd[1]: Modification times have changed, need to update cache. 
[ 0.790444] systemd[1]: unit_file_build_name_map: linked unit file: /run/systemd/generator.early/systemd-networkd.socket → /dev/null [ 0.790709] systemd[1]: unit_file_build_name_map: linked unit file: /run/systemd/generator.early/systemd-networkd.service → /dev/null [ 0.791265] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-backlight@.service [ 0.791482] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-exit.service [ 0.791702] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/initrd-root-device.target [ 0.791881] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/remote-fs.target [ 0.792100] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-ask-password-wall.service [ 0.792312] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/initrd-udevadm-cleanup-db.service [ 0.792548] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-tmpfiles-clean.service [ 0.792712] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/machine.slice [ 0.792852] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/time-set.target [ 0.793124] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/blockdev@.target [ 0.793281] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-repart.service [ 0.793466] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-timesyncd-fix-localstatedir.service [ 0.793684] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/shutdown.target [ 0.793855] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-bless-boot.service [ 0.794085] systemd[1]: unit_file_build_name_map: normal unit file: 
/usr/lib/systemd/system/quotaon.service [ 0.794277] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/console-getty.service [ 0.794461] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/network-pre.target [ 0.794632] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/cryptsetup.target [ 0.794813] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/container-getty@.service [ 0.795072] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-remount-fs.service [ 0.795342] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-poweroff.service [ 0.795529] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-userdbd.service [ 0.795718] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/suspend-then-hibernate.target [ 0.795929] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/sleep.target [ 0.796078] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/initrd-cleanup.service [ 0.796281] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-resolved.service [ 0.796455] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/dynamic-trust-store.service [ 0.796667] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-machine-id-commit.service [ 0.796877] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-rfkill.socket [ 0.797102] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/nss-user-lookup.target [ 0.797264] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/proc-sys-fs-binfmt_misc.mount [ 0.797468] systemd[1]: unit_file_build_name_map: normal unit file: 
/usr/lib/systemd/system/initrd-switch-root.service [ 0.797666] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/slices.target [ 0.797855] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/machines.target [ 0.798058] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/uuidd.service [ 0.798244] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-boot-system-token.service [ 0.798469] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/sound.target [ 0.798702] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/uuidd.socket [ 0.798901] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/initrd-root-fs.target [ 0.799080] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-journald-audit.socket [ 0.799283] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/iptables-save.service [ 0.799409] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-networkd-wait-online.service [ 0.799619] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/network-online.target [ 0.799756] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-ask-password-wall.path [ 0.800063] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/ip6tables-restore.service [ 0.800226] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-quotacheck.service [ 0.800416] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-journald@.socket [ 0.800578] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/exit.target [ 0.800716] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/hybrid-sleep.target [ 
0.800902] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/kexec.target [ 0.801089] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-rfkill.service [ 0.801257] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/var-swapfile.swap [ 0.801401] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-ask-password-console.path [ 0.801563] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-machined.service [ 0.801715] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-journald@.service [ 0.801857] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-kexec.service [ 0.801996] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/getty@.service [ 0.802114] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/sys-kernel-tracing.mount [ 0.802265] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/fstrim.timer [ 0.802394] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/local-fs-pre.target [ 0.802689] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/user.slice [ 0.802821] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/bluetooth.target [ 0.802971] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-journal-flush-msft.service [ 0.803131] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/nss-lookup.target [ 0.803283] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/system-update-pre.target [ 0.803574] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/rpcbind.target [ 0.803751] systemd[1]: unit_file_build_name_map: normal unit file: 
/usr/lib/systemd/system/systemd-initctl.service [ 0.803959] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-volatile-root.service [ 0.804170] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-fsck-root.service [ 0.804378] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/cryptsetup-pre.target [ 0.804559] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/timers.target [ 0.804708] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/getty.target [ 0.804871] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/initrd-switch-root.target [ 0.805272] systemd[1]: unit_file_build_name_map: alias: /usr/lib/systemd/system/dbus-org.freedesktop.resolve1.service → systemd-resolved.service [ 0.805435] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/halt.target [ 0.805664] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/iptables-restore.service [ 0.805863] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-hibernate-resume@.service [ 0.806092] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-homed.service [ 0.806317] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/chronyd.service [ 0.806481] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/sigpwr.target [ 0.806655] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-fsck@.service [ 0.806842] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-pstore.service [ 0.807048] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/system-systemd\x2dcryptsetup.slice [ 0.807284] systemd[1]: unit_file_build_name_map: normal unit file: 
/usr/lib/systemd/system/sockets.target [ 0.807550] systemd[1]: unit_file_build_name_map: alias: /usr/lib/systemd/system/dbus-org.freedesktop.locale1.service → systemd-localed.service [ 0.807743] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/user@.service [ 0.807920] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/getty-pre.target [ 0.808110] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-hibernate.service [ 0.808355] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-suspend.service [ 0.808539] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/paths.target [ 0.808695] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/initrd.target [ 0.808863] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-user-sessions.service [ 0.809334] systemd[1]: unit_file_build_name_map: alias: /usr/lib/systemd/system/dbus-org.freedesktop.login1.service → systemd-logind.service [ 0.809560] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/kmod-static-nodes.service [ 0.809837] systemd[1]: unit_file_build_name_map: alias: /usr/lib/systemd/system/dbus-org.freedesktop.machine1.service → systemd-machined.service [ 0.810105] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/initrd-fs.target [ 0.810823] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-suspend-then-hibernate.service [ 0.811044] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-coredump.socket [ 0.811232] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/poweroff.target [ 0.811483] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/initrd-parse-etc.service [ 0.811661] systemd[1]: 
unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-udev-settle.service [ 0.811854] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/local-fs.target [ 0.812018] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/serial-getty@.service [ 0.812233] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-vconsole-setup.service [ 0.812445] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/kata-agent.service [ 0.812604] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-time-wait-sync.service [ 0.812764] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/emergency.target [ 0.812941] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/debug-shell.service [ 0.813178] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/time-sync.target [ 0.813387] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-ask-password-console.service [ 0.813657] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/graphical.target [ 0.813848] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-tmpfiles-clean.timer [ 0.814052] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-hybrid-sleep.service [ 0.814362] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/proc-sys-fs-binfmt_misc.automount [ 0.814577] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/hibernate.target [ 0.814764] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/sysinit.target [ 0.814950] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-portabled.service [ 0.815138] systemd[1]: unit_file_build_name_map: 
normal unit file: /usr/lib/systemd/system/systemd-network-generator.service [ 0.815316] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/boot-complete.target [ 0.815471] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/usb-gadget.target [ 0.815662] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/ip6tables-save.service [ 0.815832] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/tmp.mount [ 0.816113] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/remote-cryptsetup.target [ 0.816342] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/kata-containers.target [ 0.816545] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/user-runtime-dir@.service [ 0.816874] systemd[1]: unit_file_build_name_map: alias: /usr/lib/systemd/system/dbus-org.freedesktop.hostname1.service → systemd-hostnamed.service [ 0.817399] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/remote-fs-pre.target [ 0.817656] systemd[1]: unit_file_build_name_map: alias: /usr/lib/systemd/system/default.target → graphical.target [ 0.817869] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-timedated.service [ 0.818110] systemd[1]: unit_file_build_name_map: alias: /usr/lib/systemd/system/dbus-org.freedesktop.timedate1.service → systemd-timedated.service [ 0.818379] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/emergency.service [ 0.818588] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/fstrim.service [ 0.818750] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-userdbd.socket [ 0.818939] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-binfmt.service [ 0.819118] systemd[1]: 
unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/systemd-logind.service [ 0.819324] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/reboot.target [ 0.819458] systemd[1]: unit_file_build_name_map: normal unit file: /usr/lib/systemd/system/final.target [ 1.075083] fuse_dax_mem_range_init(): dax mapped 1048576 pages. nr_ranges=2048 [ 1.089863] pci 0000:00:02.0: PCI bridge to [bus 01] [ 1.090062] pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] [ 1.091478] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] [ 1.092458] pci 0000:00:02.0: bridge window [mem 0x1f00000000-0x1f001fffff 64bit pref] [ 1.613971] pci 0000:00:02.0: PCI bridge to [bus 01] [ 1.614181] pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] [ 1.615687] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] [ 1.616700] pci 0000:00:02.0: bridge window [mem 0x1f00000000-0x1f001fffff 64bit pref] [ 1.621942] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality. [ 1.633037] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 1.693071] CPU1 has been hot-added [ 1.696850] x86: Booting SMP configuration: [ 1.697003] smpboot: Booting Node 0 Processor 1 APIC 0xf [ 1.698935] kvm-clock: cpu 1, msr 21f9041, secondary cpu clock [ 1.700113] smpboot: CPU 1 Converting physical 15 to logical package 1 [ 1.700114] smpboot: CPU 1 Converting physical 0 to logical die 1 [ 1.703358] KVM setup async PF for cpu 1 [ 1.703690] kvm-stealtime: cpu 1, msr 7da57e40 [ 1.707502] Will online and init hotplugged CPU: 1 [ 1.714606] Built 1 zonelists, mobility grouping on. 
Total pages: 540590 [ 2.066335] pci 0000:00:02.0: PCI bridge to [bus 01] [ 2.066584] pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] [ 2.068014] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] [ 2.069262] pci 0000:00:02.0: bridge window [mem 0x1f00000000-0x1f001fffff 64bit pref] [ 2.706478] pci 0000:00:02.0: PCI bridge to [bus 01] [ 2.706756] pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] [ 2.708315] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] [ 2.709402] pci 0000:00:02.0: bridge window [mem 0x1f00000000-0x1f001fffff 64bit pref] [ 2.780025] CPU2 has been hot-added [ 2.841832] smpboot: Booting Node 0 Processor 2 APIC 0xe [ 2.843397] kvm-clock: cpu 2, msr 21f9081, secondary cpu clock [ 2.844352] smpboot: CPU 2 Converting physical 14 to logical package 2 [ 2.844353] smpboot: CPU 2 Converting physical 0 to logical die 2 [ 2.844755] KVM setup async PF for cpu 2 [ 2.845102] kvm-stealtime: cpu 2, msr 7da97e40 [ 2.845466] Will online and init hotplugged CPU: 2 [ 3.189840] pci 0000:00:02.0: PCI bridge to [bus 01] [ 3.190084] pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] [ 3.192401] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] [ 3.193529] pci 0000:00:02.0: bridge window [mem 0x1f00000000-0x1f001fffff 64bit pref] [ 3.728423] pci 0000:00:02.0: PCI bridge to [bus 01] [ 3.728636] pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] [ 3.730150] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] [ 3.731148] pci 0000:00:02.0: bridge window [mem 0x1f00000000-0x1f001fffff 64bit pref] [ 3.800126] CPU3 has been hot-added [ 3.801697] smpboot: Booting Node 0 Processor 3 APIC 0xd [ 3.803369] kvm-clock: cpu 3, msr 21f90c1, secondary cpu clock [ 3.804357] smpboot: CPU 3 Converting physical 13 to logical package 3 [ 3.804359] smpboot: CPU 3 Converting physical 0 to logical die 3 [ 3.806873] KVM setup async PF for cpu 3 [ 3.807202] kvm-stealtime: cpu 3, msr 7dad7e40 [ 3.810731] Will online and init hotplugged CPU: 3 [ 4.290814] pci 
0000:00:02.0: PCI bridge to [bus 01] [ 4.291026] pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] [ 4.292689] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] [ 4.293948] pci 0000:00:02.0: bridge window [mem 0x1f00000000-0x1f001fffff 64bit pref] [ 4.818387] pci 0000:00:02.0: PCI bridge to [bus 01] [ 4.818585] pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] [ 4.820229] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] [ 4.821405] pci 0000:00:02.0: bridge window [mem 0x1f00000000-0x1f001fffff 64bit pref] [ 4.903138] CPU4 has been hot-added [ 4.906007] smpboot: Booting Node 0 Processor 4 APIC 0xc [ 4.908057] kvm-clock: cpu 4, msr 21f9101, secondary cpu clock [ 4.915889] smpboot: CPU 4 Converting physical 12 to logical package 4 [ 4.915891] smpboot: CPU 4 Converting physical 0 to logical die 4 [ 4.916354] KVM setup async PF for cpu 4 [ 4.916691] kvm-stealtime: cpu 4, msr 7db17e40 [ 4.917152] Will online and init hotplugged CPU: 4 [ 5.419139] pci 0000:00:02.0: PCI bridge to [bus 01] [ 5.419437] pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] [ 5.421128] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] [ 5.422352] pci 0000:00:02.0: bridge window [mem 0x1f00000000-0x1f001fffff 64bit pref] [ 6.048388] pci 0000:00:02.0: PCI bridge to [bus 01] [ 6.048576] pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] [ 6.050469] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] [ 6.057178] pci 0000:00:02.0: bridge window [mem 0x1f00000000-0x1f001fffff 64bit pref] [ 6.159126] CPU5 has been hot-added [ 6.162499] smpboot: Booting Node 0 Processor 5 APIC 0xb [ 6.164330] kvm-clock: cpu 5, msr 21f9141, secondary cpu clock [ 6.167827] smpboot: CPU 5 Converting physical 11 to logical package 5 [ 6.167828] smpboot: CPU 5 Converting physical 0 to logical die 5 [ 6.168294] KVM setup async PF for cpu 5 [ 6.168604] kvm-stealtime: cpu 5, msr 7db57e40 [ 6.171355] Will online and init hotplugged CPU: 5 [ 6.568608] pci 0000:00:02.0: PCI bridge to [bus 
01] [ 6.568824] pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] [ 6.571155] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] [ 6.572668] pci 0000:00:02.0: bridge window [mem 0x1f00000000-0x1f001fffff 64bit pref] [ 7.308640] pci 0000:00:02.0: PCI bridge to [bus 01] [ 7.308845] pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] [ 7.310454] pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] [ 7.311536] pci 0000:00:02.0: bridge window [mem 0x1f00000000-0x1f001fffff 64bit pref] [ 7.418770] CPU6 has been hot-added [ 7.420574] smpboot: Booting Node 0 Processor 6 APIC 0xa [ 7.422647] kvm-clock: cpu 6, msr 21f9181, secondary cpu clock [ 7.423838] smpboot: CPU 6 Converting physical 10 to logical package 6 [ 7.423839] smpboot: CPU 6 Converting physical 0 to logical die 6 [ 7.428192] KVM setup async PF for cpu 6 [ 7.428500] kvm-stealtime: cpu 6, msr 7db97e40 [ 7.429685] Will online and init hotplugged CPU: 6 [ 46.188007] systemd[1]: Bus private-bus-connection: changing state UNSET → OPENING [ 46.188314] systemd[1]: Bus private-bus-connection: changing state OPENING → AUTHENTICATING [ 46.188658] systemd[1]: Registering bus object implementation for path=/org/freedesktop/systemd1 iface=org.freedesktop.systemd1.Manager [ 46.189270] systemd[1]: Registering bus object implementation for path=/org/freedesktop/systemd1/job iface=org.freedesktop.systemd1.Job [ 46.189512] systemd[1]: Registering bus object implementation for path=/org/freedesktop/systemd1/unit iface=org.freedesktop.systemd1.Unit [ 46.189864] systemd[1]: Registering bus object implementation for path=/org/freedesktop/systemd1/unit iface=org.freedesktop.systemd1.Automount [ 46.196674] systemd[1]: Registering bus object implementation for path=/org/freedesktop/systemd1/unit iface=org.freedesktop.systemd1.Device [ 46.201855] systemd[1]: Registering bus object implementation for path=/org/freedesktop/systemd1/unit iface=org.freedesktop.systemd1.Mount [ 46.202372] systemd[1]: Registering bus object 
implementation for path=/org/freedesktop/systemd1/unit iface=org.freedesktop.systemd1.Path [ 46.202582] systemd[1]: Registering bus object implementation for path=/org/freedesktop/systemd1/unit iface=org.freedesktop.systemd1.Scope [ 46.207726] systemd[1]: Registering bus object implementation for path=/org/freedesktop/systemd1/unit iface=org.freedesktop.systemd1.Service [ 46.209221] systemd[1]: Registering bus object implementation for path=/org/freedesktop/systemd1/unit iface=org.freedesktop.systemd1.Slice [ 46.209679] systemd[1]: Registering bus object implementation for path=/org/freedesktop/systemd1/unit iface=org.freedesktop.systemd1.Socket [ 46.210176] systemd[1]: Registering bus object implementation for path=/org/freedesktop/systemd1/unit iface=org.freedesktop.systemd1.Swap [ 46.210581] systemd[1]: Registering bus object implementation for path=/org/freedesktop/systemd1/unit iface=org.freedesktop.systemd1.Target [ 46.210848] systemd[1]: Registering bus object implementation for path=/org/freedesktop/systemd1/unit iface=org.freedesktop.systemd1.Timer [ 46.211113] systemd[1]: Registering bus object implementation for path=/org/freedesktop/LogControl1 iface=org.freedesktop.LogControl1 [ 46.211309] systemd[1]: Accepted new private connection. [ 46.211960] systemd[1]: Bus private-bus-connection: changing state AUTHENTICATING → RUNNING [ 46.212216] systemd[1]: Got message type=method_call sender=n/a destination=org.freedesktop.systemd1 path=/org/freedesktop/systemd1/unit/chronyd_2eservice interface=org.freedesktop.DBus.Properties member=GetAll cookie=1 reply_cookie=0 signature=s error-name=n/a error-message=n/a [ 46.213222] systemd[1]: Failed to read pids.max attribute of cgroup root, ignoring: No data available [ 46.214003] systemd[1]: Found unit chronyd.service at /usr/lib/systemd/system/chronyd.service (regular file) [ 46.215404] systemd[1]: Preset files don't specify rule for chronyd.service. Enabling. 
[ 46.215696] systemd[1]: Sent message type=method_return sender=org.freedesktop.systemd1 destination=n/a path=n/a interface=n/a member=n/a cookie=1 reply_cookie=1 signature=a{sv} error-name=n/a error-message=n/a [ 46.217156] systemd[1]: Bus private-bus-connection: changing state RUNNING → CLOSING [ 46.217410] systemd[1]: Bus private-bus-connection: changing state CLOSING → CLOSED [ 46.217654] systemd[1]: Got disconnect on private connection. ```

/var/lib/osbuilder/osbuilder.yaml:

```yaml
---
osbuilder:
  url: "https://github.com/kata-containers/osbuilder"
  version: "unknown"
rootfs-creation-time: "2020-10-14T16:40:39.891958588+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
  name: "Clear"
  version: "33830"
  packages:
    default:
      - "chrony"
      - "iptables-bin"
      - "kmod-bin"
      - "libudev0-shim"
      - "systemd"
      - "util-linux-bin"
    extra:
      - "bash"
      - "coreutils"
agent:
  url: "https://github.com/kata-containers/agent"
  name: "kata-agent"
  version: "1.12.0-alpha1-f9eab0fe9adb34e4f9f4a11f42a3eff983fd0659"
  agent-is-init-daemon: "no"
```
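For anyone reproducing this: both symptoms can be confirmed in one go from the guest debug console. This is a minimal sketch; it only relies on the paths already shown in the `systemctl status chronyd` output above.

```shell
# Run inside the Kata guest (debug console).
# 1) Is the PTP clock device present? chronyd's unit gates on it via
#    ConditionPathExists=/dev/ptp0, so a missing node means chronyd never starts.
if [ -e /dev/ptp0 ]; then
    echo "/dev/ptp0 present"
else
    echo "/dev/ptp0 missing"
fi

# 2) Did any PTP clock register at all? The directory is empty (or absent)
#    when no PTP driver bound in the guest kernel.
ls /sys/class/ptp/ 2>/dev/null || echo "no ptp class devices"
```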

Related to #1279
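For context, #1279 is the change that made the guest sync its clock from the host via the KVM PTP clock. If I'm reading osbuilder right, the chrony configuration it writes into the guest image is reduced to roughly the following (approximate, not a verbatim copy), which is why chronyd has nothing to fall back on when the device is missing:

```
# Guest /etc/chrony.conf as written by osbuilder (approximate):
# sync solely from the host's KVM PTP hardware clock.
refclock PHC /dev/ptp0 poll 3 dpoll -2 offset 0
```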

EDIT: I should note that I'm running 1.11.3 with QEMU built from https://gitlab.com/virtio-fs/qemu/-/tree/qemu5.0-virtiofs-dax because of #2795. Since 1.11.3 doesn't ship with QEMU 5.0, this issue wouldn't have been caught by the 1.11.3 test suite. This isn't an officially released combination, but I'm hoping I can still get some help figuring it out, because it will probably affect 1.12.
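Since the guest kernel here is a custom virtio-fs-dev build, it's worth ruling out a kernel-side cause: /dev/ptp0 only appears when the guest kernel includes the kvm-ptp driver and the host KVM exposes the clock. A rough check, assuming the usual places a kernel exposes its build config:

```shell
# Inside the guest: was the kvm-ptp driver built?
# /dev/ptp0 is created by CONFIG_PTP_1588_CLOCK_KVM (the ptp_kvm driver).
{ zcat /proc/config.gz 2>/dev/null \
    || cat /boot/config-"$(uname -r)" 2>/dev/null; } \
    | grep PTP_1588_CLOCK || echo "kernel config not exposed"

# And did the driver actually probe? (no output means nothing registered)
dmesg | grep -i ptp || true
```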

evanfoster commented 3 years ago
Show kata-collect-data.sh details

# Meta details Running `kata-collect-data.sh` version `1.11.3 (commit d6883bdaeeb98a8921ac215533b780d697d6883a)` at `2020-10-14.17:04:48.149973033+0000`. --- Runtime is `/opt/kata/bin/kata-runtime`. # `kata-env` Output of "`/opt/kata/bin/kata-runtime kata-env`": ```toml [Meta] Version = "1.0.24" [Runtime] Debug = true Trace = false DisableGuestSeccomp = true DisableNewNetNs = false SandboxCgroupOnly = false Path = "/opt/kata/bin/kata-runtime" [Runtime.Version] OCI = "1.0.1-dev" [Runtime.Version.Version] Semver = "1.11.3" Major = 1 Minor = 11 Patch = 3 Commit = "d6883bdaeeb98a8921ac215533b780d697d6883a" [Runtime.Config] Path = "/etc/kata-containers/configuration.toml" [Hypervisor] MachineType = "pc" Version = "QEMU emulator version 5.0.0 (kata-static)\nCopyright (c) 2003-2020 Fabrice Bellard and the QEMU Project developers" Path = "/opt/kata/bin/qemu-virtiofs-system-x86_64" BlockDeviceDriver = "virtio-scsi" EntropySource = "/dev/urandom" SharedFS = "virtio-fs" VirtioFSDaemon = "/opt/kata/bin/virtiofsd" Msize9p = 8192 MemorySlots = 50 PCIeRootPort = 0 HotplugVFIOOnRootBus = false Debug = true UseVSock = true [Image] Path = "/opt/kata/share/kata-containers/kata-containers-image_clearlinux_1.11.3_agent_debug.img" [Kernel] Path = "/opt/kata/share/kata-containers/vmlinuz-virtio-fs-dev-74-virtiofs" Parameters = "systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket scsi_mod.scan=none agent.log=debug agent.debug_console" [Initrd] Path = "" [Proxy] Type = "noProxy" Path = "" Debug = false [Proxy.Version] Semver = "" Major = 0 Minor = 0 Patch = 0 Commit = "" [Shim] Type = "kataShim" Path = "/opt/kata/libexec/kata-containers/kata-shim" Debug = true [Shim.Version] Semver = "1.11.2-5ccc2cdabbb5fed33124c0b87ccecd058f7adc19" Major = 1 Minor = 11 Patch = 2 Commit = "<>" [Agent] Type = "kata" Debug = true Trace = false TraceMode = "" TraceType = "" [Host] Kernel = "4.19.143-flatcar" Architecture = "amd64" VMContainerCapable = 
true SupportVSocks = true [Host.Distro] Name = "Flatcar Container Linux by Kinvolk" Version = "2512.4.0" [Host.CPU] Vendor = "GenuineIntel" Model = "Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz" [Netmon] Path = "/opt/kata/libexec/kata-containers/kata-netmon" Debug = true Enable = false [Netmon.Version] Semver = "1.11.3" Major = 1 Minor = 11 Patch = 3 Commit = "<>" ``` --- # Runtime config files ## Runtime default config files ``` /etc/kata-containers/configuration.toml /opt/kata/share/defaults/kata-containers/configuration.toml ``` ## Runtime config file contents Output of "`cat "/etc/kata-containers/configuration.toml"`": ```toml # Copyright (c) 2017-2019 Intel Corporation # # SPDX-License-Identifier: Apache-2.0 # # XXX: WARNING: this file is auto-generated. # XXX: # XXX: Source file: "cli/config/configuration-qemu-virtiofs.toml.in" # XXX: Project: # XXX: Name: Kata Containers # XXX: Type: kata [hypervisor.qemu] path = "/opt/kata/bin/qemu-virtiofs-system-x86_64" kernel = "/opt/kata/share/kata-containers/vmlinuz-virtiofs.container" image = "/opt/kata/share/kata-containers/kata-containers.img" machine_type = "pc" # Optional space-separated list of options to pass to the guest kernel. # For example, use `kernel_params = "vsyscall=emulate"` if you are having # trouble running pre-2.15 glibc. # # WARNING: - any parameter specified here will take priority over the default # parameter value of the same name used to start the virtual machine. # Do not set values here unless you understand the impact of doing so as you # may stop the virtual machine from booting. # To see the list of default parameters, enable hypervisor debug, create a # container and look for 'default-kernel-parameters' log entries. kernel_params = "agent.debug_console" # Path to the firmware. # If you want that qemu uses the default firmware leave this option empty firmware = "" # Machine accelerators # comma-separated list of machine accelerators to pass to the hypervisor. 
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"` machine_accelerators="" # Default number of vCPUs per SB/VM: # unspecified or 0 --> will be set to 1 # < 0 --> will be set to the actual number of physical cores # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores default_vcpus = 1 # Default maximum number of vCPUs per SB/VM: # unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # > 0 <= number of physical cores --> will be set to the specified number # > number of physical cores --> will be set to the actual number of physical cores or to the maximum number # of vCPUs supported by KVM if that number is exceeded # WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when # the actual number of physical cores is greater than it. # WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU # the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs # can be added to a SB/VM, but the memory footprint will be big. Another example, with # `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of # vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable, # unless you know what are you doing. default_maxvcpus = 0 # Bridges can be used to hot plug devices. # Limitations: # * Currently only pci bridges are supported # * Until 30 devices per bridge can be hot plugged. # * Until 5 PCI bridges can be cold plugged per VM. 
# This limitation could be a bug in qemu or in the kernel # Default number of bridges per SB/VM: # unspecified or 0 --> will be set to 1 # > 1 <= 5 --> will be set to the specified number # > 5 --> will be set to 5 default_bridges = 1 # Default memory size in MiB for SB/VM. # If unspecified then it will be set 2048 MiB. default_memory = 2048 # # Default memory slots per SB/VM. # If unspecified then it will be set 10. # This is will determine the times that memory will be hotadded to sandbox/VM. memory_slots = 50 # The size in MiB will be plused to max memory of hypervisor. # It is the memory address space for the NVDIMM devie. # If set block storage driver (block_device_driver) to "nvdimm", # should set memory_offset to the size of block device. # Default 0 #memory_offset = 0 # Disable block device from being used for a container's rootfs. # In case of a storage driver like devicemapper where a container's # root file system is backed by a block device, the block device is passed # directly to the hypervisor for performance reasons. # This flag prevents the block device from being passed to the hypervisor, # 9pfs is used instead to pass the rootfs. disable_block_device_use = false # Shared file system type: # - virtio-fs (default) # - virtio-9p shared_fs = "virtio-fs" # Path to vhost-user-fs daemon. virtio_fs_daemon = "/opt/kata/bin/virtiofsd" # Default size of DAX cache in MiB virtio_fs_cache_size = 4096 # Extra args for virtiofsd daemon # # Format example: # ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"] # # see `virtiofsd -h` for possible options. virtio_fs_extra_args = [] # Cache mode: # # - none # Metadata, data, and pathname lookup are not cached in guest. They are # always fetched from host and any changes are immediately pushed to host. # # - auto # Metadata and pathname lookup cache expires after a configured amount of # time (default is 1 second). Data is cached while the file is open (close # to open consistency). 
#
#  - always
#    Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "auto"

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"

# Specifies whether cache-related options will be set for block devices.
# Default false
#block_device_cache_set = true

# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true

# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false

# Enable pre-allocation of VM RAM, default false
# Enabling this will result in lower container density,
# as all of the memory will be allocated and locked.
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable.
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory pre-allocation.
#enable_hugepages = true

# Enable vhost-user storage device, default false
# Enabling this will result in some Linux reserved block type
# major range 240-254 being chosen to represent vhost-user devices.
enable_vhost_user_store = false

# The base directory specifically used for vhost-user devices.
# Its sub-path "block" is used for block devices; "block/sockets" is
# where we expect vhost-user sockets to live; "block/devices" is where
# simulated block device nodes for vhost-user devices live.
vhost_user_store_path = "/var/run/kata-containers/vhost-user"

# Enable file-based guest memory support. The default is an empty string, which
# disables this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = ""

# Enable swap of VM memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true.
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top of a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes
# used for the 9p packet payload.
#msize_9p = 8192

# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started; otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
use_vsock = true

# If false and nvdimm is supported, use an nvdimm device to plug the guest image.
# Otherwise a virtio-block device is used.
# Default false
#disable_image_nvdimm = true

# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on the root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge. This value is valid for the "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true

# If the vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
# security (vhost-net runs in ring 0) for network I/O performance.
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG).
# /dev/urandom and /dev/random are the two main options.
# Be aware that /dev/random is a blocking source of entropy. If the host
# runs out of entropy, the VM's boot time will increase, leading to
# startup timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"

# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,poststart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"

[factory]
# VM templating support. Once enabled, new VMs are created from a template
# using VM cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it read-only.
# It helps speed up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true

# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"

# The number of caches of VMCache:
# unspecified or == 0 --> VMCache is disabled
# > 0                 --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before they are used.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through a Unix socket. The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert a VM to gRPC format and transport it when it gets
# a request from a client.
# Factory grpccache is the VMCache client. It will request a gRPC-format
# VM and convert it back to a VM. If the VMCache function is enabled,
# kata-runtime will request a VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0

# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"

[proxy.kata]
path = "/opt/kata/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[shim.kata]
path = "/opt/kata/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
enable_debug = true

# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace.
# Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true

[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
enable_debug = true

# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicitly with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
#   setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
#   will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
#   full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"

# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
# The following example can be used to load two kernel modules with parameters:
#  - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
# The first word is considered the module name and the rest its parameters.
# The container will not be started when:
# * A kernel module is specified and the modprobe command is not installed in the guest
#   or it fails to load the module.
# * The module is not available in the guest or it doesn't meet the guest kernel
#   requirements, such as architecture and version.
#
kernel_modules=[]

[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of additional
# networks being added to the existing network namespace after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true

# Specify the path to the netmon binary.
path = "/opt/kata/libexec/kata-containers/kata-netmon"

# If enabled, netmon messages will be sent to the system log
# (default: disabled)
enable_debug = true

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to
# the container network interface
# Options:
#
#   - bridged (Deprecated)
#     Uses a Linux bridge to interconnect the container interface to
#     the VM. Works for most cases except macvlan and ipvlan.
#     ***NOTE: This feature has been deprecated with plans to remove it
#     in the future. Please use the other network models listed below.
#
#   - macvtap
#     Used when the container network interface can be bridged using
#     macvtap.
#
#   - none
#     Used with a customized network. Only creates a tap device. No veth pair.
#
#   - tcfilter
#     Uses tc filter rules to redirect traffic from the network interface
#     provided by the plugin to a tap interface connected to the VM.
#
internetworking_model="tcfilter"

# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest.
# (default: true)
disable_guest_seccomp=true

# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true

# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts on your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`.
# `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`.
# The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`.
# (default: false)
#disable_new_netns = true

# If enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=false

# If enabled, the runtime will not create Kubernetes emptyDir mounts on the guest filesystem. Instead, emptyDir mounts will
# be created on the host and shared via 9p. This is far slower, but allows sharing of files from host to guest.
disable_guest_empty_dir = true

# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production;
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# (default: [])
experimental=[]
```

---

# KSM throttler

## version

Output of "` --version`":

```
/opt/kata/bin/kata-collect-data.sh: line 178: --version: command not found
```

## systemd service

# Image details

```yaml
---
osbuilder:
  url: "https://github.com/kata-containers/osbuilder"
  version: "unknown"
rootfs-creation-time: "2020-10-14T16:40:39.891958588+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
  name: "Clear"
  version: "33830"
  packages:
    default:
      - "chrony"
      - "iptables-bin"
      - "kmod-bin"
      - "libudev0-shim"
      - "systemd"
      - "util-linux-bin"
    extra:
      - "bash"
      - "coreutils"
agent:
  url: "https://github.com/kata-containers/agent"
  name: "kata-agent"
  version: "1.12.0-alpha1-f9eab0fe9adb34e4f9f4a11f42a3eff983fd0659"
  agent-is-init-daemon: "no"
```

---

# Initrd details

No initrd

---

# Logfiles

## Runtime logs

No recent runtime problems found in system journal.

## Proxy logs

No recent proxy problems found in system journal.

## Shim logs

No recent shim problems found in system journal.

## Throttler logs

No recent throttler problems found in system journal.

---

# Container manager details

Have `docker`

## Docker

Output of "`docker version`":

Removed, Docker isn't being used.

Output of "`docker info`":

Removed, Docker isn't being used.

Output of "`systemctl show docker`":

Removed, Docker isn't being used.
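The guest image above ships `chrony`, whose service is gated on `ConditionPathExists=/dev/ptp0`. A minimal sketch of a check that can be run from a debug console (the `check_ptp` helper is illustrative, not part of `kata-collect-data.sh`; the `ptp_kvm` remark assumes the guest kernel builds it as a module):

```shell
#!/bin/sh
# Report whether a PTP clock device exists, mirroring chronyd's
# ConditionPathExists=/dev/ptp0 start condition. The device path is a
# parameter so the check can be pointed at other nodes if needed.
check_ptp() {
    dev="${1:-/dev/ptp0}"
    if [ -e "$dev" ]; then
        echo "present: $dev"
    else
        echo "missing: $dev"
    fi
}

check_ptp /dev/ptp0
# If it is missing, the guest kernel may lack ptp_kvm support; when built
# as a module, loading it should create the device:
#   modprobe ptp_kvm
```

On a healthy guest this prints `present: /dev/ptp0`; on the affected 1.11.3 VMs it prints the missing line, matching the failed chronyd start condition above.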
Have `kubectl` ## Kubernetes Output of "`kubectl version`": ``` Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"} Error from server (NotFound): the server could not find the requested resource ``` Output of "`kubectl config view`": ``` apiVersion: v1 clusters: null contexts: null current-context: "" kind: Config preferences: {} users: null ``` Output of "`systemctl show kubelet`": ``` Type=simple Restart=on-failure NotifyAccess=none RestartUSec=5s TimeoutStartUSec=1min 30s TimeoutStopUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=2522 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestamp=Wed 2020-10-14 15:45:59 UTC ExecMainStartTimestampMonotonic=116443306 ExecMainExitTimestampMonotonic=0 ExecMainPID=2522 ExecMainCode=0 ExecMainStatus=0 ExecStartPre={ path=/bin/bash ; argv[]=/bin/bash -c if [[ $(/bin/mount | /bin/grep /sys/fs/bpf -c) -eq 0 ]]; then /bin/mount bpffs /sys/fs/bpf -t bpf; fi ; ignore_errors=no ; start_time=[Wed 2020-10-14 15:45:59 UTC] ; stop_time=[Wed 2020-10-14 15:45:59 UTC] ; pid=2511 ; code=exited ; status=0 } ExecStartPre={ path=/bin/bash ; argv[]=/bin/bash -c until [[ $(hostname) != 'localhost' ]]; do sleep 1; done ; ignore_errors=no ; start_time=[Wed 2020-10-14 15:45:59 UTC] ; stop_time=[Wed 2020-10-14 15:45:59 UTC] ; pid=2516 ; code=exited ; status=0 } ExecStartPre={ path=/bin/bash ; argv[]=/bin/bash /opt/ethos/bin/kubelet-master-setup.sh ; ignore_errors=yes ; start_time=[Wed 2020-10-14 15:45:59 UTC] ; stop_time=[Wed 2020-10-14 15:45:59 UTC] ; pid=2520 ; code=exited ; status=127 } ExecStart={ path=/opt/bin/kubelet ; argv[]=/opt/bin/kubelet 
--cert-dir=/etc/kubernetes/certs --config=/etc/kubernetes/kubelet.yaml --image-pull-progress-deadline=10m --kubeconfig=/etc/kubernetes/kubeconfig/kubelet.kubeconfig --network-plugin=cni --root-dir=/var/lib/kubelet --v=2 $KUBELET_ARGS ; ignore_errors=no ; start_time=[Wed 2020-10-14 15:45:59 UTC] ; stop_time=[n/a] ; pid=2522 ; code=(null) ; status=0/0 } Slice=system.slice ControlGroup=/system.slice/kubelet.service MemoryCurrent=273403904 CPUUsageNSec=821967261120 TasksCurrent=46 IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=no CPUAccounting=yes CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=32767 IPAccounting=no EnvironmentFiles=/run/ethos/kubelet-args (ignore_errors=no) UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=524288 LimitNOFILESoft=1024 LimitAS=infinity LimitASSoft=infinity LimitNPROC=257504 LimitNPROCSoft=257504 LimitMEMLOCK=65536 LimitMEMLOCKSoft=65536 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=257504 LimitSIGPENDINGSoft=257504 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=0 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 
CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private KillMode=process KillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 Id=kubelet.service Names=kubelet.service Requires=sysinit.target download-certificates.service crio.service configure-docker.service docker.service system.slice configure-kubelet.service coreos-metadata.service Wants=configure-kubelet.service WantedBy=multi-user.target Conflicts=shutdown.target Before=multi-user.target shutdown.target After=nvidia-driver.service systemd-journald.socket docker.service 
configure-docker.service basic.target sysinit.target mnt-nvme.mount coreos-metadata.service configure-kubelet.service system.slice download-certificates.service crio.service Description=Kubernetes Kubelet LoadState=loaded ActiveState=active SubState=running FragmentPath=/etc/systemd/system/kubelet.service DropInPaths=/etc/systemd/system/kubelet.service.d/11-ecr-credentials.conf UnitFileState=enabled UnitFilePreset=enabled StateChangeTimestamp=Wed 2020-10-14 15:45:59 UTC StateChangeTimestampMonotonic=116443375 InactiveExitTimestamp=Wed 2020-10-14 15:45:59 UTC InactiveExitTimestampMonotonic=116419567 ActiveEnterTimestamp=Wed 2020-10-14 15:45:59 UTC ActiveEnterTimestampMonotonic=116443375 ActiveExitTimestampMonotonic=0 InactiveEnterTimestampMonotonic=0 CanStart=yes CanStop=yes CanReload=no CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=Wed 2020-10-14 15:45:59 UTC ConditionTimestampMonotonic=116417622 AssertTimestamp=Wed 2020-10-14 15:45:59 UTC AssertTimestampMonotonic=116417622 Transient=no Perpetual=no StartLimitIntervalUSec=10s StartLimitBurst=5 StartLimitAction=none FailureAction=none FailureActionExitStatus=-1 SuccessAction=none SuccessActionExitStatus=-1 InvocationID=bc67b4c0688e414f9dd551bced33f50f CollectMode=inactive ``` Have `crio` ## crio Output of "`crio --version`": ``` crio version 1.17.5 commit: "6b97f815cfbdf680a7ddf2435291fb7e49776ef1" ``` Output of "`systemctl show crio`": ``` Type=notify Restart=always NotifyAccess=main RestartUSec=100ms TimeoutStartUSec=infinity TimeoutStopUSec=1min 30s RuntimeMaxUSec=infinity WatchdogUSec=0 WatchdogTimestampMonotonic=0 RootDirectoryStartOnly=no RemainAfterExit=no GuessMainPID=yes MainPID=2291 ControlPID=0 FileDescriptorStoreMax=0 NFileDescriptorStore=0 
StatusErrno=0 Result=success UID=[not set] GID=[not set] NRestarts=0 ExecMainStartTimestamp=Wed 2020-10-14 15:45:02 UTC ExecMainStartTimestampMonotonic=59432787 ExecMainExitTimestampMonotonic=0 ExecMainPID=2291 ExecMainCode=0 ExecMainStatus=0 ExecStartPre={ path=/opt/ethos/bin/crio-setup.sh ; argv[]=/opt/ethos/bin/crio-setup.sh ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } ExecStart={ path=/opt/bin/crio ; argv[]=/opt/bin/crio $CRIO_FLAGS ; ignore_errors=no ; start_time=[Wed 2020-10-14 15:45:02 UTC] ; stop_time=[n/a] ; pid=2291 ; code=(null) ; status=0/0 } ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 } Slice=system.slice ControlGroup=/system.slice/crio.service MemoryCurrent=7588810752 CPUUsageNSec=512158800560 TasksCurrent=239 IPIngressBytes=18446744073709551615 IPIngressPackets=18446744073709551615 IPEgressBytes=18446744073709551615 IPEgressPackets=18446744073709551615 Delegate=no CPUAccounting=yes CPUWeight=[not set] StartupCPUWeight=[not set] CPUShares=[not set] StartupCPUShares=[not set] CPUQuotaPerSecUSec=infinity IOAccounting=no IOWeight=[not set] StartupIOWeight=[not set] BlockIOAccounting=no BlockIOWeight=[not set] StartupBlockIOWeight=[not set] MemoryAccounting=yes MemoryMin=0 MemoryLow=0 MemoryHigh=infinity MemoryMax=infinity MemorySwapMax=infinity MemoryLimit=infinity DevicePolicy=auto TasksAccounting=yes TasksMax=infinity IPAccounting=no EnvironmentFiles=/etc/crio/crio.env (ignore_errors=no) UMask=0022 LimitCPU=infinity LimitCPUSoft=infinity LimitFSIZE=infinity LimitFSIZESoft=infinity LimitDATA=infinity LimitDATASoft=infinity LimitSTACK=infinity LimitSTACKSoft=8388608 LimitCORE=infinity LimitCORESoft=infinity LimitRSS=infinity LimitRSSSoft=infinity LimitNOFILE=1048576 LimitNOFILESoft=1048576 LimitAS=infinity LimitASSoft=infinity LimitNPROC=1048576 LimitNPROCSoft=1048576 LimitMEMLOCK=65536 
LimitMEMLOCKSoft=65536 LimitLOCKS=infinity LimitLOCKSSoft=infinity LimitSIGPENDING=257504 LimitSIGPENDINGSoft=257504 LimitMSGQUEUE=819200 LimitMSGQUEUESoft=819200 LimitNICE=0 LimitNICESoft=0 LimitRTPRIO=0 LimitRTPRIOSoft=0 LimitRTTIME=infinity LimitRTTIMESoft=infinity OOMScoreAdjust=-999 Nice=0 IOSchedulingClass=0 IOSchedulingPriority=0 CPUSchedulingPolicy=0 CPUSchedulingPriority=0 TimerSlackNSec=50000 CPUSchedulingResetOnFork=no NonBlocking=no StandardInput=null StandardInputData= StandardOutput=journal StandardError=inherit TTYReset=no TTYVHangup=no TTYVTDisallocate=no SyslogPriority=30 SyslogLevelPrefix=yes SyslogLevel=6 SyslogFacility=3 LogLevelMax=-1 LogRateLimitIntervalUSec=0 LogRateLimitBurst=0 SecureBits=0 CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend AmbientCapabilities= DynamicUser=no RemoveIPC=no MountFlags= PrivateTmp=no PrivateDevices=no ProtectKernelTunables=no ProtectKernelModules=no ProtectControlGroups=no PrivateNetwork=no PrivateUsers=no PrivateMounts=no ProtectHome=no ProtectSystem=no SameProcessGroup=no UtmpMode=init IgnoreSIGPIPE=yes NoNewPrivileges=no SystemCallErrorNumber=0 LockPersonality=no RuntimeDirectoryPreserve=no RuntimeDirectoryMode=0755 StateDirectoryMode=0755 CacheDirectoryMode=0755 LogsDirectoryMode=0755 ConfigurationDirectoryMode=0755 MemoryDenyWriteExecute=no RestrictRealtime=no RestrictNamespaces=no MountAPIVFS=no KeyringMode=private KillMode=control-group KillSignal=15 FinalKillSignal=9 SendSIGKILL=yes SendSIGHUP=no WatchdogSignal=6 
Id=crio.service Names=crio.service Requires=system.slice cri-logging-driver-watch.service sysinit.target network-online.target lvm2-lvmetad.service RequiredBy=kubelet.service WantedBy=crio-shutdown.service multi-user.target Conflicts=shutdown.target Before=nvidia-driver.service kubelet.service crio-shutdown.service multi-user.target shutdown.target After=system.slice cri-logging-driver-watch.service systemd-journald.socket sysinit.target basic.target network-online.target lvm2-lvmetad.service Documentation=https://github.com/kubernetes-sigs/cri-o/blob/master/contrib/systemd/crio.service Description=Open Container Initiative Daemon LoadState=loaded ActiveState=active SubState=running FragmentPath=/etc/systemd/system/crio.service UnitFileState=enabled UnitFilePreset=enabled StateChangeTimestamp=Wed 2020-10-14 15:45:04 UTC StateChangeTimestampMonotonic=61321291 InactiveExitTimestamp=Wed 2020-10-14 15:44:45 UTC InactiveExitTimestampMonotonic=41756068 ActiveEnterTimestamp=Wed 2020-10-14 15:45:04 UTC ActiveEnterTimestampMonotonic=61321291 ActiveExitTimestampMonotonic=0 InactiveEnterTimestampMonotonic=0 CanStart=yes CanStop=yes CanReload=yes CanIsolate=no StopWhenUnneeded=no RefuseManualStart=no RefuseManualStop=no AllowIsolate=no DefaultDependencies=yes OnFailureJobMode=replace IgnoreOnIsolate=no NeedDaemonReload=no JobTimeoutUSec=infinity JobRunningTimeoutUSec=infinity JobTimeoutAction=none ConditionResult=yes AssertResult=yes ConditionTimestamp=Wed 2020-10-14 15:44:45 UTC ConditionTimestampMonotonic=41754284 AssertTimestamp=Wed 2020-10-14 15:44:45 UTC AssertTimestampMonotonic=41754285 Transient=no Perpetual=no StartLimitIntervalUSec=10s StartLimitBurst=5 StartLimitAction=none FailureAction=none FailureActionExitStatus=-1 SuccessAction=none SuccessActionExitStatus=-1 InvocationID=48c280798e28437b8adaae4cf51d03d9 CollectMode=inactive ``` Output of "`cat /etc/crio/crio.conf`": ``` # The CRI-O configuration file specifies all of the available configuration # options and 
command-line flags for the crio(8) OCI Kubernetes Container Runtime # daemon, but in a TOML format that can be more easily modified and versioned. # # Please refer to crio.conf(5) for details of all configuration options. # CRI-O supports partial configuration reload during runtime, which can be # done by sending SIGHUP to the running process. Currently supported options # are explicitly mentioned with: 'This option supports live configuration # reload'. # CRI-O reads its storage defaults from the containers-storage.conf(5) file # located at /etc/containers/storage.conf. Modify this storage configuration if # you want to change the system's defaults. If you want to modify storage just # for CRI-O, you can change the storage configuration options here. [crio] # Path to the "root directory". CRI-O stores all of its data, including # containers images, in this directory. #root = "/home/sascha/.local/share/containers/storage" # Path to the "run directory". CRI-O stores all of its state in this directory. #runroot = "/tmp/1000" # Storage driver used to manage the storage of images and containers. Please # refer to containers-storage.conf(5) to see all available storage drivers. #storage_driver = "vfs" # List to pass options to the storage driver. Please refer to # containers-storage.conf(5) to see all available storage options. #storage_option = [ #] # If set to false, in-memory locking will be used instead of file-based locking. # **Deprecated** this option will be removed in the future. file_locking = false # Path to the lock file. # **Deprecated** this option will be removed in the future. file_locking_path = "/run/crio.lock" # The crio.api table contains settings for the kubelet/gRPC interface. [crio.api] # Path to AF_LOCAL socket on which CRI-O will listen. listen = "/var/run/crio/crio.sock" # IP address on which the stream server will listen. stream_address = "127.0.0.1" # The port on which the stream server will listen. 
stream_port = "0" # Enable encrypted TLS transport of the stream server. stream_enable_tls = false # Path to the x509 certificate file used to serve the encrypted stream. This # file can change, and CRI-O will automatically pick up the changes within 5 # minutes. stream_tls_cert = "" # Path to the key file used to serve the encrypted stream. This file can # change, and CRI-O will automatically pick up the changes within 5 minutes. stream_tls_key = "" # Path to the x509 CA(s) file used to verify and authenticate client # communication with the encrypted stream. This file can change, and CRI-O will # automatically pick up the changes within 5 minutes. stream_tls_ca = "" # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024. grpc_max_send_msg_size = 16777216 # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024. grpc_max_recv_msg_size = 16777216 # The crio.runtime table contains settings pertaining to the OCI runtime used # and options for how to set up and manage the OCI runtime. [crio.runtime] # A list of ulimits to be set in containers by default, specified as # "<ulimit name>=<soft limit>:<hard limit>", for example: # "nofile=1024:2048" # If nothing is set here, settings will be inherited from the CRI-O daemon #default_ulimits = [ #] # default_runtime is the _name_ of the OCI runtime to be used as the default. # The name is matched against the runtimes map below. default_runtime = "runc" # If true, the runtime will not use pivot_root, but instead use MS_MOVE. no_pivot = false # Path to the conmon binary, used for monitoring the OCI runtime. conmon = "/opt/bin/conmon" # Cgroup setting for conmon conmon_cgroup = "pod" # Environment variable list for the conmon process, used for passing necessary # environment variables to conmon or the runtime. conmon_env = [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", ] # If true, SELinux will be used for pod separation on the host.
selinux = false # Path to the seccomp.json profile which is used as the default seccomp profile # for the runtime. If not specified, then the internal default seccomp profile # will be used. seccomp_profile = "/etc/crio/seccomp.json" # Used to change the name of the default AppArmor profile of CRI-O. The default # profile name is "crio-default-" followed by the version string of CRI-O. apparmor_profile = "crio-default" # Cgroup management implementation used for the runtime. cgroup_manager = "cgroupfs" # List of default capabilities for containers. If it is empty or commented out, # only the capabilities defined in the containers json file by the user/kube # will be added. # default_capabilities = [ # "CHOWN", # "DAC_OVERRIDE", # "FSETID", # "FOWNER", # "NET_RAW", # "SETGID", # "SETUID", # "SETPCAP", # "NET_BIND_SERVICE", # "SYS_CHROOT", # "KILL", # ] # List of default sysctls. If it is empty or commented out, only the sysctls # defined in the container json file by the user/kube will be added. default_sysctls = [ ] # List of additional devices, specified as # "<host_device>:<container_device>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm". # If it is empty or commented out, only the devices # defined in the container json file by the user/kube will be added. additional_devices = [ ] # Path to OCI hooks directories for automatically executed hooks. hooks_dir = [ "/etc/containers/oci/hooks.d" ] # List of default mounts for each container. **Deprecated:** this option will # be removed in future versions in favor of default_mounts_file. default_mounts = [ ] # Path to the file specifying the default mounts for each container. The # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads # its default mounts from the following two files: # # 1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the # override file, where users can either add in their own default mounts, or # override the default mounts shipped with the package.
# # 2) /usr/share/containers/mounts.conf: This is the default file read for # mounts. If you want CRI-O to read from a different, specific mounts file, # you can change the default_mounts_file. Note, if this is done, CRI-O will # only add mounts it finds in this file. # #default_mounts_file = "" # Maximum number of processes allowed in a container. pids_limit = 1024 # Maximum sized allowed for the container log file. Negative numbers indicate # that no size limit is imposed. If it is positive, it must be >= 8192 to # match/exceed conmon's read buffer. The file is truncated and re-opened so the # limit is never exceeded. log_size_max = -1 # Whether container output should be logged to journald in addition to the kuberentes log file log_to_journald = false # Path to directory in which container exit files are written to by conmon. container_exits_dir = "/var/run/crio/exits" # Path to directory for container attach sockets. container_attach_socket_dir = "/var/run/crio" # If set to true, all containers will run in read-only mode. read_only = false # Changes the verbosity of the logs based on the level it is set to. Options # are fatal, panic, error, warn, info, and debug. This option supports live # configuration reload. log_level = "error" # The default log directory where all logs will go unless directly specified by the kubelet log_dir = "/var/log/crio/pods" # The UID mappings for the user namespace of each container. A range is # specified in the form containerUID:HostUID:Size. Multiple ranges must be # separated by comma. uid_mappings = "" # The GID mappings for the user namespace of each container. A range is # specified in the form containerGID:HostGID:Size. Multiple ranges must be # separated by comma. gid_mappings = "" # The minimal amount of time in seconds to wait before issuing a timeout # regarding the proper termination of the container. ctr_stop_timeout = 0 # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes. 
# The runtime to use is picked based on the runtime_handler provided by the CRI. # If no runtime_handler is provided, the runtime will be picked based on the level # of trust of the workload. # ManageNetworkNSLifecycle determines whether we pin and remove network namespace # and manage its lifecycle. manage_network_ns_lifecycle = true [crio.runtime.runtimes.runc] runtime_path = "/usr/bin/runc" runtime_type = "oci" [crio.runtime.runtimes.kata-qemu] runtime_path = "/opt/kata/bin/containerd-shim-kata-v2" runtime_type = "vm" # The crio.image table contains settings pertaining to the management of OCI images. # # CRI-O reads its configured registries defaults from the system wide # containers-registries.conf(5) located in /etc/containers/registries.conf. If # you want to modify just CRI-O, you can change the registries configuration in # this file. Otherwise, leave insecure_registries and registries commented out to # use the system's defaults from /etc/containers/registries.conf. [crio.image] # Default transport for pulling images from a remote container storage. default_transport = "docker://" # The path to a file containing credentials necessary for pulling images from # secure registries. The file is similar to that of /var/lib/kubelet/config.json global_auth_file = "" # The image used to instantiate infra containers. # This option supports live configuration reload. pause_image = "k8s.gcr.io/pause:3.2" # The path to a file containing credentials specific for pulling the pause_image from # above. The file is similar to that of /var/lib/kubelet/config.json # This option supports live configuration reload. pause_image_auth_file = "" # The command to run to have a container stay in the paused state. # This option supports live configuration reload. pause_command = "/pause" # Path to the file which decides what sort of policy we use when deciding # whether or not to trust an image that we've pulled. 
It is not recommended that # this option be used, as the default behavior of using the system-wide default # policy (i.e., /etc/containers/policy.json) is most often preferred. Please # refer to containers-policy.json(5) for more details. signature_policy = "" # Controls how image volumes are handled. The valid values are mkdir, bind and # ignore; the latter will ignore volumes entirely. image_volumes = "mkdir" # List of registries to be used when pulling an unqualified image (e.g., # "alpine:latest"). By default, registries is set to "docker.io" for # compatibility reasons. Depending on your workload and usecase you may add more # registries (e.g., "quay.io", "registry.fedoraproject.org", # "registry.opensuse.org", etc.). registries = [ "docker.io" ] # The crio.network table containers settings pertaining to the management of # CNI plugins. [crio.network] # Path to the directory where CNI configuration files are located. network_dir = "/etc/cni/net.d/" # Paths to directories where CNI plugin binaries are located. plugin_dirs = [ "/opt/cni/bin/", ] ``` Have `containerd` ## containerd Output of "`containerd --version`": Removed, containerd isn't being used. Output of "`systemctl show containerd`": Removed, containerd isn't being used. Output of "`cat /etc/containerd/config.toml`": ``` cat: /etc/containerd/config.toml: No such file or directory ``` --- # Packages No `dpkg` No `rpm` ---

evanfoster commented 3 years ago

Woah, this might not be QEMU 5.0 related, actually. I'm able to reproduce the same issue with the officially supported version of QEMU for 1.11.3:

```
[Hypervisor]
  MachineType = "pc"
  Version = "QEMU emulator version 4.1.0 (kata-static)\nCopyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers"
  Path = "/opt/kata/bin/qemu-virtiofs-system-x86_64"
  BlockDeviceDriver = "virtio-scsi"
  EntropySource = "/dev/urandom"
  SharedFS = "virtio-fs"
  VirtioFSDaemon = "/opt/kata/bin/virtiofsd"
  Msize9p = 8192
  MemorySlots = 50
  PCIeRootPort = 0
  HotplugVFIOOnRootBus = false
  Debug = true
  UseVSock = true
```
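(Editor's note: a quick way to confirm this symptom from a debug console in the guest is to probe for the device that chronyd's unit condition requires. This is a sketch, not part of the original report; the path is parameterized so it can be pointed at any candidate device node.)

```shell
#!/bin/sh
# Probe for the PTP clock device that chronyd's unit condition
# (ConditionPathExists=/dev/ptp0) requires.
check_ptp() {
    dev="${1:-/dev/ptp0}"
    if [ -e "$dev" ]; then
        echo "$dev: present"
    else
        echo "$dev: missing (chronyd's ConditionPathExists will fail)"
    fi
}

check_ptp /dev/ptp0
```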
egernst commented 3 years ago

Hey @evanfoster, the kernel change wasn't backported to 1.11. Take a look at https://github.com/kata-containers/packaging/commit/c2023a217d57adbf3c74f62f35f962badcfb60ee
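(Editor's note: the linked packaging commit enables the KVM paravirtualized PTP clock in the guest kernel config, which is what creates `/dev/ptp0`. A hedged sketch of the relevant fragment, presumably along these lines — check the commit itself for the exact options:)

```
CONFIG_PTP_1588_CLOCK=y
CONFIG_PTP_1588_CLOCK_KVM=y
```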

evanfoster commented 3 years ago

Looks like that was it! I rebuilt `vmlinuz-kata-v5.6-april-09-2020-75-virtiofs` with an additional config snippet, and chronyd is now up and running in the container.

We'll probably want to backport this to 1.11, right?
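(Editor's note, for anyone hitting the same skew: once `/dev/ptp0` exists, chronyd can consume it as a hardware (PHC) reference clock. A typical chrony.conf line for the KVM PTP clock looks roughly like this — a sketch; the poll interval and any extra options vary by distro, and Clear Linux's packaged config may differ:)

```
# Treat the KVM paravirtualized PTP device as a hardware reference clock.
refclock PHC /dev/ptp0 poll 2
```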

fidencio commented 3 years ago

Forward port to 2.0-dev: https://github.com/kata-containers/packaging/pull/1159
Backport to 1.11: https://github.com/kata-containers/packaging/pull/1160

evanfoster commented 3 years ago

Closing this, since all of the port PRs are merged or closed.