smartxworks / cluster-api-provider-virtink

Kubernetes Cluster API Provider Virtink

cloud-init of CP VM succeeds, but CAPI says kubeadm is not initialized #34

Closed: Stringls closed this issue 1 year ago

Stringls commented 1 year ago

/kind bug

What I am trying to do and what is happening

Hi! I'm deploying an external Virtink cluster on a bare-metal Hetzner cluster (a bare-metal node with KVM) from a GKE management cluster.

The virtink service is created, and the CP VM is created on the external cluster with the logs below.

Defaulted container "cloud-hypervisor" out of: cloud-hypervisor, init-kernel (init), init-volume-rootfs (init), init-volume-cloud-init (init)
cloud-hypervisor: 76.05812ms: <vcpu0> WARN:devices/src/legacy/debug_port.rs:76 -- [Debug I/O port: Kernel code 0x40] 0.075365 seconds
[    0.000000] Linux version 5.15.12+ (root@buildkitsandbox) (gcc (GCC) 11.3.1 20220421 (Red Hat 11.3.1-2), GNU ld version 2.37-25.fc35) #1 SMP Fri Sep 23 08:39:55 UTC 2022
[    0.000000] Command line: console=ttyS0 root=/dev/vda rw
[    0.000000] [Firmware Bug]: TSC doesn't count with P0 frequency!
[    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
[    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
[    0.000000] x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
[    0.000000] x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
[    0.000000] signal: max sigframe size: 2976
[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
[    0.000000] BIOS-e820: [mem 0x00000000000a0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bfffffff] usable
[    0.000000] BIOS-e820: [mem 0x00000000e8000000-0x00000000f7ffffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 3.2.0 present.
[    0.000000] DMI: Cloud Hypervisor cloud-hypervisor, BIOS 0 
[    0.000000] Hypervisor detected: KVM
[    0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[    0.000000] kvm-clock: cpu 0, msr 3001001, primary cpu clock
[    0.000003] kvm-clock: using sched offset of 77739140 cycles
[    0.000012] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[    0.000026] tsc: Detected 3393.624 MHz processor
[    0.000130] last_pfn = 0x140000 max_arch_pfn = 0x400000000
[    0.000376] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
[    0.000539] last_pfn = 0xc0000 max_arch_pfn = 0x400000000
[    0.000656] found SMP MP-table at [mem 0x000f0090-0x000f009f]
[    0.000807] Using GB pages for direct mapping
[    0.001107] ACPI: Early table checksum verification disabled
[    0.001144] ACPI: RSDP 0x00000000000A0000 000024 (v02 CLOUDH)
[    0.001159] ACPI: XSDT 0x00000000000A1538 00003C (v01 CLOUDH CHXSDT   00000001 CLDH 00000000)
[    0.001173] ACPI: FACP 0x00000000000A1396 000114 (v06 CLOUDH CHFACP   00000001 CLDH 00000000)
[    0.001181] ACPI: DSDT 0x00000000000A0024 001372 (v06 CLOUDH CHDSDT   00000001 CLDH 00000000)
[    0.001184] ACPI: APIC 0x00000000000A14AA 000052 (v05 CLOUDH CHMADT   00000001 CLDH 00000000)
[    0.001187] ACPI: MCFG 0x00000000000A14FC 00003C (v01 CLOUDH CHMCFG   00000001 CLDH 00000000)
[    0.001189] ACPI: Reserving FACP table memory at [mem 0xa1396-0xa14a9]
[    0.001191] ACPI: Reserving DSDT table memory at [mem 0xa0024-0xa1395]
[    0.001192] ACPI: Reserving APIC table memory at [mem 0xa14aa-0xa14fb]
[    0.001193] ACPI: Reserving MCFG table memory at [mem 0xa14fc-0xa1537]
[    0.001333] No NUMA configuration found
[    0.001336] Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
[    0.001348] NODE_DATA(0) allocated [mem 0x13ffd6000-0x13fffffff]
[    0.001656] Zone ranges:
[    0.001661]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
[    0.001665]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
[    0.001669]   Normal   [mem 0x0000000100000000-0x000000013fffffff]
[    0.001670]   Device   empty
[    0.001673] Movable zone start for each node
[    0.001680] Early memory node ranges
[    0.001681]   node   0: [mem 0x0000000000001000-0x000000000009ffff]
[    0.001682]   node   0: [mem 0x0000000000100000-0x00000000bfffffff]
[    0.001684]   node   0: [mem 0x0000000100000000-0x000000013fffffff]
[    0.001687] Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
[    0.001875] On node 0, zone DMA: 1 pages in unavailable ranges
[    0.002001] On node 0, zone DMA: 96 pages in unavailable ranges
[    0.029524] ACPI: PM-Timer IO Port: 0x608
[    0.029612] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[    0.029618] ACPI: INT_SRC_OVR (bus 0 bus_irq 4 global_irq 4 dfl dfl)
[    0.029624] ACPI: Using ACPI (MADT) for SMP configuration information
[    0.029633] TSC deadline timer available
[    0.029641] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
[    0.029706] kvm-guest: KVM setup pv remote TLB flush
[    0.029711] kvm-guest: setup PV sched yield
[    0.029742] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
[    0.029747] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000fffff]
[    0.029748] PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xe7ffffff]
[    0.029750] PM: hibernation: Registered nosave memory: [mem 0xe8000000-0xf7ffffff]
[    0.029753] PM: hibernation: Registered nosave memory: [mem 0xf8000000-0xffffffff]
[    0.029756] [mem 0xc0000000-0xe7ffffff] available for PCI devices
[    0.029759] Booting paravirtualized kernel on KVM
[    0.029776] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
[    0.029790] setup_percpu: NR_CPUS:128 nr_cpumask_bits:128 nr_cpu_ids:2 nr_node_ids:1
[    0.031139] percpu: Embedded 49 pages/cpu s163840 r8192 d28672 u1048576
[    0.031190] kvm-guest: stealtime: cpu 0, msr 13bc27080
[    0.031194] kvm-guest: PV spinlocks enabled
[    0.031197] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
[    0.031243] Built 1 zonelists, mobility grouping on.  Total pages: 1031936
[    0.031245] Policy zone: Normal
[    0.031247] Kernel command line: console=ttyS0 root=/dev/vda rw
[    0.033744] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
[    0.034447] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
[    0.034472] mem auto-init: stack:off, heap alloc:off, heap free:off
[    0.066723] Memory: 4019544K/4193916K available (12294K kernel code, 7988K rwdata, 3240K rodata, 1648K init, 6640K bss, 174116K reserved, 0K cma-reserved)
[    0.066740] random: get_random_u64 called from kmem_cache_open+0x20/0x300 with crng_init=0
[    0.066804] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
[    0.067092] rcu: Hierarchical RCU implementation.
[    0.067094] rcu:     RCU restricting CPUs from NR_CPUS=128 to nr_cpu_ids=2.
[    0.067097]  Tracing variant of Tasks RCU enabled.
[    0.067099] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[    0.067100] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
[    0.067106] NR_IRQS: 8448, nr_irqs: 424, preallocated irqs: 0
[    0.067300] Console: colour dummy device 80x25
[    0.092712] printk: console [ttyS0] enabled
[    0.092908] ACPI: Core revision 20210730
[    0.093123] APIC: Switch to symmetric I/O mode setup
[    0.093529] x2apic enabled
[    0.094018] Switched APIC routing to physical x2apic.
[    0.094235] kvm-guest: setup PV IPIs
[    0.094423] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x30eac774559, max_idle_ns: 440795212956 ns
[    0.094885] Calibrating delay loop (skipped) preset value.. 6787.24 BogoMIPS (lpj=13574496)
[    0.095242] pid_max: default: 32768 minimum: 301
[    0.095501] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
[    0.095861] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
[    0.097471] Last level iTLB entries: 4KB 512, 2MB 512, 4MB 256
[    0.097755] Last level dTLB entries: 4KB 2048, 2MB 2048, 4MB 1024, 1GB 0
[    0.098096] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[    0.098462] Spectre V2 : Mitigation: Full AMD retpoline
[    0.098685] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[    0.098869] Spectre V2 : Enabling Restricted Speculation for firmware calls
[    0.098869] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
[    0.098869] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
[    0.098869] Freeing SMP alternatives memory: 36K
[    0.098869] smpboot: CPU0: AMD Ryzen 9 5950X 16-Core Processor (family: 0x19, model: 0x21, stepping: 0x0)
[    0.098869] Performance Events: Fam17h+ core perfctr, AMD PMU driver.
[    0.098869] ... version:                0
[    0.098869] ... bit width:              48
[    0.098869] ... generic registers:      6
[    0.098869] ... value mask:             0000ffffffffffff
[    0.098869] ... max period:             00007fffffffffff
[    0.098869] ... fixed-purpose events:   0
[    0.098869] ... event mask:             000000000000003f
[    0.098869] rcu: Hierarchical SRCU implementation.
[    0.098869] smp: Bringing up secondary CPUs ...
[    0.098869] x86: Booting SMP configuration:
[    0.098869] .... node  #0, CPUs:      #1
[    0.031707] kvm-clock: cpu 1, msr 3001041, secondary cpu clock
[    0.099045] kvm-guest: stealtime: cpu 1, msr 13bd27080
[    0.099881] smp: Brought up 1 node, 2 CPUs
[    0.099881] smpboot: Max logical packages: 1
[    0.099881] smpboot: Total of 2 processors activated (13574.49 BogoMIPS)
[    0.104610] devtmpfs: initialized
[    0.104610] x86/mm: Memory block size: 128MB
[    0.104610] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
[    0.106881] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
[    0.107839] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[    0.108389] DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
[    0.108993] DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
[    0.109648] DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
[    0.110371] audit: initializing netlink subsys (disabled)
[    0.110893] audit: type=2000 audit(1680189806.065:1): state=initialized audit_enabled=0 res=1
[    0.110997] thermal_sys: Registered thermal governor 'fair_share'
[    0.111359] thermal_sys: Registered thermal governor 'step_wise'
[    0.111684] thermal_sys: Registered thermal governor 'user_space'
[    0.112013] cpuidle: using governor ladder
[    0.112013] cpuidle: using governor menu
[    0.112013] ACPI: bus type PCI registered
[    0.112013] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    0.112013] PCI: MMCONFIG for domain 0000 [bus 00-00] at [mem 0xe8000000-0xe80fffff] (base 0xe8000000)
[    0.112013] PCI: MMCONFIG at [mem 0xe8000000-0xe80fffff] reserved in E820
[    0.112130] PCI: Using configuration type 1 for base access
[    0.115617] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
[    0.115617] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[    0.115617] cryptd: max_cpu_qlen set to 1000
[    0.119121] ACPI: Added _OSI(Module Device)
[    0.119153] ACPI: Added _OSI(Processor Device)
[    0.119407] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.119678] ACPI: Added _OSI(Processor Aggregator Device)
[    0.119988] ACPI: Added _OSI(Linux-Dell-Video)
[    0.120245] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
[    0.120552] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
[    0.121122] ACPI: 1 ACPI AML tables successfully acquired and loaded
[    0.122990] ACPI: Interpreter enabled
[    0.123170] ACPI: PM: (supports S0 S5)
[    0.123345] ACPI: Using IOAPIC for interrupt routing
[    0.123581] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.124684] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
[    0.124967] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
[    0.125385] acpi PNP0A08:00: _OSC: platform retains control of PCIe features (AE_NOT_FOUND)
[    0.125804] acpiphp: Slot [0] registered
[    0.125997] acpiphp: Slot [1] registered
[    0.126199] acpiphp: Slot [2] registered
[    0.126389] acpiphp: Slot [3] registered
[    0.126578] acpiphp: Slot [4] registered
[    0.126768] acpiphp: Slot [5] registered
[    0.126884] acpiphp: Slot [6] registered
[    0.127073] acpiphp: Slot [7] registered
[    0.127262] acpiphp: Slot [8] registered
[    0.127453] acpiphp: Slot [9] registered
[    0.127642] acpiphp: Slot [10] registered
[    0.127833] acpiphp: Slot [11] registered
[    0.128023] acpiphp: Slot [12] registered
[    0.128211] acpiphp: Slot [13] registered
[    0.128399] acpiphp: Slot [14] registered
[    0.128587] acpiphp: Slot [15] registered
[    0.128775] acpiphp: Slot [16] registered
[    0.128961] acpiphp: Slot [17] registered
[    0.129146] acpiphp: Slot [18] registered
[    0.129336] acpiphp: Slot [19] registered
[    0.129523] acpiphp: Slot [20] registered
[    0.129711] acpiphp: Slot [21] registered
[    0.129899] acpiphp: Slot [22] registered
[    0.130096] acpiphp: Slot [23] registered
[    0.130293] acpiphp: Slot [24] registered
[    0.130487] acpiphp: Slot [25] registered
[    0.130682] acpiphp: Slot [26] registered
[    0.130877] acpiphp: Slot [27] registered
[    0.131077] acpiphp: Slot [28] registered
[    0.131274] acpiphp: Slot [29] registered
[    0.131471] acpiphp: Slot [30] registered
[    0.131666] acpiphp: Slot [31] registered
[    0.131869] PCI host bridge to bus 0000:00
[    0.132060] pci_bus 0000:00: root bus resource [mem 0xc0000000-0xe7ffffff window]
[    0.132404] pci_bus 0000:00: root bus resource [mem 0x140000000-0x7ff3fffffff window]
[    0.132760] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
[    0.133077] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
[    0.133390] pci_bus 0000:00: root bus resource [bus 00]
[    0.133706] pci 0000:00:00.0: [8086:0d57] type 00 class 0x060000
[    0.134274] pci 0000:00:01.0: [1af4:1043] type 00 class 0xffff00
[    0.134602] pci 0000:00:01.0: reg 0x10: [mem 0x7ff3ff80000-0x7ff3fffffff 64bit]
[    0.135259] pci 0000:00:02.0: [1af4:1042] type 00 class 0x018000
[    0.135573] pci 0000:00:02.0: reg 0x10: [mem 0xe7f80000-0xe7ffffff]
[    0.136263] pci 0000:00:03.0: [1af4:1042] type 00 class 0x018000
[    0.136572] pci 0000:00:03.0: reg 0x10: [mem 0xe7f00000-0xe7f7ffff]
[    0.137259] pci 0000:00:04.0: [1af4:1041] type 00 class 0x020000
[    0.137579] pci 0000:00:04.0: reg 0x10: [mem 0x7ff3ff00000-0x7ff3ff7ffff 64bit]
[    0.138313] pci 0000:00:05.0: [1af4:1044] type 00 class 0xffff00
[    0.138634] pci 0000:00:05.0: reg 0x10: [mem 0x7ff3fe80000-0x7ff3fefffff 64bit]
[    0.139384] iommu: Default domain type: Translated 
[    0.139613] iommu: DMA domain TLB invalidation policy: lazy mode 
[    0.139901] vgaarb: loaded
[    0.139901] pps_core: LinuxPPS API ver. 1 registered
[    0.139901] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[    0.139901] PTP clock support registered
[    0.139945] PCI: Using ACPI for IRQ routing
[    0.140189] clocksource: Switched to clocksource kvm-clock
[    0.140189] FS-Cache: Loaded
[    0.140189] CacheFiles: Loaded
[    0.140189] pnp: PnP ACPI init
[    0.140189] system 00:00: [mem 0xe8000000-0xe80fffff] has been reserved
[    0.140189] pnp: PnP ACPI: found 2 devices
[    0.145866] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[    0.146285] NET: Registered PF_INET protocol family
[    0.146755] IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
[    0.147368] tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
[    0.147769] TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    0.148224] TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
[    0.148762] TCP: Hash tables configured (established 32768 bind 32768)
[    0.149084] UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
[    0.149417] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
[    0.149779] NET: Registered PF_UNIX/PF_LOCAL protocol family
[    0.150040] pci_bus 0000:00: resource 4 [mem 0xc0000000-0xe7ffffff window]
[    0.150349] pci_bus 0000:00: resource 5 [mem 0x140000000-0x7ff3fffffff window]
[    0.150670] pci_bus 0000:00: resource 6 [io  0x0000-0x0cf7 window]
[    0.150958] pci_bus 0000:00: resource 7 [io  0x0d00-0xffff window]
[    0.151264] PCI: CLS 0 bytes, default 64
[    0.151459] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    0.151751] software IO TLB: mapped [mem 0x00000000bc000000-0x00000000c0000000] (64MB)
[    0.152203] kvm: no hardware support
[    0.152435] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x30eac774559, max_idle_ns: 440795212956 ns
[    0.152961] clocksource: Switched to clocksource tsc
[    0.153204] platform rtc_cmos: registered platform RTC device (no PNP device found)
[    0.155062] Key type blacklist registered
[    0.155329] workingset: timestamp_bits=36 max_order=20 bucket_order=0
[    0.156329] zbud: loaded
[    0.156631] fuse: init (API version 7.34)
[    0.163593] NET: Registered PF_ALG protocol family
[    0.163894] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
[    0.164534] input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
[    0.164934] ACPI: button: Power Button [PWRB]
[    0.165981] virtio-pci 0000:00:01.0: enabling device (0000 -> 0002)
[    0.166509] virtio-pci 0000:00:02.0: enabling device (0000 -> 0002)
[    0.167042] virtio-pci 0000:00:03.0: enabling device (0000 -> 0002)
[    0.167556] virtio-pci 0000:00:04.0: enabling device (0000 -> 0002)
[    0.168060] virtio-pci 0000:00:05.0: enabling device (0000 -> 0002)
[    0.168591] Serial: 8250/16550 driver, 1 ports, IRQ sharing disabled
[    0.168920] 00:01: ttyS0 at I/O 0x3f8 (irq = 24, base_baud = 115200) is a 16550A
[    0.170647] Non-volatile memory driver v1.3
[    0.171298] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[    0.171805] random: fast init done
[    0.172290] random: crng init done
[    0.172944] brd: module loaded
[    0.174712] loop: module loaded
[    0.174897] the cryptoloop driver has been deprecated and will be removed in in Linux 5.16
[    0.175760] virtio_blk virtio1: [vda] 8388608 512-byte logical blocks (4.29 GB/4.00 GiB)
[    0.232683] virtio_blk virtio2: [vdb] 756 512-byte logical blocks (387 kB/378 KiB)
[    0.256052] zram: Added device: zram0
[    0.256725] null_blk: module loaded
[    0.257281] tun: Universal TUN/TAP device driver, 1.6
[    0.258894] VFIO - User Level meta-driver version: 0.3
[    0.259367] mousedev: PS/2 mouse device common for all mice
[    0.259980] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
[    0.260857] device-mapper: multipath round-robin: version 1.2.0 loaded
[    0.261379] hid: raw HID events driver (C) Jiri Kosina
[    0.262564] IPVS: Registered protocols ()
[    0.262857] IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
[    0.263473] IPVS: ipvs loaded.
[    0.263728] ipip: IPv4 and MPLS over IPv4 tunneling driver
[    0.264189] Initializing XFRM netlink socket
[    0.264552] NET: Registered PF_INET6 protocol family
[    0.265004] Segment Routing with IPv6
[    0.265201] In-situ OAM (IOAM) with IPv6
[    0.265424] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver
[    0.265786] NET: Registered PF_PACKET protocol family
[    0.266051] NET: Registered PF_KEY protocol family
[    0.266305] Bridge firewalling registered
[    0.266584] NET: Registered PF_VSOCK protocol family
[    0.266890] IPI shorthand broadcast: enabled
[    0.267120] AVX2 version of gcm_enc/dec engaged.
[    0.267436] AES CTR mode by8 optimization enabled
[    0.269634] sched_clock: Marking stable (241922145, 27707800)->(282073464, -12443519)
[    0.270018] registered taskstats version 1
[    0.270214] zswap: loaded using pool lzo/zbud
[    0.270511] Key type ._fscrypt registered
[    0.270685] Key type .fscrypt registered
[    0.270850] Key type fscrypt-provisioning registered
[    0.272811] EXT4-fs (vda): mounted filesystem with ordered data mode. Opts: (null). Quota mode: disabled.
[    0.273542] VFS: Mounted root (ext4 filesystem) on device 254:0.
[    0.274153] devtmpfs: mounted
[    0.275327] Freeing unused decrypted memory: 2036K
[    0.276230] Freeing unused kernel image (initmem) memory: 1648K
[    0.286924] Write protecting the kernel read-only data: 18432k
[    0.288370] Freeing unused kernel image (text/rodata gap) memory: 2040K
[    0.289462] Freeing unused kernel image (rodata/data gap) memory: 856K
cloud-hypervisor: 381.041612ms: <vcpu1> WARN:devices/src/legacy/debug_port.rs:76 -- [Debug I/O port: Kernel code 0x41] 0.380350 seconds
[    0.290099] Run /sbin/init as init process
[    0.312641] systemd[1]: systemd 249.11-0ubuntu3.6 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
[    0.314027] systemd[1]: Detected virtualization kvm.
[    0.314253] systemd[1]: Detected architecture x86-64.

Welcome to Ubuntu 22.04.1 LTS!

[    0.315083] systemd[1]: Hostname set to <localhost.localdomain>.
[    0.375222] systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[    0.376336] systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[    0.388645] systemd[1]: Queued start job for default target Graphical Interface.
[    0.399427] systemd[1]: Created slice Slice /system/getty.
[  OK  ] Created slice Slice /system/getty.
[    0.400444] systemd[1]: Created slice Slice /system/modprobe.
[  OK  ] Created slice Slice /system/modprobe.
[    0.401493] systemd[1]: Created slice Slice /system/serial-getty.
[  OK  ] Created slice Slice /system/serial-getty.
[    0.402392] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[  OK  ] Started Dispatch Password …ts to Console Directory Watch.
[    0.403501] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
[  OK  ] Started Forward Password R…uests to Wall Directory Watch.
[    0.404445] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
[  OK  ] Set up automount Arbitrary…s File System Automount Point.
[    0.405303] systemd[1]: Reached target Local Encrypted Volumes.
[  OK  ] Reached target Local Encrypted Volumes.
[    0.405914] systemd[1]: Reached target Path Units.
[  OK  ] Reached target Path Units.
[    0.406407] systemd[1]: Reached target Remote File Systems.
[  OK  ] Reached target Remote File Systems.
[    0.406975] systemd[1]: Reached target Slice Units.
[  OK  ] Reached target Slice Units.
[    0.407482] systemd[1]: Reached target Swaps.
[  OK  ] Reached target Swaps.
[    0.407935] systemd[1]: Reached target Local Verity Protected Volumes.
[  OK  ] Reached target Local Verity Protected Volumes.
[    0.408534] systemd[1]: Listening on initctl Compatibility Named Pipe.
[  OK  ] Listening on initctl Compatibility Named Pipe.
[    0.409217] systemd[1]: Listening on Journal Audit Socket.
[  OK  ] Listening on Journal Audit Socket.
[    0.409760] systemd[1]: Listening on Journal Socket (/dev/log).
[  OK  ] Listening on Journal Socket (/dev/log).
[    0.410333] systemd[1]: Listening on Journal Socket.
[  OK  ] Listening on Journal Socket.
[    0.410842] systemd[1]: Listening on udev Control Socket.
[  OK  ] Listening on udev Control Socket.
[    0.411378] systemd[1]: Listening on udev Kernel Socket.
[  OK  ] Listening on udev Kernel Socket.
[    0.412196] systemd[1]: Mounting Huge Pages File System...
         Mounting Huge Pages File System...
[    0.413618] systemd[1]: Mounting POSIX Message Queue File System...
         Mounting POSIX Message Queue File System...
[    0.414446] systemd[1]: Mounting Kernel Debug File System...
         Mounting Kernel Debug File System...
[    0.414945] systemd[1]: Condition check resulted in Kernel Trace File System being skipped.
[    0.415991] systemd[1]: Starting Journal Service...
         Starting Journal Service...
[    0.416429] systemd[1]: Condition check resulted in Create List of Static Device Nodes being skipped.
[    0.417638] systemd[1]: Starting Load Kernel Module chromeos_pstore...
         Starting Load Kernel Module chromeos_pstore...
[    0.418709] systemd[1]: Starting Load Kernel Module configfs...
         Starting Load Kernel Module configfs...
[    0.419722] systemd[1]: Starting Load Kernel Module efi_pstore...
         Starting Load Kernel Module efi_pstore...
[    0.420755] systemd[1]: Starting Load Kernel Module fuse...
         Starting Load Kernel Module fuse...
[    0.421524] systemd[1]: Starting Load Kernel Module pstore_blk...
         Starting Load Kernel Module pstore_blk...
[    0.422557] systemd[1]: Starting Load Kernel Module pstore_zone...
         Starting Load Kernel Module pstore_zone...
[    0.423401] systemd[1]: Starting Load Kernel Module ramoops...
         Starting Load Kernel Module ramoops...
[    0.424669] systemd[1]: Starting Load Kernel Modules...
         Starting Load Kernel Modules...
[    0.428298] systemd[1]: Starting Remount Root and Kernel File Systems...
         Starting Remount Root and Kernel File Systems...
[    0.429194] systemd[1]: Starting Coldplug All udev Devices...
         Starting Coldplug All udev Devices...
[    0.430179] systemd[1]: Mounted Huge Pages File System.
[  OK  ] Mounted Huge Pages File System.
[    0.430685] systemd[1]: Mounted POSIX Message Queue File System.
[  OK  ] Mounted POSIX Message Queue File System.
[    0.431301] systemd[1]: Mounted Kernel Debug File System.
[  OK  ] Mounted Kernel Debug File System.
[    0.431866] systemd[1]: modprobe@chromeos_pstore.service: Deactivated successfully.
[    0.432289] systemd[1]: Finished Load Kernel Module chromeos_pstore.
[  OK  ] Finished Load Kernel Module chromeos_pstore.
[    0.432963] systemd[1]: modprobe@configfs.service: Deactivated successfully.
[    0.434953] systemd[1]: Finished Load Kernel Module configfs.
[  OK  ] Finished Load Kernel Module configfs.
[    0.435712] systemd[1]: Started Journal Service.
[  OK  ] Started Journal Service.
[  OK  ] Finished Load Kernel Module efi_pstore.
[  OK  ] Finished Load Kernel Module fuse.
[  OK  ] Finished Load Kernel Module pstore_blk.
[  OK  ] Finished Load Kernel Module pstore_zone.
[  OK  ] Finished Load Kernel Module ramoops.
[  OK  ] Finished Load Kernel Modules.
[  OK  ] Finished Remount Root and Kernel File Systems.
         Mounting FUSE Control File System...
         Mounting Kernel Configuration File System...
         Starting Initial cloud-init job (pre-networking)...
         Starting Flush Journal to Persistent Storage...
         Starting Load/Save Random Seed...
         Starting Apply Kernel Variables...
         Starting Create System Users...
[  OK  ] Mounted FUSE Control File System.
[  OK  ] Mounted Kernel Configuration File System.
[    0.480910] systemd-journald[400]: Received client request to flush runtime journal.
[  OK  ] Finished Load/Save Random Seed.
[  OK  ] Finished Create System Users.
         Starting Create Static Device Nodes in /dev...
[  OK  ] Finished Apply Kernel Variables.
[  OK  ] Finished Create Static Device Nodes in /dev.
[  OK  ] Reached target Preparation for Local File Systems.
[  OK  ] Reached target Local File Systems.
         Starting Rule-based Manage…for Device Events and Files...
[  OK  ] Finished Flush Journal to Persistent Storage.
         Starting Create Volatile Files and Directories...
[  OK  ] Finished Coldplug All udev Devices.
[  OK  ] Finished Create Volatile Files and Directories.
         Starting Network Name Resolution...
         Starting Record System Boot/Shutdown in UTMP...
[  OK  ] Finished Record System Boot/Shutdown in UTMP.
[  OK  ] Started Rule-based Manager for Device Events and Files.
[  OK  ] Found device /dev/ttyS0.
[  OK  ] Found device /dev/hvc0.
[  OK  ] Started Network Name Resolution.
[  OK  ] Reached target Host and Network Name Lookups.
[    0.783524] cloud-init[916]: Cloud-init v. 22.3.4-0ubuntu1~22.04.1 running 'init-local' at Thu, 30 Mar 2023 15:23:26 +0000. Up 0.76 seconds.
[  OK  ] Listening on Network Service Netlink Socket.
[  OK  ] Finished Initial cloud-init job (pre-networking).
[  OK  ] Reached target Preparation for Network.
         Starting Network Configuration...
[  OK  ] Started Network Configuration.
[  OK  ] Reached target Network.
         Starting Wait for Network to be Configured...
[  OK  ] Finished Wait for Network to be Configured.
         Starting Initial cloud-ini… (metadata service crawler)...
[    2.439912] cloud-init[949]: Cloud-init v. 22.3.4-0ubuntu1~22.04.1 running 'init' at Thu, 30 Mar 2023 15:23:28 +0000. Up 2.42 seconds.
[    2.446565] cloud-init[949]: ci-info: ++++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++++
[    2.447124] cloud-init[949]: ci-info: +--------+-------+------------------------------+-----------------+--------+-------------------+
[    2.447602] cloud-init[949]: ci-info: | Device |   Up  |           Address            |       Mask      | Scope  |     Hw-Address    |
[    2.448104] cloud-init[949]: ci-info: +--------+-------+------------------------------+-----------------+--------+-------------------+
[    2.448580] cloud-init[949]: ci-info: |  ens4  |  True |          10.244.1.2          | 255.255.255.255 | global | b2:fb:b8:db:3a:47 |
[    2.449058] cloud-init[949]: ci-info: |  ens4  |  True | fe80::b0fb:b8ff:fedb:3a47/64 |        .        |  link  | b2:fb:b8:db:3a:47 |
[    2.449535] cloud-init[949]: ci-info: |   lo   |  True |          127.0.0.1           |    255.0.0.0    |  host  |         .         |
[    2.450008] cloud-init[949]: ci-info: |   lo   |  True |           ::1/128            |        .        |  host  |         .         |
[    2.450477] cloud-init[949]: ci-info: |  sit0  | False |              .               |        .        |   .    |         .         |
[    2.450944] cloud-init[949]: ci-info: | tunl0  | False |              .               |        .        |   .    |         .         |
[    2.451411] cloud-init[949]: ci-info: +--------+-------+------------------------------+-----------------+--------+-------------------+
[    2.451911] cloud-init[949]: ci-info: ++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++
[    2.452325] cloud-init[949]: ci-info: +-------+-------------+-------------+-----------------+-----------+-------+
[    2.452731] cloud-init[949]: ci-info: | Route | Destination |   Gateway   |     Genmask     | Interface | Flags |
[    2.453158] cloud-init[949]: ci-info: +-------+-------------+-------------+-----------------+-----------+-------+
[    2.453569] cloud-init[949]: ci-info: |   0   |   0.0.0.0   | 10.244.1.68 |     0.0.0.0     |    ens4   |   UG  |
[    2.453976] cloud-init[949]: ci-info: |   1   |  10.96.0.10 | 10.244.1.68 | 255.255.255.255 |    ens4   |  UGH  |
[    2.454385] cloud-init[949]: ci-info: |   2   | 10.244.1.68 |   0.0.0.0   | 255.255.255.255 |    ens4   |   UH  |
[    2.454800] cloud-init[949]: ci-info: +-------+-------------+-------------+-----------------+-----------+-------+
[    2.455290] cloud-init[949]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
[    2.455689] cloud-init[949]: ci-info: +-------+-------------+---------+-----------+-------+
[    2.456055] cloud-init[949]: ci-info: | Route | Destination | Gateway | Interface | Flags |
[    2.456408] cloud-init[949]: ci-info: +-------+-------------+---------+-----------+-------+
[    2.456756] cloud-init[949]: ci-info: |   2   |    local    |    ::   |    ens4   |   U   |
[    2.457110] cloud-init[949]: ci-info: |   3   |  fe80::/64  |    ::   |    ens4   |   U   |
[    2.457457] cloud-init[949]: ci-info: |   4   |  multicast  |    ::   |    ens4   |   U   |
[    2.457806] cloud-init[949]: ci-info: +-------+-------------+---------+-----------+-------+
[    2.654383] cloud-init[949]: 2023-03-30 15:23:28,636 - util.py[WARNING]: Failed generating key type rsa to file /etc/ssh/ssh_host_rsa_key
[    2.656440] cloud-init[949]: 2023-03-30 15:23:28,638 - util.py[WARNING]: Failed generating key type dsa to file /etc/ssh/ssh_host_dsa_key
[    2.657138] cloud-init[949]: 2023-03-30 15:23:28,639 - util.py[WARNING]: Failed generating key type ecdsa to file /etc/ssh/ssh_host_ecdsa_key
[    2.657856] cloud-init[949]: 2023-03-30 15:23:28,639 - util.py[WARNING]: Failed generating key type ed25519 to file /etc/ssh/ssh_host_ed25519_key
[  OK  ] Finished Initial cloud-ini…ob (metadata service crawler).
[  OK  ] Reached target Cloud-config availability.
[  OK  ] Reached target Network is Online.
[  OK  ] Reached target System Initialization.
[  OK  ] Started Daily apt download activities.
[  OK  ] Started Daily apt upgrade and clean activities.
[  OK  ] Started Daily dpkg database backup timer.
[  OK  ] Started Periodic ext4 Onli…ata Check for All Filesystems.
[  OK  ] Started Discard unused blocks once a week.
[  OK  ] Started Message of the Day.
[  OK  ] Started Daily Cleanup of Temporary Directories.
[  OK  ] Reached target Timer Units.
[  OK  ] Listening on cloud-init hotplug hook socket.
[  OK  ] Reached target Socket Units.
[  OK  ] Reached target Basic System.
         Starting Apply the settings specified in cloud-config...
         Starting containerd container runtime...
         Starting Remove Stale Onli…t4 Metadata Check Snapshots...
         Starting getty on tty2-tty…nd logind are not available...
[  OK  ] Started kubelet: The Kubernetes Node Agent.
         Starting Permit User Sessions...
[  OK  ] Finished Permit User Sessions.
[  OK  ] Started Getty on tty1.
[  OK  ] Started Getty on tty2.
[  OK  ] Started Getty on tty3.
[  OK  ] Started Serial Getty on hvc0.
[  OK  ] Started Serial Getty on ttyS0.
[  OK  ] Started Getty on tty4.
[  OK  ] Started Getty on tty5.
[  OK  ] Started Getty on tty6.
[  OK  ] Finished getty on tty2-tty… and logind are not available.
[  OK  ] Reached target Login Prompts.
[  OK  ] Finished Remove Stale Onli…ext4 Metadata Check Snapshots.
[  OK  ] Started containerd container runtime.
[  OK  ] Reached target Multi-User System.
[  OK  ] Reached target Graphical Interface.
         Starting Record Runlevel Change in UTMP...
[  OK  ] Finished Record Runlevel Change in UTMP.
[    3.000353] cloud-init[1034]: Cloud-init v. 22.3.4-0ubuntu1~22.04.1 running 'modules:config' at Thu, 30 Mar 2023 15:23:28 +0000. Up 2.93 seconds.
[    3.024609] cloud-init[1034]: 2023-03-30 15:23:29,006 - util.py[WARNING]: Running module locale (<module 'cloudinit.config.cc_locale' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_locale.py'>) failed
[    3.029599] cloud-init[1034]: 2023-03-30 15:23:29,011 - cc_set_passwords.py[WARNING]: Ignoring config 'ssh_pwauth: None'. SSH service 'ssh' is not installed.
[FAILED] Failed to start Apply the …ngs specified in cloud-config.
See 'systemctl status cloud-config.service' for details.
         Starting Execute cloud user/final scripts...
[    3.401233] cloud-init[1047]: Cloud-init v. 22.3.4-0ubuntu1~22.04.1 running 'modules:final' at Thu, 30 Mar 2023 15:23:29 +0000. Up 3.33 seconds.
[    4.627768] cloud-init[1047]: unpacking k8s.gcr.io/kube-apiserver:v1.24.0 (sha256:47f63d1702ac7131bb8e3082efca43f9428ffa76111a32a34e03ebcdc4d09370)...done
[    5.106037] cloud-init[1047]: unpacking k8s.gcr.io/kube-scheduler:v1.24.0 (sha256:f58576d6d4d3b88ecce85c2a4dbb3005ca72f96f216972b40a58763d14b8a358)...done
[    6.348454] cloud-init[1047]: unpacking k8s.gcr.io/kube-proxy:v1.24.0 (sha256:24db41f6978dc574515d67d4e6a4c7212d76af196beccfad2644c0411121113f)...done
[    6.384159] cloud-init[1047]: unpacking k8s.gcr.io/pause:3.7 (sha256:6896172455d5812d4eb0fef2d7145cf3e276dc88f189e4202b91ab12a532ef03)...done
[    6.847240] cloud-init[1047]: unpacking k8s.gcr.io/coredns/coredns:v1.8.6 (sha256:27e5df8b07c92c7250dee82e62b67115b36d348016ccb9ab7b975f02429837e5)...done
[    7.864958] cloud-init[1047]: unpacking k8s.gcr.io/kube-controller-manager:v1.24.0 (sha256:0f3bd17b1acace3e8c706129776299493649f35983e06e770203ff83c1f49c96)...done

Ubuntu 22.04.1 LTS virtink-test-cp-mc9wc ttyS0

virtink-test-cp-mc9wc login: [   10.871422] cloud-init[1047]: unpacking k8s.gcr.io/etcd:3.5.3-0 (sha256:18a820ee8eb4478599d2ad08cf639ea67cfe5240e548b7445fc453cfc3d6b286)...done
[   10.932925] cloud-init[1047]: [init] Using Kubernetes version: v1.24.0
[   10.933309] cloud-init[1047]: [preflight] Running pre-flight checks
[   10.990930] cloud-init[1047]: [preflight] The system verification failed. Printing the output from the verification:
[   10.991509] cloud-init[1047]: KERNEL_VERSION: 5.15.12+
[   10.991899] cloud-init[1047]: OS: Linux
[   10.992211] cloud-init[1047]: CGROUPS_CPU: enabled
[   10.992561] cloud-init[1047]: CGROUPS_CPUSET: enabled
[   10.992917] cloud-init[1047]: CGROUPS_DEVICES: enabled
[   10.993276] cloud-init[1047]: CGROUPS_FREEZER: enabled
[   10.993634] cloud-init[1047]: CGROUPS_MEMORY: enabled
[   10.993991] cloud-init[1047]: CGROUPS_PIDS: enabled
[   10.994342] cloud-init[1047]: CGROUPS_HUGETLB: enabled
[   10.994665] cloud-init[1047]: CGROUPS_BLKIO: missing
[   10.994974] cloud-init[1047]:        [WARNING SystemVerification]: missing optional cgroups: blkio
[   10.995341] cloud-init[1047]:        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exec: "modprobe": executable file not found in $PATH
[   11.219146] cloud-init[1047]: [preflight] Pulling images required for setting up a Kubernetes cluster
[   11.219354] cloud-init[1047]: [preflight] This might take a minute or two, depending on the speed of your internet connection
[   11.219459] cloud-init[1047]: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[   11.325269] cloud-init[1047]: [certs] Using certificateDir folder "/etc/kubernetes/pki"
[   11.325631] cloud-init[1047]: [certs] Using existing ca certificate authority
[   11.420300] cloud-init[1047]: [certs] Generating "apiserver" certificate and key
[   11.420635] cloud-init[1047]: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local virtink-test-cp-mc9wc] and IPs [10.96.0.1 10.244.1.2 10.106.179.50]
[   11.486975] cloud-init[1047]: [certs] Generating "apiserver-kubelet-client" certificate and key
[   11.487379] cloud-init[1047]: [certs] Using existing front-proxy-ca certificate authority
[   11.542024] cloud-init[1047]: [certs] Generating "front-proxy-client" certificate and key
[   11.542411] cloud-init[1047]: [certs] Using existing etcd/ca certificate authority
[   11.605706] cloud-init[1047]: [certs] Generating "etcd/server" certificate and key
[   11.606017] cloud-init[1047]: [certs] etcd/server serving cert is signed for DNS names [localhost virtink-test-cp-mc9wc] and IPs [10.244.1.2 127.0.0.1 ::1]
[   11.775971] cloud-init[1047]: [certs] Generating "etcd/peer" certificate and key
[   11.776390] cloud-init[1047]: [certs] etcd/peer serving cert is signed for DNS names [localhost virtink-test-cp-mc9wc] and IPs [10.244.1.2 127.0.0.1 ::1]
[   11.841823] cloud-init[1047]: [certs] Generating "etcd/healthcheck-client" certificate and key
[   11.884938] cloud-init[1047]: [certs] Generating "apiserver-etcd-client" certificate and key
[   11.885368] cloud-init[1047]: [certs] Using the existing "sa" key
[   11.885695] cloud-init[1047]: [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[   11.958076] cloud-init[1047]: [kubeconfig] Writing "admin.conf" kubeconfig file
[   12.034904] cloud-init[1047]: [kubeconfig] Writing "kubelet.conf" kubeconfig file
[   12.162461] cloud-init[1047]: [kubeconfig] Writing "controller-manager.conf" kubeconfig file
[   12.289907] cloud-init[1047]: [kubeconfig] Writing "scheduler.conf" kubeconfig file
[   12.328829] cloud-init[1047]: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[   12.329552] cloud-init[1047]: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[   12.329976] cloud-init[1047]: [kubelet-start] Starting the kubelet
[  OK  ] Started kubelet: The Kubernetes Node Agent.
[   12.456347] cloud-init[1047]: [control-plane] Using manifest folder "/etc/kubernetes/manifests"
[   12.456744] cloud-init[1047]: [control-plane] Creating static Pod manifest for "kube-apiserver"
[   12.457089] cloud-init[1047]: [control-plane] Creating static Pod manifest for "kube-controller-manager"
[   12.457458] cloud-init[1047]: [control-plane] Creating static Pod manifest for "kube-scheduler"
[   12.457780] cloud-init[1047]: [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[   12.466826] cloud-init[1047]: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[  OK  ] Started Kubernetes systemd probe.
[  OK  ] Created slice libcontainer container kubepods.slice.
[  OK  ] Created slice libcontainer…iner kubepods-burstable.slice.
[  OK  ] Created slice libcontainer…ner kubepods-besteffort.slice.
[  OK  ] Created slice libcontainer…4410bcd896e5d3034677455.slice.
[  OK  ] Created slice libcontainer…b68eaeb8fadedd43d6932d5.slice.
[  OK  ] Created slice libcontainer…9ec9a6ae9cd7848eb4f5193.slice.
[  OK  ] Created slice libcontainer…4d5486b79d96b27276281a4.slice.
[  OK  ] Started libcontainer conta…35e7873304d89da2f2e6c3e1aaa62.
[  OK  ] Started libcontainer conta…fec731a7a8abced8ab9cef4e39c39.
[  OK  ] Started libcontainer conta…dd3504ba9e149b93405e22435e33f.
[  OK  ] Started libcontainer conta…637889142f699f4ef20c467dea707.
[  OK  ] Started libcontainer conta…dd2de555f1cea141fe653231a57f4.
[  OK  ] Started libcontainer conta…f0581cd2aaacf7af0254a85ae02c0.
[  OK  ] Started libcontainer conta…c64c33534ebccb3576751d22b559a.
[  OK  ] Started libcontainer conta…58b419cf43e57e24fc0f144eb30b1.
[   25.995245] cloud-init[1047]: [apiclient] All control plane components are healthy after 13.530032 seconds
[   25.995515] cloud-init[1047]: [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[   26.000372] cloud-init[1047]: [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[   26.511750] cloud-init[1047]: [upload-certs] Skipping phase. Please see --upload-certs
[   26.512030] cloud-init[1047]: [mark-control-plane] Marking the node virtink-test-cp-mc9wc as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[   26.512269] cloud-init[1047]: [mark-control-plane] Marking the node virtink-test-cp-mc9wc as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[   27.021362] cloud-init[1047]: [bootstrap-token] Using token: 22w9vz.a7jn2jbacbpjbo26
[   27.021654] cloud-init[1047]: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[   27.025211] cloud-init[1047]: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[   27.028555] cloud-init[1047]: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[   27.030430] cloud-init[1047]: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[   27.032115] cloud-init[1047]: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[   27.034138] cloud-init[1047]: [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[   27.041398] cloud-init[1047]: [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
         Stopping kubelet: The Kubernetes Node Agent...
[  OK  ] Stopped kubelet: The Kubernetes Node Agent.
[  OK  ] Started kubelet: The Kubernetes Node Agent.
[   27.278826] cloud-init[1047]: [addons] Applied essential addon: CoreDNS
[  OK  ] Started Kubernetes systemd probe.
[   27.428657] cloud-init[1047]: [addons] Applied essential addon: kube-proxy
[   27.429521] cloud-init[1047]: Your Kubernetes control-plane has initialized successfully!
[   27.430124] cloud-init[1047]: To start using your cluster, you need to run the following as a regular user:
[   27.430823] cloud-init[1047]:   mkdir -p $HOME/.kube
[   27.431415] cloud-init[1047]:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[   27.432042] cloud-init[1047]:   sudo chown $(id -u):$(id -g) $HOME/.kube/config
[   27.432462] cloud-init[1047]: Alternatively, if you are the root user, you can run:
[   27.432874] cloud-init[1047]:   export KUBECONFIG=/etc/kubernetes/admin.conf
[   27.433253] cloud-init[1047]: You should now deploy a pod network to the cluster.
[   27.433671] cloud-init[1047]: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
[   27.434162] cloud-init[1047]:   https://kubernetes.io/docs/concepts/cluster-administration/addons/
[   27.434624] cloud-init[1047]: You can now join any number of control-plane nodes by copying certificate authorities
[   27.435287] cloud-init[1047]: and service account keys on each node and then running the following as root:
[   27.435838] cloud-init[1047]:   kubeadm join 10.106.179.50:6443 --token 22w9vz.a7jn2jbacbpjbo26 \
[   27.436338] cloud-init[1047]:        --discovery-token-ca-cert-hash sha256:e5d383f08735e7f8ecaca3ce1cef2183e7000b920beab006f41ccb7da73a1092 \
[   27.436976] cloud-init[1047]:        --control-plane
[   27.437258] cloud-init[1047]: Then you can join any number of worker nodes by running the following on each as root:
[   27.437817] cloud-init[1047]: kubeadm join 10.106.179.50:6443 --token 22w9vz.a7jn2jbacbpjbo26 \
[   27.438298] cloud-init[1047]:        --discovery-token-ca-cert-hash sha256:e5d383f08735e7f8ecaca3ce1cef2183e7000b920beab006f41ccb7da73a1092
ci-info: no authorized SSH keys fingerprints found for user ubuntu.

[   27.454952] cloud-init[1047]: Cloud-init v. 22.3.4-0ubuntu1~22.04.1 finished at Thu, 30 Mar 2023 15:23:53 +0000. Datasource DataSourceNoCloud [seed=/dev/vdb][dsmode=net].  Up 27.45 seconds
[  OK  ] Finished Execute cloud user/final scripts.
[  OK  ] Reached target Cloud-init target.

Nothing else happens after the cloud-init script finishes.

CAPI logs

NAME                                                                READY  SEVERITY  REASON                                     SINCE  MESSAGE                                                                               
Cluster/virtink-test                                                False  Info      WaitingForKubeadmInit                      64m                                                                                           
│           ├─ControlPlaneInitialized                               False  Info      WaitingForControlPlaneProviderInitialized  65m    Waiting for control plane provider to indicate the control plane has been initialized  
│           ├─ControlPlaneReady                                     False  Info      WaitingForKubeadmInit                      64m                                                                                           
│           └─InfrastructureReady                                   True                                                        65m                                                                                           
├─ClusterInfrastructure - VirtinkCluster/virtink-test                                                                                                                                                                         
└─ControlPlane - KubeadmControlPlane/virtink-test-cp                False  Info      WaitingForKubeadmInit                      64m                                                                                           
  │           ├─Available                                           False  Info      WaitingForKubeadmInit                      65m                                                                                           
  │           ├─CertificatesAvailable                               True                                                        65m                                                                                           
  │           ├─MachinesCreated                                     True                                                        65m                                                                                           
  │           ├─MachinesReady                                       True                                                        64m                                                                                           
  │           └─Resized                                             True                                                        64m                                                                                           
  └─Machine/virtink-test-cp-kqgpd                                   True                                                        64m                                                                                           
    │           ├─BootstrapReady                                    True                                                        65m                                                                                           
    │           ├─InfrastructureReady                               True                                                        64m                                                                                           
    │           └─NodeHealthy                                       False  Info      WaitingForNodeRef                          65m                                                                                           
    └─MachineInfrastructure - VirtinkMachine/virtink-test-cp-mc9wc                                                                                                                                                            

Virtink controller logs

2023-03-30T15:22:36.996Z        DEBUG   events  Normal  {"object": {"kind":"VirtinkMachine","namespace":"default","name":"virtink-test-cp-bn6kt","uid":"58edd4bd-5fd1-45b2-9e33-e159802d4986","apiVersion":"infrastructure.cluster.x-k8s.io/v1beta1","resourceVersion":"3798"}, "reason": "DeletedVM", "message": "Deleted VM \"virtink-test-cp-bn6kt\""}
2023-03-30T15:22:37.809Z        DEBUG   events  Normal  {"object": {"kind":"VirtinkCluster","namespace":"default","name":"virtink-test","uid":"ab1131a9-ce13-4882-a36d-f306b91bca9d","apiVersion":"infrastructure.cluster.x-k8s.io/v1beta1","resourceVersion":"3825"}, "reason": "DeletedControlPlaneService", "message": "Deleted control plane Service \"virtink-test\""}
2023-03-30T15:23:14.989Z        DEBUG   events  Normal  {"object": {"kind":"VirtinkCluster","namespace":"default","name":"virtink-test","uid":"80f340c1-f786-40a1-aab1-004d2156e1e0","apiVersion":"infrastructure.cluster.x-k8s.io/v1beta1","resourceVersion":"3956"}, "reason": "CreatedControlPlaneService", "message": "Created control plane Service \"virtink-test\""}
2023-03-30T15:23:15.733Z        INFO    owner Machine is nil    {"controller": "virtinkmachine", "controllerGroup": "infrastructure.cluster.x-k8s.io", "controllerKind": "VirtinkMachine", "virtinkMachine": {"name":"virtink-test-cp-mc9wc","namespace":"default"}, "namespace": "default", "name": "virtink-test-cp-mc9wc", "reconcileID": "ee5d42be-a6d9-4835-a0e8-a0ed99cabb2f"}
2023-03-30T15:23:16.039Z        INFO    bootstrap data is nil   {"controller": "virtinkmachine", "controllerGroup": "infrastructure.cluster.x-k8s.io", "controllerKind": "VirtinkMachine", "virtinkMachine": {"name":"virtink-test-cp-mc9wc","namespace":"default"}, "namespace": "default", "name": "virtink-test-cp-mc9wc", "reconcileID": "09fa259e-6abd-4a78-a039-7f63a0327d5c"}
2023-03-30T15:23:19.634Z        DEBUG   events  Normal  {"object": {"kind":"VirtinkMachine","namespace":"default","name":"virtink-test-cp-mc9wc","uid":"af0263c2-1c46-4711-a1c1-056ad31f0e5d","apiVersion":"infrastructure.cluster.x-k8s.io/v1beta1","resourceVersion":"4028"}, "reason": "CreatedVM", "message": "Created VM \"virtink-test-cp-mc9wc\""}

What I expect to happen

A Virtink cluster to be created.

Env

cluster-api-provider-virtink version: latest
kubernetes version:

fengye87 commented 1 year ago

Hi @Stringls, from the log above I believe the CP node has come up and initialized successfully. Could you confirm whether you can access the CP service of the external cluster from your mgmt cluster? If so, sharing the logs of the CAPI controllers would help diagnose the problem.
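A quick way to check this from the management cluster (a sketch, assuming clusterctl is installed and the cluster is named virtink-test as in the logs above):

# Fetch the workload cluster kubeconfig and try reaching the CP API server:
clusterctl get kubeconfig virtink-test > virtink-test.kubeconfig
kubectl --kubeconfig virtink-test.kubeconfig get nodes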

carezkh commented 1 year ago

@Stringls Thanks for reporting! May I ask you to confirm the CP service type? Currently a LoadBalancer service is required in an external Virtink cluster; you can refer to this document for more details: https://github.com/smartxworks/cluster-api-provider-virtink/blob/main/docs/external-cluster.md

Stringls commented 1 year ago

@fengye87 @carezkh Thanks for the quick response. I use ClusterIP, as I described in the issue I opened earlier, and specify a controlPlaneEndpoint that is the IPv4 address of the LB acting as the kube-apiserver endpoint; but as I understand it, I have to use LoadBalancer. When I use the LoadBalancer service type, it creates a new service in the external cluster.

The problem is: I use Hetzner cloud, and when I create a LoadBalancer service it tries to create an LB in Hetzner cloud, but that requires configuration I have to put in the annotations: section of the service; as I understand it, I cannot specify any annotations in the VirtinkCluster CRD for controlPlaneServiceType. On the other hand, I already have an existing LoadBalancer service with an external IP, but it does not point at the API server.

So right now I have only one service with an external IP:

NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP                            PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.110.119.214   **************   80:32190/TCP,443:30871/TCP   19h
ingress-nginx-controller-admission   ClusterIP      10.102.136.251   <none>                                 443/TCP                      19h

carezkh commented 1 year ago

@Stringls I want to make sure I understand this correctly: the LB service created by the VirtinkCluster controller will not be assigned an external IP, because it lacks the annotations required by Hetzner cloud.

Currently you cannot specify annotations on the VirtinkCluster CP service; support for that is on the roadmap. As a workaround, could you try appending the annotations to the LB service manually?
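For reference, a minimal sketch of that manual workaround, assuming the controller-created CP Service is named after the cluster (the controller logs above show Service "virtink-test" in the default namespace) and using the Hetzner annotation keys that appear later in this thread:

# Hypothetical example: annotate the generated CP Service so the Hetzner
# cloud controller manager can provision the load balancer.
kubectl annotate service virtink-test -n default \
  load-balancer.hetzner.cloud/location=fsn1 \
  load-balancer.hetzner.cloud/use-private-ip=false \
  load-balancer.hetzner.cloud/ipv6-disabled=true \
  load-balancer.hetzner.cloud/disable-private-ingress=true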

Stringls commented 1 year ago

@carezkh I have an update on the issue. I've added the field controlPlaneServiceType: LoadBalancer. It deploys the service on the external cluster, but I get an error about the required annotations (I can send it later).

When I manually add the annotations below to the service, everything is set up properly:

        annotations:
          load-balancer.hetzner.cloud/location: fsn1
          load-balancer.hetzner.cloud/use-private-ip: "false"
          load-balancer.hetzner.cloud/ipv6-disabled: "true"
          load-balancer.hetzner.cloud/disable-private-ingress: "true"

The LB in the Hetzner cloud is up and running, the MD is set as a target server, and the virtink cluster is initialized.

It's not related to this issue, but when I tried to install Calico on it, the cluster was unable to pull an image from the public registry registry1.docker.io. The error is about DNS name resolution. I don't have the logs right now, but I hope you get the problem.

carezkh commented 1 year ago

@Stringls Are you using the default Virtink VM rootfs image smartxworks/capch-rootfs-1.24.0?

Stringls commented 1 year ago

@carezkh Yes, I use smartxworks/capch-rootfs-1.24.0 and smartxworks/capch-kernel-5.15.12 for both CP and MD.

carezkh commented 1 year ago

@Stringls Sorry for the late reply! May I ask you to confirm that the pod subnet and service subnet of the nested cluster don't overlap with the host cluster's pod subnet, service subnet, or physical subnet?

The host cluster here refers to the external Virtink cluster; if the service subnet of the nested cluster overlaps with that of the host cluster, DNS may not function in the nested cluster.

[update] Actually, we have a handy tool, knest, to build nested K8S clusters based on Virtink and the Cluster API provider. Please refer to the project for more usage guides and known issues, and feel free to give us feedback by opening issues.

Stringls commented 1 year ago

@carezkh Hi! That was actually the problem. Basically it's set up like this. I guess I got this overlapping problem because I deploy a CAPI workload cluster onto a CAPI workload cluster.

spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12

I set services.cidrBlocks to 10.98.0.0/16 :) . Could you please give any tips on which CIDR block is better to use for the services subnet?

One small thing that's probably not related to this issue: when I delete a virtink cluster on a host cluster from the mgmt cluster, the virtink cluster cannot be deleted completely; it's just stuck in a Deleting loop, and I believe this happens because the LB service is being deleted simultaneously with the Pods. So is it possible to set a priority for deleting the resources of a virtink cluster?

To summarize, I've got a working cluster, but I need to manually:

Currently, you cannot specify any annotations on the VirtinkCluster CP service; support for that is on the roadmap!

May I ask when annotations support in the VirtinkCluster CP service will be added? Could you please share how you deploy an LB service on a host cluster? I would be glad to contribute to speed it up. Thanks!

carezkh commented 1 year ago

@Stringls There are three private IPv4 ranges: 10.0.0.0/8 (class A), 172.16.0.0/12 (class B), and 192.168.0.0/16 (class C). You can choose any of them, or a subnet within one, for the pod/service subnets; just avoid overlaps.
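Purely as an illustration (the values below are arbitrary; the only requirement is that neither range overlaps the host cluster's pod, service, or physical subnets):

spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 172.16.0.0/16   # nested pod subnet, disjoint from the host's ranges
    services:
      cidrBlocks:
      - 172.17.0.0/16   # nested service subnet, also disjoint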

How did you delete the Virtink cluster, with the command kubectl delete -f cluster-template-xx.yaml? If so, the Virtink cluster cannot be deleted successfully. Currently, you should delete the cluster.v1beta1.cluster.x-k8s.io (not the Virtink cluster) first, using the command kubectl delete cluster.v1beta1.cluster.x-k8s.io <cluster-name>, and leave the cluster-related resources for the controllers to delete.

To support annotations on the VirtinkCluster CP service, you would need to add a field to VirtinkClusterSpec and update the function buildControlPlaneService; some manifests and documents would need to be updated at the same time.

[update]: You can try the commit in https://github.com/smartxworks/cluster-api-provider-virtink/pull/35 and use the command skaffold run to deploy the controllers on your management cluster; refer to the document on installing skaffold to install this tool.
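Once that lands, usage might look roughly like this (controlPlaneServiceAnnotations is a hypothetical field name here, not the confirmed API; check the PR for the actual shape):

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VirtinkCluster
metadata:
  name: virtink-test   # illustrative name
spec:
  controlPlaneServiceType: LoadBalancer
  controlPlaneServiceAnnotations:   # hypothetical field name
    load-balancer.hetzner.cloud/location: fsn1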

Stringls commented 1 year ago

@carezkh Thanks! I was deleting it with the command kubectl delete -f cluster-template.yaml. It works if I use the kubectl delete cluster <cluster-name> command.

One more question, if you don't mind: is it possible to expose a virtink cluster to the internet? I'd like to do this with Ingress NGINX.

carezkh commented 1 year ago

@Stringls Do you mean exposing the nested Virtink cluster's CP Service to the outside world by using an Ingress instead of a LoadBalancer or NodePort Service?

Currently, there is no support for using an Ingress as the CP Service, but we will consider implementing it. Any patches from the community will be appreciated!

Stringls commented 1 year ago

@carezkh Sorry, my question wasn't clear enough.

NAMESPACE       NAME                                 TYPE           CLUSTER-IP        EXTERNAL-IP   PORT(S)                      AGE
default         kubernetes                           ClusterIP      192.168.0.1       <none>        443/TCP                      16h
ingress-nginx   ingress-nginx-controller             LoadBalancer   192.168.192.250   <pending>     80:30680/TCP,443:31654/TCP   16h
ingress-nginx   ingress-nginx-controller-admission   ClusterIP      192.168.198.209   <none>        443/TCP                      16h
kube-system     kube-dns                             ClusterIP      192.168.0.10      <none>        53/UDP,53/TCP,9153/TCP       16h

When I try to deploy the Ingress NGINX Helm chart onto the virtink cluster, the LB service stays in the pending state, which is okay I guess. So the question is:

^ That's actually my main point of using virtink clusters :)

carezkh commented 1 year ago

@Stringls It seems there is no LoadBalancer Service controller in your nested K8S cluster (the cluster you created on the host Virtink cluster via the Cluster API provider; there were some mistakes in the previous description), so the LoadBalancer Service in your nested K8S cluster will never be assigned an external IP.

You can try MetalLB as the LB Service controller in your nested K8S cluster, but I don't recommend using a LoadBalancer ingress-nginx-controller Service here! If the host Virtink cluster does not use a bridge-mode CNI, the external IP assigned to the ingress-nginx-controller Service in your nested K8S cluster may not be accessible in the host Virtink cluster. Bridge-mode CNIs here refer to Kube-OVN, Everoute, etc., not Calico (which works via the BGP protocol). And if you cannot access the nested K8S cluster's LoadBalancer ingress-nginx-controller Service from your host Virtink cluster, you cannot proxy it to the outside world.

Instead, it's recommended to use a NodePort ingress-nginx-controller Service in your nested K8S cluster; refer to bare-metal-clusters for more details. Then you can access this ingress-nginx-controller Service in the host Virtink cluster through <node-ip>:<node-port>, where node-ip is the IP of one of the nested K8S cluster's nodes; the node is our Virtink VM, and the VM's IP comes from the VM pod running in your host Virtink cluster.

Now you can proxy the ingress-nginx-controller Service in the nested K8S cluster to the outside world by using a LoadBalancer Service in your host Virtink cluster, with a selector matching the VM pods' labels and targetPort set to the <node-port> above, as sketched below.
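A sketch of such a proxy Service in the host Virtink cluster (the name, cluster-name label value, and port numbers are illustrative; the NodePort is the example value from the service listing above):

apiVersion: v1
kind: Service
metadata:
  name: nested-ingress-proxy   # illustrative name
spec:
  type: LoadBalancer
  selector:
    cluster.x-k8s.io/cluster-name: virtink-test   # label carried by the VM pods; illustrative value
  ports:
  - name: http
    port: 80
    targetPort: 30680   # the nested cluster's ingress-nginx HTTP NodePort (example value)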

Stringls commented 1 year ago

@carezkh I apologize for the late reply, and thank you for your time. I took a look at your suggestion and tried to implement it. The proxy seems to be working, and I can access apps by referring to their NodePort as the targetPort in the LoadBalancer that is in the host Virtink cluster.

Is there a way to automate the above process: checking what the NodePort is, checking which VM pod the application has been deployed on, and checking the IP of that pod? I'm wondering how to implement all of the above in a CI pipeline :)

Update: could I use kube proxy here?

carezkh commented 1 year ago

@Stringls Maybe you can achieve it with a script; some tips are:
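For instance, a rough sketch of such a script (the kubeconfig paths, names, and label value are illustrative assumptions, not the thread's actual values):

#!/bin/sh
# 1. Read the ingress controller's HTTP NodePort from the nested cluster
NODE_PORT=$(kubectl --kubeconfig nested.kubeconfig -n ingress-nginx \
  get svc ingress-nginx-controller \
  -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}')
# 2. List the VM pod IPs in the host cluster (one per nested node)
VM_POD_IPS=$(kubectl --kubeconfig host.kubeconfig \
  get pods -l cluster.x-k8s.io/cluster-name=virtink-test \
  -o jsonpath='{.items[*].status.podIP}')
echo "nested ingress reachable on port ${NODE_PORT} of: ${VM_POD_IPS}"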

Does the kube proxy here refer to the command kubectl proxy or the component kube-proxy?

Stringls commented 1 year ago

@carezkh I'm so sorry for the late answer; I'm not working on it right now :( I meant kubectl proxy. If it's possible to avoid creating a new LB for exposing the apps I deploy onto the Virtink cluster, that would be great.

create LB Service with selector cluster.x-k8s.io/cluster-name: in host Virtink cluster. The cluster-name here is the name of cluster.v1beta1.cluster.x-k8s.io that you created, and the LB Service will choose VM Pods as its endpoints.

Perhaps I'm doing something wrong, but if I set this selector it doesn't work stably. I believe it's because cluster.x-k8s.io/cluster-name: is present on all machines and the LB does round-robin, while my application is deployed on a specific VM pod, so requests time out. I tested it by logging into a dnsutils pod created in the same namespace as the Virtink pods and curling <node-ip>:<node-port>. When I deploy an app to one node and continuously curl that node:port, everything is okay. That's why I used a selector targeting one of the nodes.

carezkh commented 1 year ago

@Stringls Of course you can use the command kubectl proxy to proxy traffic of a specific VM node to the outside world, rather than using a LoadBalancer Service in the host Virtink cluster.
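A sketch using the built-in apiserver proxy (the pod name and port are placeholders; the port is the example NodePort from the listing above):

# Start the API-server proxy against the host cluster (serves on 127.0.0.1:8001 by default)
kubectl proxy &
# Reach a port on a specific VM pod through the apiserver's pod proxy subresource
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods/<vm-pod-name>:30680/proxy/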

When I deploy an app to one node and continuously curl that node:port, everything is okay.

It looks like you configured the externalTrafficPolicy or internalTrafficPolicy field of the NodePort Service in the nested K8S cluster to Local?
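For context, with Local, a node answers on its NodePort only when it hosts a ready endpoint of the Service, which would produce exactly the per-node behavior described above. A minimal sketch (fields abbreviated):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local   # traffic is only served by nodes running a ready endpoint
  # (selector and ports omitted for brevity)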

Stringls commented 1 year ago

@carezkh Cool, thanks!

It looks like you configured the externalTrafficPolicy or internalTrafficPolicy field of the NodePort Service in the nested K8S cluster to Local?

I cannot be sure that we've used Local for externalTrafficPolicy, but when we fixed the issue with networking it seems to be fine. I'm gonna close the issue. Thank you @fengye87 @carezkh so much for help and what you are doing, that's amazing project!