siderolabs / sidero

Sidero Metal is a bare metal provisioning system with support for Kubernetes Cluster API.
https://www.sidero.dev
Mozilla Public License 2.0

x509: certificate signed by unknown authority #1033

Closed: japtain-cack closed this issue 1 year ago

japtain-cack commented 1 year ago

I've been having a really hard time getting Talos/Kubernetes to run properly. This is my first time using PXE boot, so most of my earlier trouble was my own fault, but I'm stumped on this one. I have Proxmox VMs, ESXi VMs, and Lenovo hardware that I'm trying to get ingested into this system. I decided to start with the Proxmox VMs, since I figured they were fairly standard; the serial console output is below.

The issue appears to be an untrusted certificate authority, an invalid certificate, or some other PKI problem at the very least. I'm simply following the getting-started guide, and I've also walked through the bootstrap guide. I read in another issue here that adding my endpoint DNS record to machine.certSANs might fix it, but since I entered that record as my endpoint during the bootstrap process, it was already there. The x509 errors continue until a timeout is hit, and then the machine reboots.

Cluster setup

talosctl cluster create --kubernetes-version 1.26.1 --talos-version v1.3.2 --nameservers=10.100.1.1,10.100.50.100 --name sidero-demo -p 69:69/udp,8081:8081/tcp,51821:51821/udp --workers 0 --endpoint talos.mimir-tech.org
kubectl taint node sidero-demo-controlplane-1 node-role.kubernetes.io/control-plane:NoSchedule-
export SIDERO_CONTROLLER_MANAGER_HOST_NETWORK=true
export SIDERO_CONTROLLER_MANAGER_API_ENDPOINT=talos.mimir-tech.org
export SIDERO_CONTROLLER_MANAGER_SIDEROLINK_ENDPOINT=talos.mimir-tech.org

clusterctl init -b talos -c talos -i sidero
export CONTROL_PLANE_SERVERCLASS=masters
export WORKER_SERVERCLASS=workers
export TALOS_VERSION=v1.3.2
export KUBERNETES_VERSION=v1.26.1
export CONTROL_PLANE_PORT=6443
export CONTROL_PLANE_ENDPOINT=talos.mimir-tech.org

clusterctl generate cluster cp01 -i sidero > cp01.yaml
kubectl get talosconfig -n sidero-system -l cluster.x-k8s.io/cluster-name=cp01 -o yaml -o jsonpath='{.items[0].status.talosConfig}' > cp01-talosconfig.yaml
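A quick way to see which CA actually signs the certificate served at the endpoint is to inspect it with openssl. This is my own diagnostic sketch, not a step from the guides; it assumes `openssl` is installed and degrades gracefully when the host is unreachable:

```shell
# Print issuer/subject of the certificate served at the control plane endpoint.
# The hostname is the shared endpoint used in the commands above.
endpoint=talos.mimir-tech.org:6443
if cert=$(openssl s_client -connect "$endpoint" \
      -servername "${endpoint%%:*}" </dev/null 2>/dev/null); then
  # s_client emits the PEM chain on stdout; x509 parses the first certificate.
  printf '%s\n' "$cert" | openssl x509 -noout -issuer -subject
else
  echo "endpoint unreachable: $endpoint"
fi
```

If the issuer shown here belongs to the management cluster rather than the workload cluster, the endpoint name is resolving to the wrong API server.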

Serial output from a Proxmox EFI-enabled VM:

[    1.219793] ------------[ cut here ]------------
[    1.220593] x86/mm: Found insecure W+X mapping at address 0xffffffffff620000
[    1.221611] WARNING: CPU: 1 PID: 1 at arch/x86/mm/dump_pagetables.c:246 note_page+0x642/0x6b0
[    1.222737] Modules linked in:
[    1.223446] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 5.15.83-talos #1
[    1.224398] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
[    1.225491] RIP: 0010:note_page+0x642/0x6b0
[    1.226305] Code: 85 49 ff ff ff e9 8d fe ff ff 80 3d d7 63 87 03 00 0f 85 6e fa ff ff 48 c7 c7 68 65 54 aa c6 05 c3 63 87 03 01 e8 fc a5 9b 01 <0f> 0b e9 54 fa ff ff 48 c7 c6 bf 66 54 aa 4c 89 ff e8 78 77 2a 00
[    1.228701] RSP: 0000:ffffbe9bc001fce8 EFLAGS: 00010282
[    1.229600] RAX: 0000000000000000 RBX: ffffbe9bc001fea0 RCX: ffffffffab563048
[    1.230643] RDX: 0000000000000000 RSI: 00000000ffffdfff RDI: ffffffffab483000
[    1.231678] RBP: 0000000000000006 R08: 0000000000000000 R09: ffffbe9bc001fb20
[    1.232710] R10: ffffbe9bc001fb18 R11: 0000000000000003 R12: 0000000000000004
[    1.233739] R13: ffffffffff621000 R14: 0000000000000616 R15: 0000000000000000
[    1.234750] FS:  0000000000000000(0000) GS:ffff9fb1ff700000(0000) knlGS:0000000000000000
[    1.235819] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    1.236712] CR2: 0000000000000000 CR3: 000000007ce14000 CR4: 00000000000006e0
[    1.237749] Call Trace:
[    1.238455]  <TASK>
[    1.239127]  ? sysvec_apic_timer_interrupt+0xa/0x90
[    1.240011]  ptdump_pte_entry+0x57/0x70
[    1.240811]  walk_pgd_range+0x46e/0x6c0
[    1.241673]  walk_page_range_novma+0x5e/0x90
[    1.242495]  ptdump_walk_pgd+0x42/0xb0
[    1.243341]  ptdump_walk_pgd_level_core+0xc6/0xf0
[    1.244255]  ? ptdump_walk_pgd_level_debugfs+0x40/0x40
[    1.245152]  ? hugetlb_get_unmapped_area+0x2e0/0x2e0
[    1.245987]  ? rest_init+0xc0/0xc0
[    1.246712]  ? rest_init+0xc0/0xc0
[    1.247430]  kernel_init+0x3d/0x120
[    1.248148]  ret_from_fork+0x22/0x30
[    1.248870]  </TASK>
[    1.249479] ---[ end trace 6d183cb346bb1ae2 ]---
[    1.250296] x86/mm: Checked W+X mappings: FAILED, 2 W+X pages found.
[    1.251221] x86/mm: Checking user space page tables
[    1.252179] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[    1.253105] Run /init as init process
[    1.471046] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
[    3.198155] random: crng init done
[    3.202384] [talos] [initramfs] booting Talos v1.3.0
[    3.203278] [talos] [initramfs] mounting the rootfs
[    3.204209] loop0: detected capacity change from 0 to 95616
[    3.238959] [talos] [initramfs] bind mounting /lib/firmware
[    3.240660] [talos] [initramfs] entering the rootfs
[    3.241585] [talos] [initramfs] moving mounts to the new rootfs
[    3.242560] [talos] [initramfs] changing working directory into /root
[    3.243480] [talos] [initramfs] moving /root to /
[    3.244266] [talos] [initramfs] changing root directory
[    3.245078] [talos] [initramfs] cleaning up initramfs
[    3.246032] [talos] [initramfs] executing /sbin/init
[    5.795150] [talos] task setupLogger (1/1): done, 499.034µs
[    5.796139] [talos] phase logger (1/7): done, 1.876442ms
[    5.797092] [talos] phase systemRequirements (2/7): 7 tasks(s)
[    5.798147] [talos] task dropCapabilities (7/7): starting
[    5.811052] [talos] task enforceKSPPRequirements (1/7): starting
[    5.819795] [talos] task setupSystemDirectory (2/7): starting
[    5.820837] [talos] task setupSystemDirectory (2/7): done, 8.74232ms
[    5.821928] [talos] task mountBPFFS (3/7): starting
[    5.822866] [talos] task mountCgroups (4/7): starting
[    5.823803] [talos] task mountPseudoFilesystems (5/7): starting
[    5.824829] [talos] task setRLimit (6/7): starting
[    5.825733] [talos] task dropCapabilities (7/7): done, 21.574363ms
[    5.826970] [talos] task mountPseudoFilesystems (5/7): done, 14.837062ms
[    5.828091] [talos] task setRLimit (6/7): done, 15.952245ms
[    5.829283] [talos] task mountCgroups (4/7): done, 17.154505ms
[    5.830313] [talos] task mountBPFFS (3/7): done, 18.191363ms
[    5.844972] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["1.1.1.1", "8.8.8.8"]}
[    5.849125] 8021q: adding VLAN 0 to HW filter on device eth0
[    5.851168] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["pool.ntp.org"]}
[    5.853357] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["1.1.1.1", "8.8.8.8"]}
[    5.856055] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["pool.ntp.org"]}
[    5.858843] [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable"}
[    5.862524] [talos] task enforceKSPPRequirements (1/7): done, 51.484031ms
[    5.863541] [talos] phase systemRequirements (2/7): done, 66.449326ms
[    5.864509] [talos] phase integrity (3/7): 1 tasks(s)
[    5.865406] [talos] task writeIMAPolicy (1/1): starting
[    5.866358] audit: type=1807 audit(1674503912.493:2): action=dont_measure fsmagic=0x9fa0 res=1
[    5.867543] audit: type=1807 audit(1674503912.493:3): action=dont_measure fsmagic=0x62656572 res=1
[    5.868740] audit: type=1807 audit(1674503912.493:4): action=dont_measure fsmagic=0x64626720 res=1
[    5.869932] audit: type=1807 audit(1674503912.493:5): action=dont_measure fsmagic=0x1021994 res=1
[    5.871234] ima: policy update completed
[    5.872046] audit: type=1807 audit(1674503912.497:6): action=dont_measure fsmagic=0x1cd1 res=1
[    5.873256] audit: type=1807 audit(1674503912.497:7): action=dont_measure fsmagic=0x42494e4d res=1
[    5.874505] audit: type=1807 audit(1674503912.497:8): action=dont_measure fsmagic=0x73636673 res=1
[    5.875697] audit: type=1807 audit(1674503912.497:9): action=dont_measure fsmagic=0xf97cff8c res=1
[    5.876887] audit: type=1807 audit(1674503912.497:10): action=dont_measure fsmagic=0x43415d53 res=1
[    5.878077] audit: type=1807 audit(1674503912.497:11): action=dont_measure fsmagic=0x27e0eb res=1
[    5.882490] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["10.100.1.1", "10.100.50.100"]}
[    5.885727] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["10.100.1.1"]}
[    5.888672] [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "talos-master-01", "domainname": "mimir-tech.org\u0000"}
[    5.891619] [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "talos-master-01", "domainname": "mimir-tech.org\u0000"}
[    5.894681] [talos] assigned address {"component": "controller-runtime", "controller": "network.AddressSpecController", "address": "10.100.50.111/24", "link": "eth0"}
[    5.897108] [talos] created route {"component": "controller-runtime", "controller": "network.RouteSpecController", "destination": "default", "gateway": "10.100.50.1", "table": "main", "link": "eth0"}
[    5.899802] [talos] controller failed {"component": "controller-runtime", "controller": "runtime.KmsgLogDeliveryController", "error": "error sending logs: dial tcp [fd7d:d264:2f4a:d503::1]:4001: connect: network is unreachable"}
[    5.902696] [talos] task writeIMAPolicy (1/1): done, 26.185048ms
[    5.903886] [talos] adjusting time (slew) by 97.051757ms via 10.100.1.1, state TIME_OK, status STA_NANO | STA_PLL {"component": "controller-runtime", "controller": "time.SyncController"}
[    5.906696] [talos] controller failed {"component": "controller-runtime", "controller": "siderolink.ManagerController", "error": "error accessing SideroLink API: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup talos.mimir-tech.org on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable\""}
[    5.911467] [talos] controller failed {"component": "controller-runtime", "controller": "v1alpha1.EventsSinkController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp [fd7d:d264:2f4a:d503::1]:4002: connect: network is unreachable\""}
[    5.915875] [talos] phase integrity (3/7): done, 40.030335ms
[    5.917063] [talos] phase etc (4/7): 2 tasks(s)
[    5.918196] [talos] task createOSReleaseFile (2/2): starting
[    5.919419] [talos] task CreateSystemCgroups (1/2): starting
[    5.920831] [talos] task createOSReleaseFile (2/2): done, 1.425268ms
[    5.922757] [talos] task CreateSystemCgroups (1/2): done, 4.563844ms
[    5.924175] [talos] phase etc (4/7): done, 7.113873ms
[    5.925485] [talos] phase mountSystem (5/7): 1 tasks(s)
[    5.926846] [talos] task mountStatePartition (1/1): starting
[    5.948255] XFS (sda5): Mounting V5 Filesystem
[    5.954228] XFS (sda5): Ending clean mount
[    5.956383] [talos] task mountStatePartition (1/1): done, 29.538968ms
[    5.957938] [talos] phase mountSystem (5/7): done, 32.452464ms
[    5.960121] [talos] phase config (6/7): 1 tasks(s)
[    5.961354] [talos] node identity established {"component": "controller-runtime", "controller": "cluster.NodeIdentityController", "node_id": "xk7fnOGEnBhjtPl1uBkg7PjpqRAeYIU74KMDeFLfBeeP"}
[    5.964521] [talos] task loadConfig (1/1): starting
[    5.967074] [talos] task loadConfig (1/1): persistence is enabled, using existing config on disk
[    5.968632] [talos] task loadConfig (1/1): done, 7.274634ms
[    5.969840] [talos] phase config (6/7): done, 9.719974ms
[    5.971379] [talos] phase unmountSystem (7/7): 1 tasks(s)
[    5.972866] [talos] task unmountStatePartition (1/1): starting
[    5.990087] XFS (sda5): Unmounting Filesystem
[    6.010324] [talos] task unmountStatePartition (1/1): done, 37.466001ms
[    6.011765] [talos] phase unmountSystem (7/7): done, 40.390462ms
[    6.013137] [talos] initialize sequence: done: 219.27712ms
[    6.014482] [talos] install sequence: 0 phase(s)
[    6.015682] [talos] install sequence: done: 1.198817ms
[    6.016965] [talos] boot sequence: 22 phase(s)
[    6.018089] [talos] phase saveStateEncryptionConfig (1/22): 1 tasks(s)
[    6.019455] [talos] task SaveStateEncryptionConfig (1/1): starting
[    6.020722] [talos] task SaveStateEncryptionConfig (1/1): done, 1.266291ms
[    6.022098] [talos] phase saveStateEncryptionConfig (1/22): done, 4.008409ms
[    6.023496] [talos] phase mountState (2/22): 1 tasks(s)
[    6.024648] [talos] task mountStatePartition (1/1): starting
[    6.030772] [talos] service[machined](Preparing): Running pre state
[    6.033067] [talos] service[machined](Preparing): Creating service runner
[    6.035082] [talos] service[apid](Waiting): Waiting for service "containerd" to be "up", api certificates
[    6.036879] [talos] service[machined](Running): Service started as goroutine
[    6.039095] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\" while trying to verify candidate authority certificate \"kubernetes\")"}
[    6.046140] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://talos.mimir-tech.org:6443/api/v1/nodes/talos-master-01?timeout=30s\": x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\" while trying to verify candidate authority certificate \"kubernetes\")"}
[    6.055681] XFS (sda5): Mounting V5 Filesystem
[    6.062170] XFS (sda5): Ending clean mount
[    6.063871] [talos] task mountStatePartition (1/1): done, 39.220819ms
[    6.064923] [talos] phase mountState (2/22): done, 41.427229ms
[    6.065916] [talos] phase validateConfig (3/22): 1 tasks(s)
[    6.066918] [talos] task validateConfig (1/1): starting
[    6.067920] [talos] task validateConfig (1/1): done, 1.002614ms
[    6.068917] [talos] phase validateConfig (3/22): done, 3.002471ms
[    6.069936] [talos] phase saveConfig (4/22): 1 tasks(s)
[    6.070907] [talos] task saveConfig (1/1): starting
[    6.072275] [talos] task saveConfig (1/1): done, 1.366816ms
[    6.073243] [talos] phase saveConfig (4/22): done, 3.306964ms
[    6.074254] [talos] phase memorySizeCheck (5/22): 1 tasks(s)
[    6.075209] [talos] task memorySizeCheck (1/1): starting
[    6.076212] [talos] NOTE: recommended memory size is 3946 MiB
[    6.077160] [talos] NOTE: current total memory size is 1939 MiB
[    6.078138] [talos] task memorySizeCheck (1/1): done, 2.928063ms
[    6.079118] [talos] phase memorySizeCheck (5/22): done, 4.864701ms
[    6.080095] [talos] phase diskSizeCheck (6/22): 1 tasks(s)
[    6.081029] [talos] task diskSizeCheck (1/1): starting
[    6.081924] [talos] disk size is OK
[    6.082706] [talos] disk size is 51200 MiB
[    6.083532] [talos] task diskSizeCheck (1/1): done, 2.504088ms
[    6.084500] [talos] phase diskSizeCheck (6/22): done, 4.406013ms
[    6.085466] [talos] phase env (7/22): 1 tasks(s)
[    6.086581] [talos] task setUserEnvVars (1/1): starting
[    6.087708] [talos] task setUserEnvVars (1/1): done, 1.128166ms
[    6.088670] [talos] phase env (7/22): done, 3.204613ms
[    6.089574] [talos] phase containerd (8/22): 1 tasks(s)
[    6.090500] [talos] task startContainerd (1/1): starting
[    6.091423] [talos] service[containerd](Preparing): Running pre state
[    6.092408] [talos] service[containerd](Preparing): Creating service runner
[    6.211727] [talos] controller failed {"component": "controller-runtime", "controller": "v1alpha1.EventsSinkController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp [fd7d:d264:2f4a:d503::1]:4002: connect: network is unreachable\""}
[    6.226492] [talos] controller failed {"component": "controller-runtime", "controller": "runtime.KmsgLogDeliveryController", "error": "error sending logs: dial tcp [fd7d:d264:2f4a:d503::1]:4001: connect: network is unreachable"}
[    6.328717] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://talos.mimir-tech.org:6443/api/v1/nodes/talos-master-01?timeout=30s\": x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\" while trying to verify candidate authority certificate \"kubernetes\")"}
[    6.412003] [talos] created new link {"component": "controller-runtime", "controller": "network.LinkSpecController", "link": "siderolink", "kind": "wireguard"}
[    6.414545] [talos] reconfigured wireguard link {"component": "controller-runtime", "controller": "network.LinkSpecController", "link": "siderolink", "peers": 1}
[    6.419170] [talos] changed MTU for the link {"component": "controller-runtime", "controller": "network.LinkSpecController", "link": "siderolink", "mtu": 1280}
[    6.425190] [talos] assigned address {"component": "controller-runtime", "controller": "network.AddressSpecController", "address": "fd7d:d264:2f4a:d503:7b9d:59d3:7f64:4ec9/64", "link": "siderolink"}
[    7.033211] [talos] service[apid](Waiting): Waiting for service "containerd" to be "up"
[    7.034667] [talos] service[machined](Running): Health check successful
[    7.333338] [talos] service[containerd](Running): Process Process(["/bin/containerd" "--address" "/system/run/containerd/containerd.sock" "--state" "/system/run/containerd" "--root" "/system/var/lib/containerd"]) started with PID 691
[    7.373166] [talos] service[containerd](Running): Health check successful
[    7.374561] [talos] task startContainerd (1/1): done, 1.290125659s
[    7.375707] [talos] phase containerd (8/22): done, 1.292207111s
[    7.376849] [talos] phase dbus (9/22): 1 tasks(s)
[    7.377885] [talos] service[apid](Preparing): Running pre state
[    7.379334] [talos] task startDBus (1/1): starting
[    7.380536] [talos] service[apid](Preparing): Creating service runner
[    7.384033] [talos] task startDBus (1/1): done, 6.179769ms
[    7.385396] [talos] phase dbus (9/22): done, 8.596992ms
[    7.386687] [talos] phase ephemeral (10/22): 1 tasks(s)
[    7.388025] [talos] task mountEphemeralPartition (1/1): starting
[    7.399470] [talos] formatting the partition "/dev/sda6" as "xfs" with label "EPHEMERAL"
[    7.482476] XFS (sda6): Mounting V5 Filesystem
[    7.490771] XFS (sda6): Ending clean mount
[    7.509158] [talos] task mountEphemeralPartition (1/1): done, 121.819749ms
[    7.510933] [talos] phase ephemeral (10/22): done, 124.953919ms
[    7.512244] [talos] phase var (11/22): 1 tasks(s)
[    7.513320] [talos] task setupVarDirectory (1/1): starting
[    7.514584] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletServiceController", "error": "error writing kubelet PKI: open /etc/kubernetes/bootstrap-kubeconfig: read-only file system"}
[    7.520098] [talos] task setupVarDirectory (1/1): done, 6.81478ms
[    7.521490] [talos] phase var (11/22): done, 9.300702ms
[    7.522711] [talos] phase overlay (12/22): 1 tasks(s)
[    7.523983] [talos] task mountOverlayFilesystems (1/1): starting
[    7.526011] [talos] task mountOverlayFilesystems (1/1): done, 2.042331ms
[    7.527315] [talos] phase overlay (12/22): done, 4.630762ms
[    7.528576] [talos] phase legacyCleanup (13/22): 1 tasks(s)
[    7.529811] [talos] task cleanupLegacyStaticPodFiles (1/1): starting
[    7.531164] [talos] task cleanupLegacyStaticPodFiles (1/1): done, 1.360291ms
[    7.532376] [talos] phase legacyCleanup (13/22): done, 3.822294ms
[    7.533516] [talos] phase udevSetup (14/22): 1 tasks(s)
[    7.534610] [talos] task writeUdevRules (1/1): starting
[    7.535781] [talos] task writeUdevRules (1/1): done, 1.1773ms
[    7.536861] [talos] phase udevSetup (14/22): done, 3.364552ms
[    7.537930] [talos] phase udevd (15/22): 1 tasks(s)
[    7.538846] [talos] task startUdevd (1/1): starting
[    7.540070] [talos] service[udevd](Preparing): Running pre state
[    7.564947] [talos] service[udevd](Preparing): Creating service runner
[    7.575926] [talos] service[udevd](Running): Process Process(["/sbin/udevd" "--resolve-names=never"]) started with PID 716
[    7.576624] udevd[716]: starting version 3.2.11
[    7.586390] udevd[716]: starting eudev-3.2.11
[    7.781976] [talos] service[udevd](Running): Health check successful
[    7.783050] [talos] task startUdevd (1/1): done, 245.590947ms
[    7.784195] [talos] phase udevd (15/22): done, 247.666811ms
[    7.785145] [talos] phase userDisks (16/22): 1 tasks(s)
[    7.786213] [talos] task mountUserDisks (1/1): starting
[    7.787131] [talos] task mountUserDisks (1/1): done, 923.288µs
[    7.788164] [talos] phase userDisks (16/22): done, 3.037685ms
[    7.789229] [talos] phase userSetup (17/22): 1 tasks(s)
[    7.790269] [talos] task writeUserFiles (1/1): starting
[    7.791145] [talos] task writeUserFiles (1/1): done, 881.449µs
[    7.792121] [talos] phase userSetup (17/22): done, 2.908945ms
[    7.793066] [talos] phase lvm (18/22): 1 tasks(s)
[    7.793951] [talos] task activateLogicalVolumes (1/1): starting
[    7.901764] [talos] task activateLogicalVolumes (1/1): done, 108.419059ms
[    7.902840] [talos] phase lvm (18/22): done, 110.418534ms
[    7.903763] [talos] phase startEverything (19/22): 1 tasks(s)
[    7.904818] [talos] task startAllServices (1/1): starting
[    7.905859] [talos] task startAllServices (1/1): waiting for 8 services
[    7.906901] [talos] service[cri](Waiting): Waiting for network
[    7.907917] [talos] service[trustd](Waiting): Waiting for service "containerd" to be "up", time sync, network
[    7.909292] [talos] service[etcd](Waiting): Waiting for service "cri" to be "up", time sync, network, etcd spec
[    7.910656] [talos] service[cri](Preparing): Running pre state
[    7.912844] [talos] service[trustd](Preparing): Running pre state
[    7.914034] [talos] task startAllServices (1/1): service "apid" to be "up", service "containerd" to be "up", service "cri" to be "up", service "etcd" to be "up", service "kubelet" to be "up", service "machined" to be "up", service "trustd" to be "up", service "udevd" to be "up"
[    7.917518] [talos] service[trustd](Preparing): Creating service runner
[    7.918969] [talos] service[cri](Preparing): Creating service runner
[    7.920617] [talos] service[cri](Running): Process Process(["/bin/containerd" "--address" "/run/containerd/containerd.sock" "--config" "/etc/cri/containerd.toml"]) started with PID 1598
[    8.036546] [talos] service[kubelet](Waiting): Waiting for service "cri" to be "up", time sync, network
[    8.503297] [talos] service[trustd](Running): Started task trustd (PID 1647) for container trustd
[    8.510405] [talos] service[apid](Running): Started task apid (PID 1646) for container apid
[    8.905490] [talos] service[etcd](Waiting): Waiting for service "cri" to be "up", etcd spec
[    8.915890] [talos] service[cri](Running): Health check successful
[    8.916937] [talos] service[kubelet](Preparing): Running pre state
[    8.921793] [talos] service[trustd](Running): Health check successful
[    9.900329] [talos] service[etcd](Waiting): Waiting for etcd spec
[   11.414260] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\" while trying to verify candidate authority certificate \"kubernetes\")"}
[   11.421745] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://talos.mimir-tech.org:6443/api/v1/nodes/talos-master-01?timeout=30s\": x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\" while trying to verify candidate authority certificate \"kubernetes\")"}
[   12.339027] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://talos.mimir-tech.org:6443/api/v1/nodes/talos-master-01?timeout=30s\": x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\" while trying to verify candidate authority certificate \"kubernetes\")"}
[   12.533698] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\" while trying to verify candidate authority certificate \"kubernetes\")"}
[   13.368519] [talos] service[apid](Running): Health check successful
[   14.120641] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://talos.mimir-tech.org:6443/api/v1/nodes/talos-master-01?timeout=30s\": x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\" while trying to verify candidate authority certificate \"kubernetes\")"}
[   15.511030] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\" while trying to verify candidate authority certificate \"kubernetes\")"}
[   16.150585] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://talos.mimir-tech.org:6443/api/v1/nodes/talos-master-01?timeout=30s\": x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\" while trying to verify candidate authority certificate \"kubernetes\")"}
[   21.269858] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://talos.mimir-tech.org:6443/api/v1/nodes/talos-master-01?timeout=30s\": x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\" while trying to verify candidate authority certificate \"kubernetes\")"}
[   21.602793] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\" while trying to verify candidate authority certificate \"kubernetes\")"}
[   22.860243] [talos] task startAllServices (1/1): service "etcd" to be "up", service "kubelet" to be "up"
[   27.555807] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://talos.mimir-tech.org:6443/api/v1/nodes/talos-master-01?timeout=30s\": x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\" while trying to verify candidate authority certificate \"kubernetes\")"}
smira commented 1 year ago

You have the same control plane endpoint for the management and workload clusters; that can't work. This is why your workload cluster complains about the PKI mismatch: apparently it tries to talk to the management cluster.

Each Kubernetes cluster requires its own control plane endpoint. If you just want to test things, you can use the IP/DNS of the single control plane node as the control plane endpoint: https://<IP>:6443/.

More options are described here: https://www.talos.dev/v1.3/introduction/getting-started/#decide-the-kubernetes-endpoint

I'm going to close the ticket, but please feel free to reopen it if you have more issues with this setup.
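The mismatch described above can be reproduced locally. This is only an illustration of the error class, not Sidero's actual PKI: a leaf certificate issued by one "kubernetes" CA fails verification against a different CA, which is what the node hits when the shared endpoint resolves to the other cluster's API server.

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
# Two independent self-signed CAs, standing in for the two clusters' PKIs.
openssl req -x509 -newkey rsa:2048 -nodes -keyout a.key -out a.crt \
  -subj "/CN=kubernetes" -days 1 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -keyout b.key -out b.crt \
  -subj "/CN=kubernetes-other" -days 1 2>/dev/null
# A serving certificate for the shared endpoint name, issued by CA "b".
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
  -subj "/CN=talos.mimir-tech.org" 2>/dev/null
openssl x509 -req -in leaf.csr -CA b.crt -CAkey b.key -CAcreateserial \
  -out leaf.crt -days 1 2>/dev/null
# A client trusting only CA "a" rejects the cert, mirroring the x509 error.
openssl verify -CAfile a.crt leaf.crt || echo "rejected: issuer not trusted"
```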

japtain-cack commented 1 year ago

Ok, I tried this too. As I understand it, I have two clusters: one that was created as a temporary Docker host, and one that I'm "trying" to create.

I plan on pivoting the management cluster from sidero-demo to the Talos cluster. I have also tried setting up a VIP, but I'm not sure whether it's working; that IP is 10.100.50.52 and is currently unused. Once I get this all working, I'll point cp.talos.mimir-tech.org at the VIP. Until then, I'm simply using the IP of the first node, master-01.talos.mimir-tech.org, as the Talos control plane endpoint.

I'm also having issues with boot order: getting a Proxmox VM to boot after iPXE runs the first time and installs Talos. On the initial run, the VM PXE-boots, wipes the disk, installs Talos, and then reboots. After this reboot, the VM gets an iPXE prompt, then an iPXE exit, and drops into the BIOS. I have to manually boot from the disk, or change the boot order to disk first and iPXE second. I have tried ipxe-sanboot (but I think the EFI disk is 0,0, so it reports no boot disk), and I've tried http-404, which only works after a long timeout. It appears IPMI in Proxmox is not a thing.
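For the boot-order problem, one host-side workaround (my own suggestion, using Proxmox's stock `qm` CLI; the VMID and device names below are placeholders) is to put the disk ahead of the network device, so the installed Talos boots from disk while PXE remains a fallback:

```shell
# Run on the Proxmox host. VMID 100 and scsi0/net0 are placeholders
# for your VM's ID and its disk/NIC device names.
VMID=100
if command -v qm >/dev/null 2>&1; then
  # Disk first, network second: freshly installed OS wins after the first PXE run.
  qm set "$VMID" --boot 'order=scsi0;net0'
else
  echo "qm not found; run this on the Proxmox host"
fi
```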

talosctl cluster create --kubernetes-version 1.26.1 --talos-version v1.3.2 --nameservers=10.100.1.1,10.100.50.100 --name sidero-demo -p 69:69/udp,8081:8081/tcp,51821:51821/udp --workers 0 --endpoint demo.talos.mimir-tech.org
export SIDERO_CONTROLLER_MANAGER_HOST_NETWORK=true
export SIDERO_CONTROLLER_MANAGER_API_ENDPOINT=demo.talos.mimir-tech.org
export SIDERO_CONTROLLER_MANAGER_SIDEROLINK_ENDPOINT=demo.talos.mimir-tech.org

clusterctl init -b talos -c talos -i sidero
export CONTROL_PLANE_SERVERCLASS=talos_masters
export WORKER_SERVERCLASS=talos_workers
export TALOS_VERSION=v1.3.2
export KUBERNETES_VERSION=v1.26.1
export CONTROL_PLANE_PORT=6443
export CONTROL_PLANE_ENDPOINT=cp.talos.mimir-tech.org

clusterctl generate cluster talos -i sidero > kustomize/talos.yaml
[    3.162752] [talos] [initramfs] booting Talos v1.3.0
[    3.163763] [talos] [initramfs] mounting the rootfs
[    3.164881] loop0: detected capacity change from 0 to 95616
[    3.195278] [talos] [initramfs] bind mounting /lib/firmware
[    3.197539] [talos] [initramfs] entering the rootfs
[    3.198594] [talos] [initramfs] moving mounts to the new rootfs
[    3.199826] [talos] [initramfs] changing working directory into /root
[    3.201059] [talos] [initramfs] moving /root to /
[    3.202052] [talos] [initramfs] changing root directory
[    3.203167] [talos] [initramfs] cleaning up initramfs
[    3.204396] [talos] [initramfs] executing /sbin/init
[    5.713793] [talos] task setupLogger (1/1): done, 4.053392ms
[    5.714926] [talos] phase logger (1/7): done, 5.705652ms
[    5.716026] [talos] phase systemRequirements (2/7): 7 tasks(s)
[    5.717123] [talos] task dropCapabilities (7/7): starting
[    5.718583] [talos] task dropCapabilities (7/7): done, 1.466733ms
[    5.719751] [talos] task enforceKSPPRequirements (1/7): starting
[    5.720840] [talos] task setupSystemDirectory (2/7): starting
[    5.721864] [talos] task setupSystemDirectory (2/7): done, 2.185991ms
[    5.723024] [talos] task mountBPFFS (3/7): starting
[    5.723982] [talos] task mountCgroups (4/7): starting
[    5.725073] [talos] task mountPseudoFilesystems (5/7): starting
[    5.726140] [talos] task setRLimit (6/7): starting
[    5.730053] [talos] task mountPseudoFilesystems (5/7): done, 10.362649ms
[    5.738128] [talos] static pod list url is not available yet; not creating kubelet config {"component": "controller-runtime", "controller": "k8s.KubeletConfigController", "error": "resource StaticPodServerStatuses.kubernetes.talos.dev(k8s/static-pod-server-status@undefined) doesn't exist"}
[    5.747501] [talos] assigned address {"component": "controller-runtime", "controller": "network.AddressSpecController", "address": "127.0.0.1/8", "link": "lo"}
[    5.750511] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["1.1.1.1", "8.8.8.8"]}
[    5.753506] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["pool.ntp.org"]}
[    5.757024] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["1.1.1.1", "8.8.8.8"]}
[    5.759994] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["pool.ntp.org"]}
[    5.762923] [talos] task enforceKSPPRequirements (1/7): done, 34.659726ms
[    5.764306] 8021q: adding VLAN 0 to HW filter on device eth0
[    5.764732] [talos] task setRLimit (6/7): done, 45.037395ms
[    5.767657] [talos] task mountBPFFS (3/7): done, 47.974985ms
[    5.769112] [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable"}
[    5.773702] [talos] task mountCgroups (4/7): done, 54.01515ms
[    5.775188] [talos] phase systemRequirements (2/7): done, 59.163292ms
[    5.776709] [talos] phase integrity (3/7): 1 tasks(s)
[    5.778094] [talos] task writeIMAPolicy (1/1): starting
[    5.779584] audit: type=1807 audit(1674594464.181:2): action=dont_measure fsmagic=0x9fa0 res=1
[    5.781417] audit: type=1807 audit(1674594464.181:3): action=dont_measure fsmagic=0x62656572 res=1
[    5.784093] ima: policy update completed
[    5.785312] audit: type=1807 audit(1674594464.185:4): action=dont_measure fsmagic=0x64626720 res=1
[    5.787922] audit: type=1807 audit(1674594464.185:5): action=dont_measure fsmagic=0x1021994 res=1
[    5.789757] audit: type=1807 audit(1674594464.185:6): action=dont_measure fsmagic=0x1cd1 res=1
[    5.791613] audit: type=1807 audit(1674594464.185:7): action=dont_measure fsmagic=0x42494e4d res=1
[    5.794170] audit: type=1807 audit(1674594464.185:8): action=dont_measure fsmagic=0x73636673 res=1
[    5.796660] audit: type=1807 audit(1674594464.185:9): action=dont_measure fsmagic=0xf97cff8c res=1
[    5.799228] audit: type=1807 audit(1674594464.185:10): action=dont_measure fsmagic=0x43415d53 res=1
[    5.801756] audit: type=1807 audit(1674594464.185:11): action=dont_measure fsmagic=0x27e0eb res=1
[    5.810653] [talos] controller failed {"component": "controller-runtime", "controller": "network.RouteSpecController", "error": "1 error occurred:\n\t* error adding route: netlink receive: network is unreachable, message {Family:2 DstLength:0 SrcLength:0 Tos:0 Table:0 Protocol:3 Scope:0 Type:1 Flags:0 Attributes:{Dst:<nil> Src:10.100.50.111 Gateway:10.100.50.1 OutIface:4 Priority:1024 Table:254 Mark:0 Pref:<nil> Expires:<nil> Metrics:<nil> Multipath:[]}}\n\n"}
[    5.819807] [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "master-01", "domainname": "mimir-tech.org\u0000"}
[    5.823053] [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "master-01", "domainname": "mimir-tech.org\u0000"}
[    5.826347] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["10.100.1.1", "10.100.50.100"]}
[    5.829694] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["10.100.1.1"]}
[    5.832960] [talos] assigned address {"component": "controller-runtime", "controller": "network.AddressSpecController", "address": "10.100.50.111/24", "link": "eth0"}
[    5.836618] [talos] controller failed {"component": "controller-runtime", "controller": "runtime.KmsgLogDeliveryController", "error": "error sending logs: dial tcp [fd28:1f64:7804:c703::1]:4001: connect: network is unreachable"}
[    5.841311] [talos] controller failed {"component": "controller-runtime", "controller": "siderolink.ManagerController", "error": "error accessing SideroLink API: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup demo.talos.mimir-tech.org on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable\""}
[    5.848164] [talos] controller failed {"component": "controller-runtime", "controller": "v1alpha1.EventsSinkController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp [fd28:1f64:7804:c703::1]:4002: connect: network is unreachable\""}
[    5.854280] [talos] task writeIMAPolicy (1/1): done, 51.673584ms
[    5.855839] [talos] phase integrity (3/7): done, 79.13469ms
[    5.857307] [talos] ntp query error with server "10.100.1.1" {"component": "controller-runtime", "controller": "time.SyncController", "error": "dial udp 10.100.1.1:123: connect: network is unreachable"}
[    5.861928] [talos] phase etc (4/7): 2 tasks(s)
[    5.863464] [talos] task createOSReleaseFile (2/2): starting
[    5.865090] [talos] task CreateSystemCgroups (1/2): starting
[    5.866788] [talos] task createOSReleaseFile (2/2): done, 3.339487ms
[    5.869087] [talos] task CreateSystemCgroups (1/2): done, 4.075956ms
[    5.870600] [talos] phase etc (4/7): done, 8.71302ms
[    5.871859] [talos] phase mountSystem (5/7): 1 tasks(s)
[    5.873255] [talos] task mountStatePartition (1/1): starting
[    5.889703] XFS (sda5): Mounting V5 Filesystem
[    5.897724] XFS (sda5): Starting recovery (logdev: internal)
[    5.902093] XFS (sda5): Ending recovery (logdev: internal)
[    5.904359] [talos] task mountStatePartition (1/1): done, 31.111585ms
[    5.905780] [talos] phase mountSystem (5/7): done, 33.921765ms
[    5.907190] [talos] phase config (6/7): 1 tasks(s)
[    5.908301] [talos] node identity established {"component": "controller-runtime", "controller": "cluster.NodeIdentityController", "node_id": "HVKVqaNvKt7OrHUNJLgnmQ6z8Blk3uCJYGDC4OQldMP"}
[    5.911277] [talos] task loadConfig (1/1): starting
[    5.913329] [talos] task loadConfig (1/1): persistence is enabled, using existing config on disk
[    5.914932] [talos] task loadConfig (1/1): done, 6.611141ms
[    5.916186] [talos] phase config (6/7): done, 8.996691ms
[    5.917380] [talos] phase unmountSystem (7/7): 1 tasks(s)
[    5.918528] [talos] task unmountStatePartition (1/1): starting
[    5.921761] XFS (sda5): Unmounting Filesystem
[    5.945810] [talos] task unmountStatePartition (1/1): done, 27.287765ms
[    5.947047] [talos] phase unmountSystem (7/7): done, 29.666555ms
[    5.948124] [talos] initialize sequence: done: 239.413514ms
[    5.949158] [talos] install sequence: 0 phase(s)
[    5.950103] [talos] install sequence: done: 943.577µs
[    5.951125] [talos] boot sequence: 22 phase(s)
[    5.952040] [talos] phase saveStateEncryptionConfig (1/22): 1 tasks(s)
[    5.953126] [talos] task SaveStateEncryptionConfig (1/1): starting
[    5.954197] [talos] task SaveStateEncryptionConfig (1/1): done, 1.071908ms
[    5.955285] [talos] phase saveStateEncryptionConfig (1/22): done, 3.245745ms
[    5.956422] [talos] phase mountState (2/22): 1 tasks(s)
[    5.957400] [talos] task mountStatePartition (1/1): starting
[    5.960335] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["pool.ntp.org"]}
[    5.968988] [talos] removed address 10.100.50.111/24 from "eth0" {"component": "controller-runtime", "controller": "network.AddressSpecController"}
[    5.972064] [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "talos-frc-c5a", "domainname": ""}
[    5.975263] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["1.1.1.1", "8.8.8.8"]}
[    5.978529] [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable"}
[    5.983461] [talos] service[machined](Preparing): Running pre state
[    5.983769] XFS (sda5): Mounting V5 Filesystem
[    5.984888] [talos] service[machined](Preparing): Creating service runner
[    5.988955] [talos] service[apid](Waiting): Waiting for service "containerd" to be "up", api certificates
[    5.993067] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp: lookup cp.talos.mimir-tech.org on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable"}
[    5.998001] XFS (sda5): Ending clean mount
[    6.001453] [talos] service[machined](Running): Service started as goroutine
[    6.003337] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["10.100.1.1"]}
[    6.006272] [talos] ntp query error with server "10.100.1.1" {"component": "controller-runtime", "controller": "time.SyncController", "error": "dial udp 10.100.1.1:123: connect: network is unreachable"}
[    6.010918] [talos] assigned address {"component": "controller-runtime", "controller": "network.AddressSpecController", "address": "10.100.50.111/24", "link": "eth0"}
[    6.014412] [talos] controller failed {"component": "controller-runtime", "controller": "network.RouteMergeController", "error": "1 conflict(s) detected"}
[    6.017809] [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "master-01", "domainname": "mimir-tech.org\u0000"}
[    6.021566] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["10.100.1.1", "10.100.50.100"]}
[    6.025155] [talos] task mountStatePartition (1/1): done, 45.740509ms
[    6.026858] [talos] phase mountState (2/22): done, 70.436603ms
[    6.028435] [talos] phase validateConfig (3/22): 1 tasks(s)
[    6.029970] [talos] task validateConfig (1/1): starting
[    6.031554] [talos] task validateConfig (1/1): done, 1.58357ms
[    6.033080] [talos] phase validateConfig (3/22): done, 4.645879ms
[    6.034603] [talos] phase saveConfig (4/22): 1 tasks(s)
[    6.036022] [talos] task saveConfig (1/1): starting
[    6.037951] [talos] task saveConfig (1/1): done, 1.929344ms
[    6.039473] [talos] phase saveConfig (4/22): done, 4.87047ms
[    6.040958] [talos] phase memorySizeCheck (5/22): 1 tasks(s)
[    6.042479] [talos] task memorySizeCheck (1/1): starting
[    6.043932] [talos] NOTE: recommended memory size is 3946 MiB
[    6.045340] [talos] NOTE: current total memory size is 1939 MiB
[    6.046824] [talos] task memorySizeCheck (1/1): done, 4.344666ms
[    6.048228] [talos] phase memorySizeCheck (5/22): done, 7.271705ms
[    6.049640] [talos] phase diskSizeCheck (6/22): 1 tasks(s)
[    6.051047] [talos] task diskSizeCheck (1/1): starting
[    6.052323] [talos] disk size is OK
[    6.053393] [talos] disk size is 51200 MiB
[    6.054494] [talos] task diskSizeCheck (1/1): done, 3.446906ms
[    6.055803] [talos] phase diskSizeCheck (6/22): done, 6.165037ms
[    6.057089] [talos] phase env (7/22): 1 tasks(s)
[    6.058225] [talos] task setUserEnvVars (1/1): starting
[    6.059405] [talos] task setUserEnvVars (1/1): done, 1.179233ms
[    6.060653] [talos] phase env (7/22): done, 3.56555ms
[    6.061799] [talos] phase containerd (8/22): 1 tasks(s)
[    6.062939] [talos] task startContainerd (1/1): starting
[    6.064053] [talos] service[containerd](Preparing): Running pre state
[    6.065234] [talos] service[containerd](Preparing): Creating service runner
[    6.202096] [talos] controller failed {"component": "controller-runtime", "controller": "v1alpha1.EventsSinkController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp [fd28:1f64:7804:c703::1]:4002: connect: network is unreachable\""}
[    6.335723] [talos] controller failed {"component": "controller-runtime", "controller": "runtime.KmsgLogDeliveryController", "error": "error sending logs: dial tcp [fd28:1f64:7804:c703::1]:4001: connect: network is unreachable"}
[    6.450159] [talos] hello failed {"component": "controller-runtime", "controller": "cluster.DiscoveryServiceController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup discovery.talos.dev on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable\"", "endpoint": "discovery.talos.dev:443"}
[    6.498403] [talos] controller failed {"component": "controller-runtime", "controller": "network.RouteMergeController", "error": "1 conflict(s) detected"}
[    6.550895] [talos] controller failed {"component": "controller-runtime", "controller": "siderolink.ManagerController", "error": "error accessing SideroLink API: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup demo.talos.mimir-tech.org on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable\""}
[    6.676553] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp: lookup cp.talos.mimir-tech.org on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable"}
[    6.992574] [talos] service[machined](Running): Health check successful
[    6.994129] [talos] service[apid](Waiting): Waiting for service "containerd" to be "up"
[    7.015406] [talos] ntp query error with server "10.100.1.1" {"component": "controller-runtime", "controller": "time.SyncController", "error": "dial udp 10.100.1.1:123: connect: network is unreachable"}
[    7.111004] [talos] created route {"component": "controller-runtime", "controller": "network.RouteSpecController", "destination": "default", "gateway": "10.100.50.1", "table": "main", "link": "eth0"}
[    7.220433] [talos] created new link {"component": "controller-runtime", "controller": "network.LinkSpecController", "link": "siderolink", "kind": "wireguard"}
[    7.224086] [talos] reconfigured wireguard link {"component": "controller-runtime", "controller": "network.LinkSpecController", "link": "siderolink", "peers": 1}
[    7.230490] [talos] changed MTU for the link {"component": "controller-runtime", "controller": "network.LinkSpecController", "link": "siderolink", "mtu": 1280}
[    7.237830] [talos] controller failed {"component": "controller-runtime", "controller": "v1alpha1.EventsSinkController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp [fd28:1f64:7804:c703::1]:4002: connect: network is unreachable\""}
[    7.246582] [talos] assigned address {"component": "controller-runtime", "controller": "network.AddressSpecController", "address": "fd28:1f64:7804:c703:538e:d445:f5d7:9dc2/64", "link": "siderolink"}
[    7.261673] [talos] service[containerd](Running): Process Process(["/bin/containerd" "--address" "/system/run/containerd/containerd.sock" "--state" "/system/run/containerd" "--root" "/system/var/lib/containerd"]) started with PID 677
[    7.321552] [talos] service[containerd](Running): Health check successful
[    7.323285] [talos] task startContainerd (1/1): done, 1.260343255s
[    7.324713] [talos] service[apid](Preparing): Running pre state
[    7.326357] [talos] phase containerd (8/22): done, 1.26291655s
[    7.327794] [talos] phase dbus (9/22): 1 tasks(s)
[    7.329119] [talos] service[apid](Preparing): Creating service runner
[    7.330593] [talos] task startDBus (1/1): starting
[    7.333965] [talos] task startDBus (1/1): done, 4.826808ms
[    7.335307] [talos] phase dbus (9/22): done, 7.513607ms
[    7.336647] [talos] phase ephemeral (10/22): 1 tasks(s)
[    7.337973] [talos] task mountEphemeralPartition (1/1): starting
[    7.350424] XFS (sda6): Mounting V5 Filesystem
[    7.366732] XFS (sda6): Starting recovery (logdev: internal)
[    7.371305] XFS (sda6): Ending recovery (logdev: internal)
[    7.398085] [talos] task mountEphemeralPartition (1/1): done, 60.091972ms
[    7.399941] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletServiceController", "error": "error writing kubelet PKI: open /etc/kubernetes/bootstrap-kubeconfig: read-only file system"}
[    7.403888] [talos] phase ephemeral (10/22): done, 67.098788ms
[    7.405123] [talos] phase var (11/22): 1 tasks(s)
[    7.406310] [talos] task setupVarDirectory (1/1): starting
[    7.407831] [talos] task setupVarDirectory (1/1): done, 1.53089ms
[    7.409027] [talos] phase var (11/22): done, 3.905442ms
[    7.410108] [talos] phase overlay (12/22): 1 tasks(s)
[    7.411161] [talos] task mountOverlayFilesystems (1/1): starting
[    7.414812] [talos] task mountOverlayFilesystems (1/1): done, 3.649968ms
[    7.415975] [talos] phase overlay (12/22): done, 5.869606ms
[    7.417139] [talos] phase legacyCleanup (13/22): 1 tasks(s)
[    7.418304] [talos] task cleanupLegacyStaticPodFiles (1/1): starting
[    7.419538] [talos] task cleanupLegacyStaticPodFiles (1/1): done, 1.237376ms
[    7.420701] [talos] phase legacyCleanup (13/22): done, 3.562657ms
[    7.421775] [talos] phase udevSetup (14/22): 1 tasks(s)
[    7.422781] [talos] task writeUdevRules (1/1): starting
[    7.423796] [talos] task writeUdevRules (1/1): done, 1.014611ms
[    7.424891] [talos] phase udevSetup (14/22): done, 3.116131ms
[    7.425876] [talos] phase udevd (15/22): 1 tasks(s)
[    7.426838] [talos] task startUdevd (1/1): starting
[    7.427820] [talos] service[udevd](Preparing): Running pre state
[    7.438334] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[    7.469248] [talos] service[udevd](Preparing): Creating service runner
[    7.481224] udevd[701]: starting version 3.2.11
[    7.482357] [talos] service[udevd](Running): Process Process(["/sbin/udevd" "--resolve-names=never"]) started with PID 701
[    7.491897] udevd[701]: starting eudev-3.2.11
[    7.708255] [talos] service[udevd](Running): Health check successful
[    7.709384] [talos] task startUdevd (1/1): done, 282.543089ms
[    7.710535] [talos] phase udevd (15/22): done, 284.65925ms
[    7.711545] [talos] phase userDisks (16/22): 1 tasks(s)
[    7.712517] [talos] task mountUserDisks (1/1): starting
[    7.713492] [talos] task mountUserDisks (1/1): done, 975.271µs
[    7.714534] [talos] phase userDisks (16/22): done, 2.990343ms
[    7.715541] [talos] phase userSetup (17/22): 1 tasks(s)
[    7.716521] [talos] task writeUserFiles (1/1): starting
[    7.717507] [talos] task writeUserFiles (1/1): done, 986.487µs
[    7.718539] [talos] phase userSetup (17/22): done, 2.998781ms
[    7.719526] [talos] phase lvm (18/22): 1 tasks(s)
[    7.720439] [talos] task activateLogicalVolumes (1/1): starting
[    7.792053] [talos] service[kubelet](Waiting): Waiting for service "cri" to be "up", time sync, network
[    7.842809] [talos] task activateLogicalVolumes (1/1): done, 122.362499ms
[    7.844038] [talos] phase lvm (18/22): done, 124.511335ms
[    7.845113] [talos] phase startEverything (19/22): 1 tasks(s)
[    7.846414] [talos] task startAllServices (1/1): starting
[    7.858265] [talos] task startAllServices (1/1): waiting for 8 services
[    7.864191] [talos] service[etcd](Waiting): Waiting for service "cri" to be "up", time sync, network, etcd spec
[    7.870415] [talos] service[cri](Waiting): Waiting for network
[    7.871799] [talos] service[cri](Preparing): Running pre state
[    7.872897] [talos] service[trustd](Waiting): Waiting for service "containerd" to be "up", time sync, network
[    7.874908] [talos] service[cri](Preparing): Creating service runner
[    7.876092] [talos] task startAllServices (1/1): service "apid" to be "up", service "containerd" to be "up", service "cri" to be "up", service "etcd" to be "up", service "kubelet" to be "up", service "machined" to be "up", service "trustd" to be "up", service "udevd" to be "up"
[    7.885645] [talos] service[cri](Running): Process Process(["/bin/containerd" "--address" "/run/containerd/containerd.sock" "--config" "/etc/cri/containerd.toml"]) started with PID 1575
[    8.021016] [talos] adjusting time (slew) by 94.171215ms via 10.100.1.1, state TIME_OK, status STA_NANO | STA_PLL {"component": "controller-runtime", "controller": "time.SyncController"}
[    8.023824] [talos] service[trustd](Preparing): Running pre state
[    8.025134] [talos] service[trustd](Preparing): Creating service runner
[    8.038399] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[    8.399972] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[    8.423265] [talos] service[trustd](Running): Started task trustd (PID 1634) for container trustd
[    8.427674] [talos] service[apid](Running): Started task apid (PID 1592) for container apid
[    8.793266] [talos] service[kubelet](Waiting): Waiting for service "cri" to be "up"
[    8.867802] [talos] service[etcd](Waiting): Waiting for service "cri" to be "up", etcd spec
[    8.879779] [talos] service[cri](Running): Health check successful
[    8.881566] [talos] service[kubelet](Preparing): Running pre state
[    8.895372] [talos] service[kubelet](Preparing): Creating service runner
[    8.999763] [talos] service[kubelet](Running): Started task kubelet (PID 1688) for container kubelet
[    9.042936] [talos] service[trustd](Running): Health check successful
[    9.415089] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[    9.862681] [talos] service[etcd](Waiting): Waiting for etcd spec
[   10.593837] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   10.913345] [talos] service[kubelet](Running): Health check successful
[   12.044102] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   12.132149] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   13.315046] [talos] service[apid](Running): Health check successful
[   15.147806] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   15.240854] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   19.011750] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   20.692453] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
[   22.820326] [talos] task startAllServices (1/1): service "etcd" to be "up"
[   29.965845] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   29.972610] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   36.402114] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
[   36.829310] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
japtain-cack commented 1 year ago

Here is my Talos cluster YAML:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: talos
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 10.244.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: TalosControlPlane
    name: talos-cp
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: MetalCluster
    name: talos
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: MetalCluster
metadata:
  name: talos
spec:
  controlPlaneEndpoint:
    host: cp.talos.mimir-tech.org
    port: 6443
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: MetalMachineTemplate
metadata:
  name: talos-cp
spec:
  template:
    spec:
      serverClassRef:
        apiVersion: metal.sidero.dev/v1alpha1
        kind: ServerClass
        name: talos-masters
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: TalosControlPlane
metadata:
  name: talos-cp
spec:
  controlPlaneConfig:
    controlplane:
      generateType: controlplane
      talosVersion: v1.3.2
      configPatches:
      - op: add
        path: /machine/network
        value:
          interfaces:
          - interface: eth0
            dhcp: true
            vip:
              ip: 10.100.50.52
    init:
      configPatches:
      - op: add
        path: /machine/network
        value:
          interfaces:
          - interface: eth0
            dhcp: true
            vip:
              ip: 10.100.50.52
      generateType: init
      talosVersion: v1.3.2
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: MetalMachineTemplate
    name: talos-cp
  replicas: 3
  version: v1.26.1
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: TalosConfigTemplate
metadata:
  name: talos-workers
spec:
  template:
    spec:
      generateType: join
      talosVersion: v1.3.2
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: talos-workers
spec:
  clusterName: talos
  replicas: 0
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: TalosConfigTemplate
          name: talos-workers
      clusterName: talos
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: MetalMachineTemplate
        name: talos-workers
      version: v1.26.1
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: MetalMachineTemplate
metadata:
  name: talos-workers
spec:
  template:
    spec:
      serverClassRef:
        apiVersion: metal.sidero.dev/v1alpha1
        kind: ServerClass
        name: talos-workers
japtain-cack commented 1 year ago

And my control plane ServerClass:

apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
metadata:
  name: talos-masters
spec:
  selector:
    matchLabels:
      node-role.kubernetes.io/control-plane: "true"
  qualifiers:
    systemInformation:
      - manufacturer: VMware, Inc.
      - manufacturer: QEMU
  configPatches:
    - op: replace
      path: /machine/install/disk
      value: /dev/sda
  bootFromDiskMethod: ipxe-exit
smira commented 1 year ago

This time the error is different, and it feels like the SideroLink part is not working (?). The node is up, but it's not being bootstrapped. You might want to check, in the management cluster, whether the Machine resource gets an Address set in its status.
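Something like this should show it (assuming kubectl points at the management cluster and the Machines are in sidero-system, as in your setup):

```
kubectl -n sidero-system get machines
kubectl -n sidero-system get machine <machine-name> -o jsonpath='{.status.addresses}'
```

If the addresses list stays empty, the node never registered over SideroLink.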

japtain-cack commented 1 year ago

Looks like the address is set to master-01. Not sure if that's an issue, but the hostname should be master-01.talos.mimir-tech.org.

kubectl -n sidero-system get machine
NAME             CLUSTER   NODENAME   PROVIDERID                                      PHASE          AGE   VERSION
talos-cp-4z6sc   talos                sidero://b80c6841-fb72-4fa2-9fec-7bc8eb7faab3   Provisioned    34h   v1.26.1
talos-cp-prp2q   talos                                                                Provisioning   20h   v1.26.1
talos-cp-z8c4w   talos                                                                Provisioning   35h   v1.26.1
kubectl -n sidero-system describe machine talos-cp-4z6sc
Name:         talos-cp-4z6sc
Namespace:    sidero-system
Labels:       cluster.x-k8s.io/cluster-name=talos
              cluster.x-k8s.io/control-plane=
Annotations:  <none>
API Version:  cluster.x-k8s.io/v1beta1
Kind:         Machine
Metadata:
  Creation Timestamp:  2023-01-24T07:33:53Z
  Finalizers:
    machine.cluster.x-k8s.io
  Generation:  3
  Managed Fields:
    API Version:  cluster.x-k8s.io/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"machine.cluster.x-k8s.io":
        f:labels:
          .:
          f:cluster.x-k8s.io/cluster-name:
          f:cluster.x-k8s.io/control-plane:
        f:ownerReferences:
          .:
          k:{"uid":"1d457b6a-266b-414a-b6ae-c85b5dcb140c"}:
      f:spec:
        .:
        f:bootstrap:
          .:
          f:configRef:
          f:dataSecretName:
        f:clusterName:
        f:infrastructureRef:
        f:providerID:
        f:version:
    Manager:      manager
    Operation:    Update
    Time:         2023-01-24T21:44:16Z
    API Version:  cluster.x-k8s.io/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:addresses:
        f:bootstrapReady:
        f:conditions:
        f:infrastructureReady:
        f:lastUpdated:
        f:observedGeneration:
        f:phase:
    Manager:      manager
    Operation:    Update
    Subresource:  status
    Time:         2023-01-24T21:44:59Z
  Owner References:
    API Version:           controlplane.cluster.x-k8s.io/v1alpha3
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  TalosControlPlane
    Name:                  talos-cp
    UID:                   1d457b6a-266b-414a-b6ae-c85b5dcb140c
  Resource Version:        212719
  UID:                     432247ca-989d-4ffe-aa48-6e83a8deeb31
Spec:
  Bootstrap:
    Config Ref:
      API Version:     bootstrap.cluster.x-k8s.io/v1alpha3
      Kind:            TalosConfig
      Name:            talos-cp-zzn4t
      Namespace:       sidero-system
      UID:             9e534710-d57a-49ff-b7db-0124909194da
    Data Secret Name:  talos-cp-4z6sc-bootstrap-data
  Cluster Name:        talos
  Infrastructure Ref:
    API Version:          infrastructure.cluster.x-k8s.io/v1alpha3
    Kind:                 MetalMachine
    Name:                 talos-cp-8p2lx
    Namespace:            sidero-system
    UID:                  3b364f9c-3fad-4688-bd16-98554b6b917d
  Node Deletion Timeout:  10s
  Provider ID:            sidero://b80c6841-fb72-4fa2-9fec-7bc8eb7faab3
  Version:                v1.26.1
Status:
  Addresses:
    Address:        master-01
    Type:           Hostname
  Bootstrap Ready:  true
  Conditions:
    Last Transition Time:  2023-01-24T21:44:16Z
    Status:                True
    Type:                  Ready
    Last Transition Time:  2023-01-24T07:33:53Z
    Status:                True
    Type:                  BootstrapReady
    Last Transition Time:  2023-01-24T21:44:16Z
    Status:                True
    Type:                  InfrastructureReady
    Last Transition Time:  2023-01-24T07:33:53Z
    Reason:                WaitingForNodeRef
    Severity:              Info
    Status:                False
    Type:                  NodeHealthy
  Infrastructure Ready:    true
  Last Updated:            2023-01-24T21:44:16Z
  Observed Generation:     2
  Phase:                   Provisioned
Events:                    <none>
smira commented 1 year ago

Yes, there's no IP there, so SideroLink setup has issues on your side.

https://www.sidero.dev/v0.5/getting-started/expose-services/

https://www.sidero.dev/v0.5/overview/siderolink/

japtain-cack commented 1 year ago

OK, at least that gives me an idea of what's going on. I did go through the doc on exposing the services; all of these machines are on the same network. But since I set export SIDERO_CONTROLLER_MANAGER_HOST_NETWORK=true, I believe I can skip that section, according to the note on the expose-services page:

If you built your cluster as specified in the [Prerequisite: Kubernetes] section in this tutorial, your services are already exposed and you can skip this section.

I can't really do a port check for UDP, but the TCP ports seem to be responding properly, so I have no reason to believe the UDP ports aren't working.
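A UDP "port check" can at least be approximated: UDP is connectionless, so the only hard signal is an ICMP port-unreachable error coming back, and silence is ambiguous (open or filtered). The helper below is a hypothetical sketch (udp_probe is not part of Sidero or Talos) showing that distinction with a plain socket.

```python
import socket

def udp_probe(host: str, port: int, timeout: float = 2.0) -> str:
    """Best-effort UDP port probe (hypothetical helper, not part of Sidero).

    The only hard negative signal for UDP is an ICMP port-unreachable
    error; a timeout can mean either 'open' or 'filtered'.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.connect((host, port))  # connect() lets ICMP errors surface on recv
        s.send(b"ping")
        try:
            s.recv(1024)
            return "open"            # something answered
        except ConnectionRefusedError:
            return "closed"          # ICMP port unreachable came back
        except socket.timeout:
            return "open|filtered"   # no answer, no error: inconclusive

# Example: udp_probe("demo.talos.mimir-tech.org", 51821)
```

So even a successful-looking UDP probe only proves the packet wasn't actively rejected, which is consistent with the observation that TCP checks pass while UDP remains unverified.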

docker ps
CONTAINER ID        IMAGE                             COMMAND             CREATED             STATUS              PORTS                                                                                                                    NAMES
cccc5690bc0a        moby/buildkit:buildx-stable-1     "buildkitd"         11 hours ago        Up 11 hours                                                                                                                                  buildx_buildkit_multiarch
47dd65d7eb41        ghcr.io/siderolabs/talos:v1.3.2   "/sbin/init"        36 hours ago        Up 36 hours         0.0.0.0:6443->6443/tcp, 0.0.0.0:8081->8081/tcp, 0.0.0.0:69->69/udp, 0.0.0.0:51821->51821/udp, 0.0.0.0:50000->50000/tcp   sidero-demo-controlplane-1
1d349a0765eb        containrrr/watchtower:latest      "/watchtower"       2 months ago        Up 2 months         8080/tcp                                   
nc -zv demo.talos.mimir-tech.org 8081
Connection to demo.talos.mimir-tech.org 8081 port [tcp/tproxy] succeeded!

nc -zv demo.talos.mimir-tech.org 50000
Connection to demo.talos.mimir-tech.org 50000 port [tcp/*] succeeded!

nc -zv demo.talos.mimir-tech.org 6443
Connection to demo.talos.mimir-tech.org 6443 port [tcp/*] succeeded!
curl -I http://demo.talos.mimir-tech.org:8081/tftp/ipxe.efi
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 1047552
Content-Type: application/octet-stream
Last-Modified: Tue, 24 Jan 2023 09:05:13 GMT
Date: Wed, 25 Jan 2023 18:36:30 GMT
sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
           tcp  --  anywhere             anywhere             state NEW tcp dpts:60001:60010
           tcp  --  anywhere             anywhere             state NEW tcp dpt:ssh
           tcp  --  anywhere             anywhere             state NEW tcp dpts:4505:4506

Chain FORWARD (policy DROP)
target     prot opt source               destination
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (2 references)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             10.5.0.2             udp dpt:51821
ACCEPT     tcp  --  anywhere             10.5.0.2             tcp dpt:50000
ACCEPT     tcp  --  anywhere             10.5.0.2             tcp dpt:tproxy
ACCEPT     tcp  --  anywhere             10.5.0.2             tcp dpt:6443
ACCEPT     udp  --  anywhere             10.5.0.2             udp dpt:tftp

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere
japtain-cack commented 1 year ago

Full boot log from master-01. I think IPv6 is unhappy. Not sure why; I don't have public IPv6, but it should work inside the network just fine.

Also, I see it's trying to use 8.8.8.8, and I specifically set the nameservers. Any idea why it's not respecting that?
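One way to pin the DNS servers regardless of what the defaults resolve to would be a ServerClass configPatch, using the same configPatches mechanism as the install-disk patch above. This is only a sketch; the /machine/network/nameservers path is assumed from the Talos machine config schema:

```yaml
# Sketch only: force static nameservers via a ServerClass configPatch.
configPatches:
  - op: add
    path: /machine/network/nameservers
    value:
      - 10.100.1.1
      - 10.100.50.100
```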

Booting the kernel.
[    0.000000] Linux version 5.15.83-talos (@buildkitsandbox) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.39) #1 SMP Wed Dec 14 16:48:31 UTC 2022
[    0.000000] Command line: vmlinuz console=tty0 console=ttyS0 consoleblank=0 earlyprintk=ttyS0 ima_appraise=fix ima_hash=sha512 ima_template=ima-ng init_on_alloc=1 initrd=initramfs.xz nvme_core.io_timeout=4294967295 printk.devkmsg=on pti=on slab_nomerge= talos.platform=metal talos.config=http://demo.talos.mimir-tech.org:8081/configdata?uuid= siderolink.api=demo.talos.mimir-tech.org:8081 talos.logging.kernel=tcp://[fd28:1f64:7804:c703::1]:4001 talos.events.sink=[fd28:1f64:7804:c703::1]:4002
[    0.000000] x86/fpu: x87 FPU will use FXSAVE
[    0.000000] signal: max sigframe size: 1440
[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000000805fff] usable
[    0.000000] BIOS-e820: [mem 0x0000000000806000-0x0000000000807fff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x0000000000808000-0x000000000080ffff] usable
[    0.000000] BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x0000000000900000-0x000000007eed7fff] usable
[    0.000000] BIOS-e820: [mem 0x000000007eed8000-0x000000007efd9fff] reserved
[    0.000000] BIOS-e820: [mem 0x000000007efda000-0x000000007f8edfff] usable
[    0.000000] BIOS-e820: [mem 0x000000007f8ee000-0x000000007fb6dfff] reserved
[    0.000000] BIOS-e820: [mem 0x000000007fb6e000-0x000000007fb7dfff] ACPI data
[    0.000000] BIOS-e820: [mem 0x000000007fb7e000-0x000000007fbfdfff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x000000007fbfe000-0x000000007feebfff] usable
[    0.000000] BIOS-e820: [mem 0x000000007feec000-0x000000007ff6ffff] reserved
[    0.000000] BIOS-e820: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
[    0.000000] printk: bootconsole [earlyser0] enabled
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] extended physical RAM map:
[    0.000000] reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
[    0.000000] reserve setup_data: [mem 0x0000000000100000-0x0000000000805fff] usable
[    0.000000] reserve setup_data: [mem 0x0000000000806000-0x0000000000807fff] ACPI NVS
[    0.000000] reserve setup_data: [mem 0x0000000000808000-0x000000000080ffff] usable
[    0.000000] reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
[    0.000000] reserve setup_data: [mem 0x0000000000900000-0x000000007e627017] usable
[    0.000000] reserve setup_data: [mem 0x000000007e627018-0x000000007e64e457] usable
[    0.000000] reserve setup_data: [mem 0x000000007e64e458-0x000000007e783017] usable
[    0.000000] reserve setup_data: [mem 0x000000007e783018-0x000000007e78ca57] usable
[    0.000000] reserve setup_data: [mem 0x000000007e78ca58-0x000000007eed7fff] usable
[    0.000000] reserve setup_data: [mem 0x000000007eed8000-0x000000007efd9fff] reserved
[    0.000000] reserve setup_data: [mem 0x000000007efda000-0x000000007f8edfff] usable
[    0.000000] reserve setup_data: [mem 0x000000007f8ee000-0x000000007fb6dfff] reserved
[    0.000000] reserve setup_data: [mem 0x000000007fb6e000-0x000000007fb7dfff] ACPI data
[    0.000000] reserve setup_data: [mem 0x000000007fb7e000-0x000000007fbfdfff] ACPI NVS
[    0.000000] reserve setup_data: [mem 0x000000007fbfe000-0x000000007feebfff] usable
[    0.000000] reserve setup_data: [mem 0x000000007feec000-0x000000007ff6ffff] reserved
[    0.000000] reserve setup_data: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS
[    0.000000] reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
[    0.000000] efi: EFI v2.70 by EDK II
[    0.000000] efi: SMBIOS=0x7f922000 ACPI=0x7fb7d000 ACPI 2.0=0x7fb7d014 MEMATTR=0x7e89d018
[    0.000000] SMBIOS 2.8 present.
[    0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
[    0.000000] Hypervisor detected: KVM
[    0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[    0.000000] kvm-clock: cpu 0, msr 45cc6001, primary cpu clock
[    0.000004] kvm-clock: using sched offset of 77975516960898 cycles
[    0.000668] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[    0.002596] tsc: Detected 1607.997 MHz processor
[    0.003232] last_pfn = 0x7feec max_arch_pfn = 0x400000000
[    0.003899] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WC  UC- UC
Memory KASLR using RDTSC...
[    0.019593] Secure boot disabled
[    0.020011] RAMDISK: [mem 0x706ff000-0x73b97fff]
[    0.020535] ACPI: Early table checksum verification disabled
[    0.021262] ACPI: RSDP 0x000000007FB7D014 000024 (v02 BOCHS )
[    0.021940] ACPI: XSDT 0x000000007FB7C0E8 000054 (v01 BOCHS  BXPC     00000001      01000013)
[    0.022965] ACPI: FACP 0x000000007FB78000 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.023985] ACPI: DSDT 0x000000007FB79000 002C41 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.024956] ACPI: FACS 0x000000007FBDB000 000040
[    0.025499] ACPI: APIC 0x000000007FB77000 000080 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.026479] ACPI: SSDT 0x000000007FB76000 0000CA (v01 BOCHS  VMGENID  00000001 BXPC 00000001)
[    0.027459] ACPI: HPET 0x000000007FB75000 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.028441] ACPI: WAET 0x000000007FB74000 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.029433] ACPI: BGRT 0x000000007FB73000 000038 (v01 INTEL  EDK2     00000002      01000013)
[    0.030410] ACPI: Reserving FACP table memory at [mem 0x7fb78000-0x7fb78073]
[    0.031222] ACPI: Reserving DSDT table memory at [mem 0x7fb79000-0x7fb7bc40]
[    0.032025] ACPI: Reserving FACS table memory at [mem 0x7fbdb000-0x7fbdb03f]
[    0.032828] ACPI: Reserving APIC table memory at [mem 0x7fb77000-0x7fb7707f]
[    0.033655] ACPI: Reserving SSDT table memory at [mem 0x7fb76000-0x7fb760c9]
[    0.034461] ACPI: Reserving HPET table memory at [mem 0x7fb75000-0x7fb75037]
[    0.035263] ACPI: Reserving WAET table memory at [mem 0x7fb74000-0x7fb74027]
[    0.036061] ACPI: Reserving BGRT table memory at [mem 0x7fb73000-0x7fb73037]
[    0.037127] No NUMA configuration found
[    0.037577] Faking a node at [mem 0x0000000000000000-0x000000007feebfff]
[    0.038354] NODE_DATA(0) allocated [mem 0x7fe80000-0x7fe83fff]
[    0.039107] Zone ranges:
[    0.039406]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
[    0.040109]   DMA32    [mem 0x0000000001000000-0x000000007feebfff]
[    0.040819]   Normal   empty
[    0.041148] Movable zone start for each node
[    0.041635] Early memory node ranges
[    0.042041]   node   0: [mem 0x0000000000001000-0x000000000009ffff]
[    0.042762]   node   0: [mem 0x0000000000100000-0x0000000000805fff]
[    0.043481]   node   0: [mem 0x0000000000808000-0x000000000080ffff]
[    0.044198]   node   0: [mem 0x0000000000900000-0x000000007eed7fff]
[    0.044918]   node   0: [mem 0x000000007efda000-0x000000007f8edfff]
[    0.045643]   node   0: [mem 0x000000007fbfe000-0x000000007feebfff]
[    0.046365] Initmem setup node 0 [mem 0x0000000000001000-0x000000007feebfff]
[    0.047187] On node 0, zone DMA: 1 pages in unavailable ranges
[    0.047252] On node 0, zone DMA: 96 pages in unavailable ranges
[    0.047919] On node 0, zone DMA: 2 pages in unavailable ranges
[    0.048657] On node 0, zone DMA: 240 pages in unavailable ranges
[    0.061517] On node 0, zone DMA32: 258 pages in unavailable ranges
[    0.062252] On node 0, zone DMA32: 784 pages in unavailable ranges
[    0.062969] On node 0, zone DMA32: 276 pages in unavailable ranges
[    0.064268] ACPI: PM-Timer IO Port: 0xb008
[    0.065474] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[    0.066177] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[    0.066972] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.067698] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[    0.068457] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.069240] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[    0.070019] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[    0.070804] ACPI: Using ACPI (MADT) for SMP configuration information
[    0.071543] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[    0.072188] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
[    0.072850] [mem 0x80000000-0xffbfffff] available for PCI devices
[    0.073561] Booting paravirtualized kernel on KVM
[    0.074120] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
[    0.085923] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
[    0.087754] percpu: Embedded 55 pages/cpu s185808 r8192 d31280 u1048576
[    0.088568] kvm-guest: stealtime: cpu 0, msr 7e41c100
[    0.089183] Built 1 zonelists, mobility grouping on.  Total pages: 512497
[    0.089982] Policy zone: DMA32
[    0.090338] Kernel command line: vmlinuz console=tty0 console=ttyS0 consoleblank=0 earlyprintk=ttyS0 ima_appraise=fix ima_hash=sha512 ima_template=ima-ng init_on_alloc=1 initrd=initramfs.xz nvme_core.io_timeout=4294967295 printk.devkmsg=on pti=on slab_nomerge= talos.platform=metal talos.config=http://demo.talos.mimir-tech.org:8081/configdata?uuid= siderolink.api=demo.talos.mimir-tech.org:8081 talos.logging.kernel=tcp://[fd28:1f64:7804:c703::1]:4001 talos.events.sink=[fd28:1f64:7804:c703::1]:4002
[    0.096054] Unknown kernel command line parameters "vmlinuz pti=on", will be passed to user space.
[    0.097682] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
[    0.098644] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    0.099606] mem auto-init: stack:byref_all(zero), heap alloc:on, heap free:off
[    0.105886] Memory: 1821488K/2090524K available (32793K kernel code, 4067K rwdata, 19684K rodata, 3212K init, 1400K bss, 268776K reserved, 0K cma-reserved)
[    0.107753] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
[    0.108506] Kernel/User page tables isolation: enabled
[    0.109108] ftrace: allocating 92043 entries in 360 pages
[    0.158487] ftrace: allocated 360 pages with 4 groups
[    0.159117] rcu: Hierarchical RCU implementation.
[    0.159492] rcu:     RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
[    0.160007]  Rude variant of Tasks RCU enabled.
[    0.160347]  Tracing variant of Tasks RCU enabled.
[    0.160715] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[    0.161294] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
[    0.167582] NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
[    0.168464] Console: colour dummy device 80x25
[    0.169180] printk: console [tty0] enabled
[    0.169694] printk: console [ttyS0] enabled
[    0.169694] printk: console [ttyS0] enabled
[    0.170698] printk: bootconsole [earlyser0] disabled
[    0.170698] printk: bootconsole [earlyser0] disabled
[    0.171915] ACPI: Core revision 20210730
[    0.172525] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
[    0.173712] APIC: Switch to symmetric I/O mode setup
[    0.174475] x2apic enabled
[    0.174967] Switched APIC routing to physical x2apic.
[    0.176273] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[    0.176989] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x172da757658, max_idle_ns: 440795229495 ns
[    0.178251] Calibrating delay loop (skipped) preset value.. 3215.99 BogoMIPS (lpj=6431988)
[    0.179215] pid_max: default: 32768 minimum: 301
[    0.183090] LSM: Security Framework initializing
[    0.183635] Yama: becoming mindful.
[    0.184152] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
[    0.185045] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Poking KASLR using RDTSC...
[    0.186747] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[    0.187364] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
[    0.188061] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[    0.189056] Spectre V2 : Mitigation: Retpolines
[    0.189589] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[    0.190249] Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
[    0.191034] Speculative Store Bypass: Vulnerable
[    0.191580] MDS: Vulnerable: Clear CPU buffers attempted, no microcode
[    0.192336] MMIO Stale Data: Unknown: No mitigations
[    0.215625] Freeing SMP alternatives memory: 76K
[    0.324314] smpboot: CPU0: Intel Common KVM processor (family: 0xf, model: 0x6, stepping: 0x1)
[    0.325381] Performance Events: unsupported Netburst CPU model 6 no PMU driver, software events only.
[    0.326245] rcu: Hierarchical SRCU implementation.
[    0.326245] smp: Bringing up secondary CPUs ...
[    0.326323] x86: Booting SMP configuration:
[    0.326824] .... node  #0, CPUs:      #1
[    0.005822] kvm-clock: cpu 1, msr 45cc6041, secondary cpu clock
[    0.005822] smpboot: CPU 1 Converting physical 0 to logical die 1
[    0.342271] kvm-guest: stealtime: cpu 1, msr 7e51c100
[    0.342674] smp: Brought up 1 node, 2 CPUs
[    0.344778] smpboot: Max logical packages: 2
[    0.346250] smpboot: Total of 2 processors activated (6431.98 BogoMIPS)
[    0.347486] devtmpfs: initialized
[    0.347486] ACPI: PM: Registering ACPI NVS region [mem 0x00806000-0x00807fff] (8192 bytes)
[    0.350249] ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
[    0.351054] ACPI: PM: Registering ACPI NVS region [mem 0x7fb7e000-0x7fbfdfff] (524288 bytes)
[    0.351816] ACPI: PM: Registering ACPI NVS region [mem 0x7ff70000-0x7fffffff] (589824 bytes)
[    0.352725] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
[    0.353634] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
[    0.354416] PM: RTC time: 19:18:36, date: 2023-01-25
[    0.355122] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[    0.355930] audit: initializing netlink subsys (disabled)
[    0.356522] audit: type=2000 audit(1674674316.262:1): state=initialized audit_enabled=0 res=1
[    0.356522] thermal_sys: Registered thermal governor 'step_wise'
[    0.356522] thermal_sys: Registered thermal governor 'user_space'
[    0.358280] cpuidle: using governor menu
[    0.359225] ACPI: bus type PCI registered
[    0.359546] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    0.360122] dca service started, version 1.12.1
[    0.360512] PCI: Using configuration type 1 for base access
[    0.364748] Kprobes globally optimized
[    0.365083] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[    0.366290] cryptd: max_cpu_qlen set to 1000
[    0.366607] ACPI: Added _OSI(Module Device)
[    0.366613] ACPI: Added _OSI(Processor Device)
[    0.366949] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.367302] ACPI: Added _OSI(Processor Aggregator Device)
[    0.367708] ACPI: Added _OSI(Linux-Dell-Video)
[    0.368039] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
[    0.370256] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
[    0.371447] ACPI: 2 ACPI AML tables successfully acquired and loaded
[    0.372331] ACPI: Interpreter enabled
[    0.372624] ACPI: PM: (supports S0 S3 S5)
[    0.372927] ACPI: Using IOAPIC for interrupt routing
[    0.373305] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.374045] ACPI: Enabled 3 GPEs in block 00 to 0F
[    0.376945] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[    0.377419] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
[    0.378008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[    0.378375] acpiphp: Slot [3] registered
[    0.378672] acpiphp: Slot [4] registered
[    0.378997] acpiphp: Slot [5] registered
[    0.379311] acpiphp: Slot [6] registered
[    0.379626] acpiphp: Slot [7] registered
[    0.379942] acpiphp: Slot [8] registered
[    0.380252] acpiphp: Slot [9] registered
[    0.380564] acpiphp: Slot [10] registered
[    0.380893] acpiphp: Slot [11] registered
[    0.381212] acpiphp: Slot [12] registered
[    0.382261] acpiphp: Slot [13] registered
[    0.382572] acpiphp: Slot [14] registered
[    0.382933] acpiphp: Slot [15] registered
[    0.383284] acpiphp: Slot [16] registered
[    0.383658] acpiphp: Slot [17] registered
[    0.384045] acpiphp: Slot [18] registered
[    0.384435] acpiphp: Slot [19] registered
[    0.384761] acpiphp: Slot [20] registered
[    0.385099] acpiphp: Slot [21] registered
[    0.385519] acpiphp: Slot [22] registered
[    0.385936] acpiphp: Slot [23] registered
[    0.386266] acpiphp: Slot [24] registered
[    0.386626] acpiphp: Slot [25] registered
[    0.387051] acpiphp: Slot [26] registered
[    0.387406] acpiphp: Slot [27] registered
[    0.387822] acpiphp: Slot [28] registered
[    0.388244] acpiphp: Slot [29] registered
[    0.388653] PCI host bridge to bus 0000:00
[    0.388997] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
[    0.389665] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
[    0.390247] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[    0.390925] pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
[    0.391624] pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
[    0.392323] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.392928] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
[    0.393914] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
[    0.394601] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
[    0.396539] pci 0000:00:01.1: reg 0x20: [io  0xd2c0-0xd2cf]
[    0.397501] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
[    0.398247] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
[    0.398752] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
[    0.399404] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
[    0.400113] pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
[    0.401914] pci 0000:00:01.2: reg 0x20: [io  0xd2a0-0xd2bf]
[    0.402874] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
[    0.403626] pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
[    0.404300] pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
[    0.405153] pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
[    0.409575] pci 0000:00:02.0: reg 0x10: [mem 0x80000000-0x80ffffff pref]
[    0.411738] pci 0000:00:02.0: reg 0x18: [mem 0x81442000-0x81442fff]
[    0.414955] pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
[    0.415702] pci 0000:00:02.0: BAR 0: assigned to efifb
[    0.416646] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[    0.418384] pci 0000:00:03.0: [1af4:1002] type 00 class 0x00ff00
[    0.419246] pci 0000:00:03.0: reg 0x10: [io  0xd240-0xd27f]
[    0.421006] pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
[    0.422545] pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
[    0.423683] pci 0000:00:05.0: reg 0x10: [io  0xd200-0xd23f]
[    0.424853] pci 0000:00:05.0: reg 0x14: [mem 0x81441000-0x81441fff]
[    0.427793] pci 0000:00:05.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
[    0.429475] pci 0000:00:12.0: [1af4:1000] type 00 class 0x020000
[    0.430488] pci 0000:00:12.0: reg 0x10: [io  0xd280-0xd29f]
[    0.431460] pci 0000:00:12.0: reg 0x14: [mem 0x81440000-0x81440fff]
[    0.433431] pci 0000:00:12.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
[    0.434508] pci 0000:00:12.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
[    0.435893] pci 0000:00:1e.0: [1b36:0001] type 01 class 0x060400
[    0.437209] pci 0000:00:1e.0: reg 0x10: [mem 0x80000d000-0x80000d0ff 64bit]
[    0.439495] pci 0000:00:1f.0: [1b36:0001] type 01 class 0x060400
[    0.440596] pci 0000:00:1f.0: reg 0x10: [mem 0x80000c000-0x80000c0ff 64bit]
[    0.443070] pci_bus 0000:01: extended config space not accessible
[    0.443707] acpiphp: Slot [0] registered
[    0.444090] acpiphp: Slot [1] registered
[    0.444450] acpiphp: Slot [2] registered
[    0.444842] acpiphp: Slot [3-1] registered
[    0.445217] acpiphp: Slot [4-1] registered
[    0.445600] acpiphp: Slot [5-1] registered
[    0.445980] acpiphp: Slot [6-1] registered
[    0.446266] acpiphp: Slot [7-1] registered
[    0.446676] acpiphp: Slot [8-1] registered
[    0.447105] acpiphp: Slot [9-1] registered
[    0.447532] acpiphp: Slot [10-1] registered
[    0.447922] acpiphp: Slot [11-1] registered
[    0.448358] acpiphp: Slot [12-1] registered
[    0.449096] acpiphp: Slot [13-1] registered
[    0.449473] acpiphp: Slot [14-1] registered
[    0.449912] acpiphp: Slot [15-1] registered
[    0.450263] acpiphp: Slot [16-1] registered
[    0.450692] acpiphp: Slot [17-1] registered
[    0.451107] acpiphp: Slot [18-1] registered
[    0.451537] acpiphp: Slot [19-1] registered
[    0.451968] acpiphp: Slot [20-1] registered
[    0.452397] acpiphp: Slot [21-1] registered
[    0.452833] acpiphp: Slot [22-1] registered
[    0.453250] acpiphp: Slot [23-1] registered
[    0.453666] acpiphp: Slot [24-1] registered
[    0.454269] acpiphp: Slot [25-1] registered
[    0.454676] acpiphp: Slot [26-1] registered
[    0.455109] acpiphp: Slot [27-1] registered
[    0.455558] acpiphp: Slot [28-1] registered
[    0.455927] acpiphp: Slot [29-1] registered
[    0.456368] acpiphp: Slot [30] registered
[    0.456792] acpiphp: Slot [31] registered
[    0.457332] pci 0000:00:1e.0: PCI bridge to [bus 01]
[    0.457793] pci 0000:00:1e.0:   bridge window [io  0xd000-0xdfff]
[    0.458256] pci 0000:00:1e.0:   bridge window [mem 0x81200000-0x813fffff]
[    0.460123] pci_bus 0000:02: extended config space not accessible
[    0.460741] acpiphp: Slot [0-1] registered
[    0.461102] acpiphp: Slot [1-1] registered
[    0.461454] acpiphp: Slot [2-1] registered
[    0.461879] acpiphp: Slot [3-2] registered
[    0.462263] acpiphp: Slot [4-2] registered
[    0.462692] acpiphp: Slot [5-2] registered
[    0.463109] acpiphp: Slot [6-2] registered
[    0.463517] acpiphp: Slot [7-2] registered
[    0.463896] acpiphp: Slot [8-2] registered
[    0.464244] acpiphp: Slot [9-2] registered
[    0.464593] acpiphp: Slot [10-2] registered
[    0.464995] acpiphp: Slot [11-2] registered
[    0.465356] acpiphp: Slot [12-2] registered
[    0.465764] acpiphp: Slot [13-2] registered
[    0.466269] acpiphp: Slot [14-2] registered
[    0.466683] acpiphp: Slot [15-2] registered
[    0.467116] acpiphp: Slot [16-2] registered
[    0.467509] acpiphp: Slot [17-2] registered
[    0.467920] acpiphp: Slot [18-2] registered
[    0.468357] acpiphp: Slot [19-2] registered
[    0.468742] acpiphp: Slot [20-2] registered
[    0.469146] acpiphp: Slot [21-2] registered
[    0.469580] acpiphp: Slot [22-2] registered
[    0.469975] acpiphp: Slot [23-2] registered
[    0.470268] acpiphp: Slot [24-2] registered
[    0.470644] acpiphp: Slot [25-2] registered
[    0.471047] acpiphp: Slot [26-2] registered
[    0.471446] acpiphp: Slot [27-2] registered
[    0.471864] acpiphp: Slot [28-2] registered
[    0.472228] acpiphp: Slot [29-2] registered
[    0.472658] acpiphp: Slot [30-1] registered
[    0.473098] acpiphp: Slot [31-1] registered
[    0.473526] pci 0000:00:1f.0: PCI bridge to [bus 02]
[    0.473947] pci 0000:00:1f.0:   bridge window [io  0xc000-0xcfff]
[    0.474257] pci 0000:00:1f.0:   bridge window [mem 0x81000000-0x811fffff]
[    0.476018] ACPI: PCI: Interrupt link LNKA configured for IRQ 10
[    0.476645] ACPI: PCI: Interrupt link LNKB configured for IRQ 10
[    0.477299] ACPI: PCI: Interrupt link LNKC configured for IRQ 11
[    0.478302] ACPI: PCI: Interrupt link LNKD configured for IRQ 11
[    0.478859] ACPI: PCI: Interrupt link LNKS configured for IRQ 9
[    0.479947] iommu: Default domain type: Translated
[    0.479947] iommu: DMA domain TLB invalidation policy: lazy mode
[    0.479947] pci 0000:00:02.0: vgaarb: setting as boot VGA device
[    0.479947] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
[    0.482247] pci 0000:00:02.0: vgaarb: bridge control possible
[    0.482726] vgaarb: loaded
[    0.483203] SCSI subsystem initialized
[    0.483613] ACPI: bus type USB registered
[    0.483613] usbcore: registered new interface driver usbfs
[    0.483613] usbcore: registered new interface driver hub
[    0.483704] usbcore: registered new device driver usb
[    0.484170] pps_core: LinuxPPS API ver. 1 registered
[    0.486246] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[    0.487093] PTP clock support registered
[    0.487536] EDAC MC: Ver: 3.0.0
[    0.487536] Registered efivars operations
[    0.487536] NET: Registered PF_ATMPVC protocol family
[    0.487536] NET: Registered PF_ATMSVC protocol family
[    0.487947] NetLabel: Initializing
[    0.490247] NetLabel:  domain hash size = 128
[    0.490675] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
[    0.491202] NetLabel:  unlabeled traffic allowed by default
[    0.491777] PCI: Using ACPI for IRQ routing
[    0.491777] pci 0000:00:01.1: can't claim BAR 4 [io  0xd2c0-0xd2cf]: address conflict with PCI Bus 0000:01 [io  0xd000-0xdfff]
[    0.491829] pci 0000:00:01.2: can't claim BAR 4 [io  0xd2a0-0xd2bf]: address conflict with PCI Bus 0000:01 [io  0xd000-0xdfff]
[    0.494269] pci 0000:00:03.0: can't claim BAR 0 [io  0xd240-0xd27f]: address conflict with PCI Bus 0000:01 [io  0xd000-0xdfff]
[    0.495278] pci 0000:00:05.0: can't claim BAR 0 [io  0xd200-0xd23f]: address conflict with PCI Bus 0000:01 [io  0xd000-0xdfff]
[    0.496348] pci 0000:00:12.0: can't claim BAR 0 [io  0xd280-0xd29f]: address conflict with PCI Bus 0000:01 [io  0xd000-0xdfff]
[    0.498310] hpet: 3 channels of 0 reserved for per-cpu timers
[    0.498815] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[    0.499308] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
[    0.502357] clocksource: Switched to clocksource kvm-clock
[    0.660717] VFS: Disk quotas dquot_6.6.0
[    0.661191] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.662190] pnp: PnP ACPI init
[    0.662823] pnp: PnP ACPI: found 5 devices
[    0.674265] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[    0.675113] NET: Registered PF_INET protocol family
[    0.675706] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    0.677690] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
[    0.678454] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
[    0.679188] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
[    0.679952] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
[    0.680540] TCP: Hash tables configured (established 16384 bind 16384)
[    0.681140] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
[    0.681709] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
[    0.682355] NET: Registered PF_UNIX/PF_LOCAL protocol family
[    0.683083] RPC: Registered named UNIX socket transport module.
[    0.683500] RPC: Registered udp transport module.
[    0.683863] RPC: Registered tcp transport module.
[    0.684248] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    0.684933] pci 0000:00:12.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
[    0.685748] pci 0000:00:12.0: BAR 6: assigned [mem 0x81400000-0x8143ffff pref]
[    0.686404] pci 0000:00:03.0: BAR 0: assigned [io  0x1000-0x103f]
[    0.687241] pci 0000:00:05.0: BAR 0: assigned [io  0x1040-0x107f]
[    0.688001] pci 0000:00:01.2: BAR 4: assigned [io  0x1080-0x109f]
[    0.688773] pci 0000:00:12.0: BAR 0: assigned [io  0x10a0-0x10bf]
[    0.689528] pci 0000:00:01.1: BAR 4: assigned [io  0x10c0-0x10cf]
[    0.690256] pci 0000:00:1e.0: PCI bridge to [bus 01]
[    0.690678] pci 0000:00:1e.0:   bridge window [io  0xd000-0xdfff]
[    0.691628] pci 0000:00:1e.0:   bridge window [mem 0x81200000-0x813fffff]
[    0.692970] pci 0000:00:1f.0: PCI bridge to [bus 02]
[    0.693377] pci 0000:00:1f.0:   bridge window [io  0xc000-0xcfff]
[    0.694267] pci 0000:00:1f.0:   bridge window [mem 0x81000000-0x811fffff]
[    0.695666] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
[    0.696246] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
[    0.696701] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
[    0.697656] pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
[    0.698243] pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
[    0.698837] pci_bus 0000:01: resource 0 [io  0xd000-0xdfff]
[    0.699277] pci_bus 0000:01: resource 1 [mem 0x81200000-0x813fffff]
[    0.699768] pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
[    0.700241] pci_bus 0000:02: resource 1 [mem 0x81000000-0x811fffff]
[    0.700754] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[    0.701300] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    0.701761] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[    0.713712] ACPI: \_SB_.LNKD: Enabled at IRQ 11
[    0.725349] pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 22494 usecs
[    0.726050] PCI: CLS 0 bytes, default 64
[    0.726516] Unpacking initramfs...
[    0.726678] kvm: no hardware support
[    0.727259] has_svm: not amd or hygon
[    0.727626] kvm: no hardware support
[    0.728028] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x172da757658, max_idle_ns: 440795229495 ns
[    0.750773] Initialise system trusted keyrings
[    0.758295] workingset: timestamp_bits=40 max_order=19 bucket_order=0
[    0.761550] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[    0.762581] NFS: Registering the id_resolver key type
[    0.763137] Key type id_resolver registered
[    0.763587] Key type id_legacy registered
[    0.764034] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[    0.764843] nfs4flexfilelayout_init: NFSv4 Flexfile Layout Driver Registering...
[    0.778482] Key type cifs.spnego registered
[    0.778968] Key type cifs.idmap registered
[    0.779472] fuse: init (API version 7.34)
[    0.780069] SGI XFS with ACLs, security attributes, quota, no debug enabled
[    0.781618] ceph: loaded (mds proto 32)
[    0.787759] NET: Registered PF_ALG protocol family
[    0.788275] Key type asymmetric registered
[    0.788711] Asymmetric key parser 'x509' registered
[    0.789279] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
[    0.790128] io scheduler mq-deadline registered
[    0.790623] io scheduler kyber registered
[    0.791466] IPMI message handler: version 39.2
[    0.791960] ipmi device interface
[    0.792421] ipmi_si: IPMI System Interface driver
[    0.793004] ipmi_si: Unable to find any System Interface(s)
[    0.793611] IPMI poweroff: Copyright (C) 2004 MontaVista Software - IPMI Powerdown via sys_reboot
[    0.794861] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[    0.795753] ACPI: button: Power Button [PWRF]
[    0.796801] ioatdma: Intel(R) QuickData Technology Driver 5.00
[    0.819078] ACPI: \_SB_.LNKC: Enabled at IRQ 10
[    0.840829] ACPI: \_SB_.LNKA: Enabled at IRQ 10
[    0.853699] ACPI: \_SB_.LNKB: Enabled at IRQ 11
[    0.854975] Free page reporting enabled
[    0.855524] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    0.856035] 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[    0.857372] Non-volatile memory driver v1.3
[    0.857712] Linux agpgart interface v0.103
[    0.858349] [drm] amdgpu kernel modesetting enabled.
[    0.860642] loop: module loaded
[    0.861057] rbd: loaded (major 252)
[    0.861337] Guest personality initialized and is inactive
[    0.861863] VMCI host device registered (name=vmci, major=10, minor=126)
[    0.862338] Initialized host personality
[    0.862639] Loading iSCSI transport class v2.0-870.
[    0.863413] iscsi: registered transport (tcp)
[    0.863721] Adaptec aacraid driver 1.2.1[50983]-custom
[    0.864091] isci: Intel(R) C600 SAS Controller Driver - version 1.2.0
[    0.864559] Microchip SmartPQI Driver (v2.1.10-020)
[    0.864926] megasas: 07.717.02.00-rc1
[    0.865202] mpt3sas version 39.100.00.00 loaded
[    0.866757] scsi host0: Virtio SCSI HBA
[    0.867728] scsi 0:0:0:0: Direct-Access     QEMU     QEMU HARDDISK    2.5+ PQ: 0 ANSI: 5
[    0.878819] VMware PVSCSI driver - version 1.0.7.0-k
[    0.879223] hv_vmbus: registering driver hv_storvsc
[    0.879699] sd 0:0:0:0: Power-on or device reset occurred
[    0.880403] sd 0:0:0:0: [sda] 104857600 512-byte logical blocks: (53.7 GB/50.0 GiB)
[    0.880951] sd 0:0:0:0: [sda] Write Protect is off
[    0.881369] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    0.890205] sd 0:0:0:0: [sda] Attached SCSI disk
[    0.890921] sd 0:0:0:0: Attached scsi generic sg0 type 0
[    0.899539] scsi host1: ata_piix
[    0.899912] scsi host2: ata_piix
[    0.900182] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0x10c0 irq 14
[    0.900647] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0x10c8 irq 15
[    0.902184] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
[    0.902742] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
[    0.903702] tun: Universal TUN/TAP device driver, 1.6
[    0.905755] e100: Intel(R) PRO/100 Network Driver
[    0.906090] e100: Copyright(c) 1999-2006 Intel Corporation
[    0.906516] e1000: Intel(R) PRO/1000 Network Driver
[    0.906868] e1000: Copyright (c) 1999-2006 Intel Corporation.
[    0.907270] e1000e: Intel(R) PRO/1000 Network Driver
[    0.907613] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[    0.908044] igb: Intel(R) Gigabit Ethernet Network Driver
[    0.908426] igb: Copyright (c) 2007-2014 Intel Corporation.
[    0.908823] Intel(R) 2.5G Ethernet Linux Driver
[    0.909139] Copyright(c) 2018 Intel Corporation.
[    0.909466] igbvf: Intel(R) Gigabit Virtual Function Network Driver
[    0.909899] igbvf: Copyright (c) 2009 - 2012 Intel Corporation.
[    0.910459] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver
[    0.910890] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[    0.911411] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver
[    0.911943] ixgbevf: Copyright (c) 2009 - 2018 Intel Corporation.
[    0.912447] i40e: Intel(R) Ethernet Connection XL710 Network Driver
[    0.912880] i40e: Copyright (c) 2013 - 2019 Intel Corporation.
[    0.913423] ixgb: Intel(R) PRO/10GbE Network Driver
[    0.913764] ixgb: Copyright (c) 1999-2008 Intel Corporation.
[    0.914174] iavf: Intel(R) Ethernet Adaptive Virtual Function Network Driver
[    0.914667] Copyright (c) 2013 - 2018 Intel Corporation.
[    0.915144] ice: Intel(R) Ethernet Connection E800 Series Linux Driver
[    0.915591] ice: Copyright (c) 2018, Intel Corporation.
[    0.916065] sky2: driver version 1.30
[    0.918392] QLogic 1/10 GbE Converged/Intelligent Ethernet Driver v5.3.66
[    0.918895] QLogic/NetXen Network Driver v4.0.82
[    0.919230] QLogic FastLinQ 4xxxx Core Module qed
[    0.919558] qede init: QLogic FastLinQ 4xxxx Ethernet Driver qede
[    0.920022] VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI
[    0.920495] usbcore: registered new interface driver r8152
[    0.920881] hv_vmbus: registering driver hv_netvsc
[    0.921215] Fusion MPT base driver 3.04.20
[    0.921499] Copyright (c) 1999-2008 LSI Corporation
[    0.921841] Fusion MPT SAS Host driver 3.04.20
[    0.922169] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    0.922634] ehci-pci: EHCI PCI platform driver
[    0.923449] usbcore: registered new interface driver cdc_acm
[    0.923847] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
[    0.924425] usbcore: registered new interface driver usb-storage
[    0.924882] usbcore: registered new interface driver ch341
[    0.925313] usbserial: USB Serial support registered for ch341-uart
[    0.925873] usbcore: registered new interface driver cp210x
[    0.926434] usbserial: USB Serial support registered for cp210x
[    0.926946] usbcore: registered new interface driver ftdi_sio
[    0.927429] usbserial: USB Serial support registered for FTDI USB Serial Device
[    0.928089] usbcore: registered new interface driver pl2303
[    0.928621] usbserial: USB Serial support registered for pl2303
[    0.929245] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[    0.930443] serio: i8042 KBD port at 0x60,0x64 irq 1
[    0.930813] serio: i8042 AUX port at 0x60,0x64 irq 12
[    0.931294] hv_vmbus: registering driver hyperv_keyboard
[    0.931858] mousedev: PS/2 mouse device common for all mice
[    0.932549] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[    0.933782] rtc_cmos 00:04: RTC can wake from S4
[    0.935034] rtc_cmos 00:04: registered as rtc0
[    0.935468] rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
[    0.936469] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[    0.937646] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
[    0.938517] intel_pstate: CPU model not supported
[    0.938964] sdhci: Secure Digital Host Controller Interface driver
[    0.939497] sdhci: Copyright(c) Pierre Ossman
[    0.939869] sdhci-pltfm: SDHCI platform and OF driver helper
[    0.942305] efifb: probing for efifb
[    0.942609] efifb: framebuffer at 0x80000000, using 3072k, total 3072k
[    0.943135] efifb: mode is 1024x768x32, linelength=4096, pages=1
[    0.943639] efifb: scrolling: redraw
[    0.943903] efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
[    0.946486] Console: switching to colour frame buffer device 128x48
[    0.948025] fb0: EFI VGA frame buffer device
[    0.948402] EFI Variables Facility v0.08 2004-May-17
[    0.950315] hid: raw HID events driver (C) Jiri Kosina
[    0.951454] usbcore: registered new interface driver usbhid
[    0.951932] usbhid: USB HID core driver
[    0.952508] hv_utils: Registering HyperV Utility Driver
[    0.952973] hv_vmbus: registering driver hv_utils
[    0.953435] NET: Registered PF_LLC protocol family
[    0.953856] GACT probability NOT on
[    0.954195] Mirror/redirect action on
[    0.954566] Simple TC action Loaded
[    0.955078] netem: version 1.3
[    0.955333] u32 classifier
[    0.955568]     input device check on
[    0.955871]     Actions configured
[    0.956955] xt_time: kernel timezone is -0000
[    0.957372] IPVS: Registered protocols (TCP, UDP)
[    0.957808] IPVS: Connection hash table configured (size=4096, memory=32Kbytes)
[    0.958871] IPVS: ipvs loaded.
[    0.959507] IPVS: [rr] scheduler registered.
[    0.960228] IPVS: [wrr] scheduler registered.
[    0.961046] IPVS: [lc] scheduler registered.
[    0.961769] IPVS: [sh] scheduler registered.
[    0.962524] ipip: IPv4 and MPLS over IPv4 tunneling driver
[    0.963547] gre: GRE over IPv4 demultiplexor driver
[    0.964315] Initializing XFRM netlink socket
[    0.965154] NET: Registered PF_INET6 protocol family
[    0.966327] Segment Routing with IPv6
[    0.967018] In-situ OAM (IOAM) with IPv6
[    0.967725] mip6: Mobile IPv6
[    0.968399] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver
[    0.969563] NET: Registered PF_PACKET protocol family
[    0.970424] Bridge firewalling registered
[    0.971181] NET: Registered PF_APPLETALK protocol family
[    0.971985] NET: Registered PF_X25 protocol family
[    0.972719] X25: Linux Version 0.2
[    0.973348] RPC: Registered rdma transport module.
[    0.974035] RPC: Registered rdma backchannel transport module.
[    0.974943] l2tp_core: L2TP core driver, V2.0
[    0.975623] NET4: DECnet for Linux: V.2.5.68s (C) 1995-2003 Linux DECnet Project Team
[    0.976686] DECnet: Routing cache hash table of 1024 buckets, 16Kbytes
[    0.977572] NET: Registered PF_DECnet protocol family
[    0.978400] NET: Registered PF_PHONET protocol family
[    0.979212] 8021q: 802.1Q VLAN Support v1.8
[    0.982673] DCCP: Activated CCID 2 (TCP-like)
[    0.983467] DCCP: Activated CCID 3 (TCP-Friendly Rate Control)
[    0.984398] sctp: Hash tables configured (bind 256/256)
[    0.986283] NET: Registered PF_RDS protocol family
[    1.196672] Freeing initrd memory: 53860K
[    1.200676] NET: Registered PF_IEEE802154 protocol family
[    1.201637] Key type dns_resolver registered
[    1.202453] Key type ceph registered
[    1.203331] libceph: loaded (mon/osd proto 15/24)
[    1.204190] openvswitch: Open vSwitch switching datapath
[    1.205577] NET: Registered PF_VSOCK protocol family
[    1.206736] mpls_gso: MPLS GSO support
[    1.207879] IPI shorthand broadcast: enabled
[    1.208706] sched_clock: Marking stable (1205788089, 1822546)->(1225955918, -18345283)
[    1.210121] registered taskstats version 1
[    1.211090] Loading compiled-in X.509 certificates
[    1.212566] Loaded X.509 cert 'Sidero Labs, Inc.: Build time throw-away kernel key: fde47fcd1b30b3d7e614c0b28aa5ec27aa2443b8'
[    1.214565] ima: No TPM chip found, activating TPM-bypass!
[    1.215582] ima: Allocated hash algorithm: sha512
[    1.216551] ima: No architecture policies found
[    1.217806] PM:   Magic number: 7:136:345
[    1.218676] tty tty62: hash matches
[    1.219606] printk: console [netcon0] enabled
[    1.220511] netconsole: network logging started
[    1.221620] rdma_rxe: loaded
[    1.222387] cfg80211: Loading compiled-in X.509 certificates for regulatory database
[    1.223953] cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
[    1.226347] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[    1.227745] cfg80211: failed to load regulatory.db
[    1.229800] Freeing unused kernel image (initmem) memory: 3212K
[    1.246310] Write protecting the kernel read-only data: 55296k
[    1.248525] Freeing unused kernel image (text/rodata gap) memory: 2020K
[    1.249922] Freeing unused kernel image (rodata/data gap) memory: 796K
[    1.252917] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[    1.254071] x86/mm: Checking user space page tables
[    1.255298] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[    1.256479] Run /init as init process
[    1.547155] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
[    3.242294] random: crng init done
[    3.246275] [talos] [initramfs] booting Talos v1.3.0
[    3.247309] [talos] [initramfs] mounting the rootfs
[    3.248430] loop0: detected capacity change from 0 to 95616
[    3.283284] [talos] [initramfs] bind mounting /lib/firmware
[    3.285335] [talos] [initramfs] entering the rootfs
[    3.286358] [talos] [initramfs] moving mounts to the new rootfs
[    3.287635] [talos] [initramfs] changing working directory into /root
[    3.288828] [talos] [initramfs] moving /root to /
[    3.289835] [talos] [initramfs] changing root directory
[    3.290918] [talos] [initramfs] cleaning up initramfs
[    3.292125] [talos] [initramfs] executing /sbin/init
2023/01/25 19:18:40 waiting 1 second(s) for USB storage
2023/01/25 19:18:41 initialize sequence: 5 phase(s)
2023/01/25 19:18:41 phase logger (1/5): 1 tasks(s)
2023/01/25 19:18:41 task setupLogger (1/1): starting
[    5.757554] [talos] task setupLogger (1/1): done, 2.716112ms
[    5.758457] [talos] phase logger (1/5): done, 3.972723ms
[    5.759415] [talos] phase systemRequirements (2/5): 7 tasks(s)
[    5.760344] [talos] task dropCapabilities (7/7): starting
[    5.761627] [talos] task dropCapabilities (7/7): done, 1.29262ms
[    5.762528] [talos] task enforceKSPPRequirements (1/7): starting
[    5.765815] [talos] task setupSystemDirectory (2/7): starting
[    5.766983] [talos] task setupSystemDirectory (2/7): done, 6.17214ms
[    5.768147] [talos] task mountBPFFS (3/7): starting
[    5.769317] [talos] task mountBPFFS (3/7): done, 7.51883ms
[    5.770372] [talos] task mountCgroups (4/7): starting
[    5.771419] [talos] task mountCgroups (4/7): done, 8.907325ms
[    5.772479] [talos] task mountPseudoFilesystems (5/7): starting
[    5.773759] [talos] task mountPseudoFilesystems (5/7): done, 11.23939ms
[    5.774897] [talos] task setRLimit (6/7): starting
[    5.775875] [talos] task setRLimit (6/7): done, 13.351579ms
[    5.804271] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["1.1.1.1", "8.8.8.8"]}
[    5.806928] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["pool.ntp.org"]}
[    5.810916] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["1.1.1.1", "8.8.8.8"]}
[    5.813425] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["pool.ntp.org"]}
[    5.816917] [talos] assigned address {"component": "controller-runtime", "controller": "network.AddressSpecController", "address": "127.0.0.1/8", "link": "lo"}
[    5.819900] [talos] task enforceKSPPRequirements (1/7): done, 58.148778ms
[    5.820979] [talos] phase systemRequirements (2/5): done, 61.564936ms
[    5.821943] [talos] phase integrity (3/5): 1 tasks(s)
[    5.822843] [talos] task writeIMAPolicy (1/1): starting
[    5.823769] audit: type=1807 audit(1674674321.730:2): action=dont_measure fsmagic=0x9fa0 res=1
[    5.823882] ima: policy update completed
[    5.825231] audit: type=1807 audit(1674674321.730:3): action=dont_measure fsmagic=0x62656572 res=1
[    5.827565] audit: type=1807 audit(1674674321.730:4): action=dont_measure fsmagic=0x64626720 res=1
[    5.828499] 8021q: adding VLAN 0 to HW filter on device eth0
[    5.829396] audit: type=1807 audit(1674674321.730:5): action=dont_measure fsmagic=0x1021994 res=1
[    5.831781] [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable"}
[    5.831813] audit: type=1807 audit(1674674321.730:6): action=dont_measure fsmagic=0x1cd1 res=1
[    5.835921] audit: type=1807 audit(1674674321.730:7): action=dont_measure fsmagic=0x42494e4d res=1
[    5.837522] audit: type=1807 audit(1674674321.730:8): action=dont_measure fsmagic=0x73636673 res=1
[    5.839123] audit: type=1807 audit(1674674321.730:9): action=dont_measure fsmagic=0xf97cff8c res=1
[    5.840715] audit: type=1807 audit(1674674321.730:10): action=dont_measure fsmagic=0x43415d53 res=1
[    5.842329] audit: type=1807 audit(1674674321.730:11): action=dont_measure fsmagic=0x27e0eb res=1
[    5.850425] [talos] task writeIMAPolicy (1/1): done, 27.585391ms
[    5.851487] [talos] phase integrity (3/5): done, 29.543551ms
[    5.852633] [talos] phase etc (4/5): 2 tasks(s)
[    5.853639] [talos] task createOSReleaseFile (2/2): starting
[    5.855237] [talos] assigned address {"component": "controller-runtime", "controller": "network.AddressSpecController", "address": "10.100.50.111/24", "link": "eth0"}
[    5.858356] [talos] task createOSReleaseFile (2/2): done, 1.697037ms
[    5.859723] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["10.100.1.1"]}
[    5.862658] [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "master-01", "domainname": "mimir-tech.org\u0000"}
[    5.865909] [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "master-01", "domainname": "mimir-tech.org\u0000"}
[    5.869180] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["10.100.1.1", "10.100.50.100"]}
[    5.872414] [talos] task CreateSystemCgroups (1/2): starting
[    5.874760] [talos] task CreateSystemCgroups (1/2): done, 21.122629ms
[    5.876219] [talos] controller failed {"component": "controller-runtime", "controller": "v1alpha1.EventsSinkController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp [fd28:1f64:7804:c703::1]:4002: connect: network is unreachable\""}
[    5.880378] [talos] controller failed {"component": "controller-runtime", "controller": "runtime.KmsgLogDeliveryController", "error": "error sending logs: dial tcp [fd28:1f64:7804:c703::1]:4001: connect: network is unreachable"}
[    5.883350] [talos] controller failed {"component": "controller-runtime", "controller": "siderolink.ManagerController", "error": "error accessing SideroLink API: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup demo.talos.mimir-tech.org on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable\""}
[    5.888274] [talos] adjusting time (slew) by 17.270994ms via 10.100.1.1, state TIME_OK, status STA_NANO | STA_PLL {"component": "controller-runtime", "controller": "time.SyncController"}
[    5.891674] [talos] created route {"component": "controller-runtime", "controller": "network.RouteSpecController", "destination": "default", "gateway": "10.100.50.1", "table": "main", "link": "eth0"}
[    5.894610] [talos] phase etc (4/5): done, 23.589659ms
[    5.895988] [talos] phase config (5/5): 1 tasks(s)
[    5.897371] [talos] task loadConfig (1/1): starting
[    5.898760] [talos] task loadConfig (1/1): downloading config
[    5.902820] [talos] fetching machine config from: "http://demo.talos.mimir-tech.org:8081/configdata?uuid=b80c6841-fb72-4fa2-9fec-7bc8eb7faab3"
[    5.915172] [talos] task loadConfig (1/1): storing config in memory
[    5.917295] [talos] task loadConfig (1/1): done, 19.958371ms
[    5.920648] [talos] phase config (5/5): done, 24.659558ms
[    5.922065] [talos] initialize sequence: done: 168.018746ms
[    5.925797] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["pool.ntp.org"]}
[    5.932267] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["1.1.1.1", "8.8.8.8"]}
[    5.935912] [talos] removed address 10.100.50.111/24 from "eth0" {"component": "controller-runtime", "controller": "network.AddressSpecController"}
[    5.939252] [talos] install sequence: 13 phase(s)
[    5.940478] [talos] phase validateConfig (1/13): 1 tasks(s)
[    5.941754] [talos] task validateConfig (1/1): starting
[    5.943068] [talos] task validateConfig (1/1): done, 1.323204ms
[    5.944331] [talos] phase validateConfig (1/13): done, 3.854704ms
[    5.945611] [talos] phase env (2/13): 1 tasks(s)
[    5.946770] [talos] task setUserEnvVars (1/1): starting
[    5.947965] [talos] task setUserEnvVars (1/1): done, 1.200409ms
[    5.949182] [talos] phase env (2/13): done, 3.571557ms
[    5.950319] [talos] phase containerd (3/13): 1 tasks(s)
[    5.955893] [talos] task startContainerd (1/1): starting
[    5.957229] [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on 1.1.1.1:53: no such host"}
[    5.960129] [talos] service[containerd](Preparing): Running pre state
[    5.961350] [talos] service[containerd](Preparing): Creating service runner
[    7.171732] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["10.100.1.1"]}
[    7.181486] [talos] controller failed {"component": "controller-runtime", "controller": "runtime.KmsgLogDeliveryController", "error": "error sending logs: dial tcp [fd28:1f64:7804:c703::1]:4001: connect: network is unreachable"}
[    7.185237] [talos] service[containerd](Running): Process Process(["/bin/containerd" "--address" "/system/run/containerd/containerd.sock" "--state" "/system/run/containerd" "--root" "/system/var/lib/containerd"]) started with PID 654
[    7.188904] [talos] assigned address {"component": "controller-runtime", "controller": "network.AddressSpecController", "address": "10.100.50.111/24", "link": "eth0"}
[    7.191950] [talos] created route {"component": "controller-runtime", "controller": "network.RouteSpecController", "destination": "default", "gateway": "10.100.50.1", "table": "main", "link": "eth0"}
[    7.195252] [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "master-01", "domainname": "mimir-tech.org\u0000"}
[    7.198447] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["10.100.1.1", "10.100.50.100"]}
[    7.204793] [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "master-01", "domainname": "mimir-tech.org\u0000"}
[    7.208389] [talos] controller failed {"component": "controller-runtime", "controller": "v1alpha1.EventsSinkController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp [fd28:1f64:7804:c703::1]:4002: connect: network is unreachable\""}
[    7.234026] [talos] created new link {"component": "controller-runtime", "controller": "network.LinkSpecController", "link": "siderolink", "kind": "wireguard"}
[    7.237769] [talos] reconfigured wireguard link {"component": "controller-runtime", "controller": "network.LinkSpecController", "link": "siderolink", "peers": 1}
[    7.242720] [talos] changed MTU for the link {"component": "controller-runtime", "controller": "network.LinkSpecController", "link": "siderolink", "mtu": 1280}
[    7.260338] [talos] assigned address {"component": "controller-runtime", "controller": "network.AddressSpecController", "address": "fd28:1f64:7804:c703:f263:9a2f:c0d:d196/64", "link": "siderolink"}
[    7.264007] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[    8.145328] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[    8.180862] [talos] service[containerd](Running): Health check successful
[    8.182853] [talos] task startContainerd (1/1): done, 2.229152095s
[    8.184563] [talos] phase containerd (3/13): done, 2.236424117s
[    8.186178] [talos] phase install (4/13): 1 tasks(s)
[    8.187800] [talos] task install (1/1): starting
[    8.198421] [talos] pulling "ghcr.io/siderolabs/installer:v1.3.0"
[   10.562462] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   16.247532] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   25.951269] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   34.337524] 2023/01/25 19:19:10 running Talos installer v1.3.0
[   34.362758] 2023/01/25 19:19:10 creating new partition table on /dev/sda
[   34.364328] 2023/01/25 19:19:10 logical/physical block size: 512/512
[   34.365693] 2023/01/25 19:19:10 minimum/optimal I/O size: 512/512
[   34.371264] 2023/01/25 19:19:10 partitioning /dev/sda - EFI "105 MB"
[   34.372682] 2023/01/25 19:19:10 created /dev/sda1 (EFI) size 204800 blocks
[   34.374391] 2023/01/25 19:19:10 partitioning /dev/sda - BIOS "1.0 MB"
[   34.375675] 2023/01/25 19:19:10 created /dev/sda2 (BIOS) size 2048 blocks
[   34.377008] 2023/01/25 19:19:10 partitioning /dev/sda - BOOT "1.0 GB"
[   34.378329] 2023/01/25 19:19:10 created /dev/sda3 (BOOT) size 2048000 blocks
[   34.379705] 2023/01/25 19:19:10 partitioning /dev/sda - META "1.0 MB"
[   34.381027] 2023/01/25 19:19:10 created /dev/sda4 (META) size 2048 blocks
[   34.382328] 2023/01/25 19:19:10 partitioning /dev/sda - STATE "105 MB"
[   34.383632] 2023/01/25 19:19:10 created /dev/sda5 (STATE) size 204800 blocks
[   34.384937] 2023/01/25 19:19:10 partitioning /dev/sda - EPHEMERAL "0 B"
[   34.386305] 2023/01/25 19:19:10 created /dev/sda6 (EPHEMERAL) size 102391808 blocks
[   34.387947] 2023/01/25 19:19:10 formatting the partition "/dev/sda1" as "vfat" with label "EFI"
[   34.409363] 2023/01/25 19:19:10 zeroing out "/dev/sda2"
[   34.411123] 2023/01/25 19:19:10 formatting the partition "/dev/sda3" as "xfs" with label "BOOT"
[   34.447647] 2023/01/25 19:19:10 zeroing out "/dev/sda4"
[   34.449371] 2023/01/25 19:19:10 zeroing out "/dev/sda5"
[   34.578978] 2023/01/25 19:19:10 zeroing out "/dev/sda6"
[   34.584451] XFS (sda3): Mounting V5 Filesystem
[   34.590745] XFS (sda3): Ending clean mount
[   34.594582] 2023/01/25 19:19:10 copying /usr/install/amd64/vmlinuz to /boot/A/vmlinuz
[   34.611378] 2023/01/25 19:19:10 copying /usr/install/amd64/initramfs.xz to /boot/A/initramfs.xz
[   34.653269] 2023/01/25 19:19:10 writing /boot/grub/grub.cfg to disk
[   34.654972] 2023/01/25 19:19:10 executing: grub-install --boot-directory=/boot --efi-directory=/boot/EFI --removable /dev/sda
[   34.656961] Installing for x86_64-efi platform.
[   35.349106] Installation finished. No error reported.
[   35.356140] XFS (sda3): Unmounting Filesystem
[   35.365458] 2023/01/25 19:19:11 installation of v1.3.0 complete
[   35.394902] [talos] task install (1/1): install successful
[   35.396008] [talos] task install (1/1): done, 27.224097002s
[   35.397533] [talos] phase install (4/13): done, 27.227246578s
[   35.398741] [talos] phase saveStateEncryptionConfig (5/13): 1 tasks(s)
[   35.399841] [talos] task SaveStateEncryptionConfig (1/1): starting
[   35.401085] [talos] task SaveStateEncryptionConfig (1/1): done, 1.24414ms
[   35.402312] [talos] phase saveStateEncryptionConfig (5/13): done, 3.572011ms
[   35.403451] [talos] phase mountState (6/13): 1 tasks(s)
[   35.404654] [talos] task mountStatePartition (1/1): starting
[   35.415410] [talos] formatting the partition "/dev/sda5" as "xfs" with label "STATE"
[   35.475868] XFS (sda5): Mounting V5 Filesystem
[   35.483557] XFS (sda5): Ending clean mount
[   35.485531] [talos] task mountStatePartition (1/1): done, 80.892368ms
[   35.486745] [talos] phase mountState (6/13): done, 83.311905ms
[   35.488059] [talos] phase saveConfig (7/13): 1 tasks(s)
[   35.489032] [talos] node identity established {"component": "controller-runtime", "controller": "cluster.NodeIdentityController", "node_id": "ttUEXs6THks9VfbDveepb4ZnZ6rHiQhxuPylWoi7TseD"}
[   35.491497] [talos] task saveConfig (1/1): starting
[   35.492589] [talos] task saveConfig (1/1): done, 3.41974ms
[   35.493593] [talos] phase saveConfig (7/13): done, 5.534636ms
[   35.494626] [talos] phase unmountState (8/13): 1 tasks(s)
[   35.495642] [talos] task unmountStatePartition (1/1): starting
[   35.499796] XFS (sda5): Unmounting Filesystem
[   35.507751] [talos] task unmountStatePartition (1/1): done, 12.107781ms
[   35.508944] [talos] phase unmountState (8/13): done, 14.319932ms
[   35.509936] [talos] phase stopEverything (9/13): 1 tasks(s)
[   35.510939] [talos] task stopAllServices (1/1): starting
[   35.511975] [talos] service[containerd](Stopping): Sending SIGTERM to Process(["/bin/containerd" "--address" "/system/run/containerd/containerd.sock" "--state" "/system/run/containerd" "--root" "/system/var/lib/containerd"])
[   35.517064] [talos] service[containerd](Finished): Service finished successfully
[   35.518196] [talos] task stopAllServices (1/1): done, 7.261595ms
[   35.519155] [talos] phase stopEverything (9/13): done, 9.223769ms
[   35.520109] [talos] phase mountBoot (10/13): 1 tasks(s)
[   35.521125] [talos] task mountBootPartition (1/1): starting
[   35.527918] XFS (sda3): Mounting V5 Filesystem
[   35.567531] XFS (sda3): Ending clean mount
[   35.569404] [talos] task mountBootPartition (1/1): done, 48.286062ms
[   35.570417] [talos] phase mountBoot (10/13): done, 50.31866ms
[   35.571337] [talos] phase kexec (11/13): 1 tasks(s)
[   35.572205] [talos] task kexecPrepare (1/1): starting
[   36.219404] [talos] prepared kexec environment kernel="/boot/A/vmlinuz" initrd="/boot/A/initramfs.xz" cmdline="talos.platform=metal talos.config=http://demo.talos.mimir-tech.org:8081/configdata?uuid= console=ttyS0 console=tty0 init_on_alloc=1 slab_nomerge pti=on consoleblank=0 nvme_core.io_timeout=4294967295 printk.devkmsg=on ima_template=ima-ng ima_appraise=fix ima_hash=sha512 siderolink.api=demo.talos.mimir-tech.org:8081 talos.events.sink=[fd28:1f64:7804:c703::1]:4002 talos.logging.kernel=tcp://[fd28:1f64:7804:c703::1]:4001"
[   36.226002] [talos] task kexecPrepare (1/1): done, 653.93878ms
[   36.227136] [talos] phase kexec (11/13): done, 655.944589ms
[   36.228185] [talos] phase unmountBoot (12/13): 1 tasks(s)
[   36.230303] [talos] task unmountBootPartition (1/1): starting
[   36.238093] XFS (sda3): Unmounting Filesystem
[   36.246581] [talos] task unmountBootPartition (1/1): done, 16.283966ms
[   36.247657] [talos] phase unmountBoot (12/13): done, 19.476136ms
[   36.248811] [talos] phase reboot (13/13): 1 tasks(s)
[   36.249862] [talos] task reboot (1/1): starting
[   36.353987] [talos] controller runtime finished
[   36.356652] [talos] unmounted / (/dev/loop0)
[   36.357582] [talos] waiting for sync...
[   36.358551] [talos] sync done
[   36.359552] sd 0:0:0:0: [sda] Synchronizing SCSI cache
[   36.410983] kexec_core: Starting new kernel
[    0.000000] Linux version 5.15.83-talos (@buildkitsandbox) (gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.39) #1 SMP Wed Dec 14 16:48:31 UTC 2022
[    0.000000] Command line: talos.platform=metal talos.config=http://demo.talos.mimir-tech.org:8081/configdata?uuid= console=ttyS0 console=tty0 init_on_alloc=1 slab_nomerge pti=on consoleblank=0 nvme_core.io_timeout=4294967295 printk.devkmsg=on ima_template=ima-ng ima_appraise=fix ima_hash=sha512 siderolink.api=demo.talos.mimir-tech.org:8081 talos.events.sink=[fd28:1f64:7804:c703::1]:4002 talos.logging.kernel=tcp://[fd28:1f64:7804:c703::1]:4001
[    0.000000] x86/fpu: x87 FPU will use FXSAVE
[    0.000000] signal: max sigframe size: 1440
[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000000805fff] usable
[    0.000000] BIOS-e820: [mem 0x0000000000806000-0x0000000000807fff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x0000000000808000-0x000000000080ffff] usable
[    0.000000] BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x0000000000900000-0x000000007e627017] usable
[    0.000000] BIOS-e820: [mem 0x000000007e627018-0x000000007e64e457] usable
[    0.000000] BIOS-e820: [mem 0x000000007e64e458-0x000000007e783017] usable
[    0.000000] BIOS-e820: [mem 0x000000007e783018-0x000000007e78ca57] usable
[    0.000000] BIOS-e820: [mem 0x000000007e78ca58-0x000000007eed7fff] usable
[    0.000000] BIOS-e820: [mem 0x000000007eed8000-0x000000007efd9fff] reserved
[    0.000000] BIOS-e820: [mem 0x000000007efda000-0x000000007f8edfff] usable
[    0.000000] BIOS-e820: [mem 0x000000007f8ee000-0x000000007fb6dfff] reserved
[    0.000000] BIOS-e820: [mem 0x000000007fb6e000-0x000000007fb7dfff] ACPI data
[    0.000000] BIOS-e820: [mem 0x000000007fb7e000-0x000000007fbfdfff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x000000007fbfe000-0x000000007feebfff] usable
[    0.000000] BIOS-e820: [mem 0x000000007feec000-0x000000007ff6ffff] reserved
[    0.000000] BIOS-e820: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] extended physical RAM map:
[    0.000000] reserve setup_data: [mem 0x0000000000000000-0x000000000009931f] usable
[    0.000000] reserve setup_data: [mem 0x0000000000099320-0x000000000009938f] usable
[    0.000000] reserve setup_data: [mem 0x0000000000099390-0x000000000009ffff] usable
[    0.000000] reserve setup_data: [mem 0x0000000000100000-0x0000000000805fff] usable
[    0.000000] reserve setup_data: [mem 0x0000000000806000-0x0000000000807fff] ACPI NVS
[    0.000000] reserve setup_data: [mem 0x0000000000808000-0x000000000080ffff] usable
[    0.000000] reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
[    0.000000] reserve setup_data: [mem 0x0000000000900000-0x000000007e627017] usable
[    0.000000] reserve setup_data: [mem 0x000000007e627018-0x000000007e64e457] usable
[    0.000000] reserve setup_data: [mem 0x000000007e64e458-0x000000007e783017] usable
[    0.000000] reserve setup_data: [mem 0x000000007e783018-0x000000007e78ca57] usable
[    0.000000] reserve setup_data: [mem 0x000000007e78ca58-0x000000007eed7fff] usable
[    0.000000] reserve setup_data: [mem 0x000000007eed8000-0x000000007efd9fff] reserved
[    0.000000] reserve setup_data: [mem 0x000000007efda000-0x000000007f8edfff] usable
[    0.000000] reserve setup_data: [mem 0x000000007f8ee000-0x000000007fb6dfff] reserved
[    0.000000] reserve setup_data: [mem 0x000000007fb6e000-0x000000007fb7dfff] ACPI data
[    0.000000] reserve setup_data: [mem 0x000000007fb7e000-0x000000007fbfdfff] ACPI NVS
[    0.000000] reserve setup_data: [mem 0x000000007fbfe000-0x000000007feebfff] usable
[    0.000000] reserve setup_data: [mem 0x000000007feec000-0x000000007ff6ffff] reserved
[    0.000000] reserve setup_data: [mem 0x000000007ff70000-0x000000007fffffff] ACPI NVS
[    0.000000] reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
[    0.000000] efi: EFI v2.70 by EDK II
[    0.000000] efi: SMBIOS=0x7f922000 ACPI=0x7fb7d000 ACPI 2.0=0x7fb7d014 MEMATTR=0x7e89d018
[    0.000000] SMBIOS 2.8 present.
[    0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
[    0.000000] Hypervisor detected: KVM
[    0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[    0.000000] kvm-clock: cpu 0, msr 7dec6001, primary cpu clock
[    0.000002] kvm-clock: using sched offset of 78014340284106 cycles
[    0.000009] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[    0.000020] tsc: Detected 1607.997 MHz processor
[    0.000107] last_pfn = 0x7feec max_arch_pfn = 0x400000000
[    0.000151] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WC  UC- UC
[    0.000162] x2apic: enabled by BIOS, switching to x2apic ops
[    0.014665] Secure boot disabled
[    0.014668] RAMDISK: [mem 0x76367000-0x797fffff]
[    0.014680] ACPI: Early table checksum verification disabled
[    0.014701] ACPI: RSDP 0x000000007FB7D014 000024 (v02 BOCHS )
[    0.014710] ACPI: XSDT 0x000000007FB7C0E8 000054 (v01 BOCHS  BXPC     00000001      01000013)
[    0.014725] ACPI: FACP 0x000000007FB78000 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.014734] ACPI: DSDT 0x000000007FB79000 002C41 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.014738] ACPI: FACS 0x000000007FBDB000 000040
[    0.014742] ACPI: APIC 0x000000007FB77000 000080 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.014746] ACPI: SSDT 0x000000007FB76000 0000CA (v01 BOCHS  VMGENID  00000001 BXPC 00000001)
[    0.014751] ACPI: HPET 0x000000007FB75000 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.014756] ACPI: WAET 0x000000007FB74000 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[    0.014761] ACPI: BGRT 0x000000007FB73000 000038 (v01 INTEL  EDK2     00000002      01000013)
[    0.014763] ACPI: Reserving FACP table memory at [mem 0x7fb78000-0x7fb78073]
[    0.014765] ACPI: Reserving DSDT table memory at [mem 0x7fb79000-0x7fb7bc40]
[    0.014766] ACPI: Reserving FACS table memory at [mem 0x7fbdb000-0x7fbdb03f]
[    0.014767] ACPI: Reserving APIC table memory at [mem 0x7fb77000-0x7fb7707f]
[    0.014768] ACPI: Reserving SSDT table memory at [mem 0x7fb76000-0x7fb760c9]
[    0.014769] ACPI: Reserving HPET table memory at [mem 0x7fb75000-0x7fb75037]
[    0.014770] ACPI: Reserving WAET table memory at [mem 0x7fb74000-0x7fb74027]
[    0.014771] ACPI: Reserving BGRT table memory at [mem 0x7fb73000-0x7fb73037]
[    0.014806] Setting APIC routing to cluster x2apic.
[    0.014935] No NUMA configuration found
[    0.014937] Faking a node at [mem 0x0000000000000000-0x000000007feebfff]
[    0.014944] NODE_DATA(0) allocated [mem 0x7fee8000-0x7feebfff]
[    0.014977] Zone ranges:
[    0.014981]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
[    0.014984]   DMA32    [mem 0x0000000001000000-0x000000007feebfff]
[    0.014987]   Normal   empty
[    0.014988] Movable zone start for each node
[    0.014989] Early memory node ranges
[    0.014990]   node   0: [mem 0x0000000000001000-0x000000000009ffff]
[    0.014993]   node   0: [mem 0x0000000000100000-0x0000000000805fff]
[    0.014994]   node   0: [mem 0x0000000000808000-0x000000000080ffff]
[    0.014995]   node   0: [mem 0x0000000000900000-0x000000007eed7fff]
[    0.014996]   node   0: [mem 0x000000007efda000-0x000000007f8edfff]
[    0.014996]   node   0: [mem 0x000000007fbfe000-0x000000007feebfff]
[    0.015001] Initmem setup node 0 [mem 0x0000000000001000-0x000000007feebfff]
[    0.015158] On node 0, zone DMA: 1 pages in unavailable ranges
[    0.015172] On node 0, zone DMA: 96 pages in unavailable ranges
[    0.015173] On node 0, zone DMA: 2 pages in unavailable ranges
[    0.015186] On node 0, zone DMA: 240 pages in unavailable ranges
[    0.021167] On node 0, zone DMA32: 258 pages in unavailable ranges
[    0.021184] On node 0, zone DMA32: 784 pages in unavailable ranges
[    0.021187] On node 0, zone DMA32: 276 pages in unavailable ranges
[    0.021642] ACPI: PM-Timer IO Port: 0xb008
[    0.021661] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[    0.021692] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[    0.021697] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.021699] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[    0.021704] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.021708] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[    0.021710] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[    0.021714] ACPI: Using ACPI (MADT) for SMP configuration information
[    0.021715] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[    0.021726] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
[    0.021764] [mem 0x80000000-0xffbfffff] available for PCI devices
[    0.021767] Booting paravirtualized kernel on KVM
[    0.021773] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
[    0.032055] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
[    0.032210] percpu: Embedded 55 pages/cpu s185808 r8192 d31280 u1048576
[    0.032247] kvm-guest: stealtime: cpu 0, msr 7f61c100
[    0.032257] Built 1 zonelists, mobility grouping on.  Total pages: 514289
[    0.032259] Policy zone: DMA32
[    0.032261] Kernel command line: talos.platform=metal talos.config=http://demo.talos.mimir-tech.org:8081/configdata?uuid= console=ttyS0 console=tty0 init_on_alloc=1 slab_nomerge pti=on consoleblank=0 nvme_core.io_timeout=4294967295 printk.devkmsg=on ima_template=ima-ng ima_appraise=fix ima_hash=sha512 siderolink.api=demo.talos.mimir-tech.org:8081 talos.events.sink=[fd28:1f64:7804:c703::1]:4002 talos.logging.kernel=tcp://[fd28:1f64:7804:c703::1]:4001
[    0.032719] Unknown kernel command line parameters "pti=on", will be passed to user space.
[    0.032787] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
[    0.032826] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    0.032875] mem auto-init: stack:byref_all(zero), heap alloc:on, heap free:off
[    0.037649] Memory: 1925632K/2090524K available (32793K kernel code, 4067K rwdata, 19684K rodata, 3212K init, 1400K bss, 164632K reserved, 0K cma-reserved)
[    0.037917] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
[    0.037934] Kernel/User page tables isolation: enabled
[    0.037963] ftrace: allocating 92043 entries in 360 pages
[    0.086719] ftrace: allocated 360 pages with 4 groups
[    0.086972] rcu: Hierarchical RCU implementation.
[    0.086976] rcu:     RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
[    0.086978]  Rude variant of Tasks RCU enabled.
[    0.086978]  Tracing variant of Tasks RCU enabled.
[    0.086979] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[    0.086981] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
[    0.091987] NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
[    0.092140] Console: colour dummy device 80x25
[    0.092314] printk: console [tty0] enabled
[    0.170292] printk: console [ttyS0] enabled
[    0.170674] ACPI: Core revision 20210730
[    0.171128] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
[    0.172005] APIC: Switch to symmetric I/O mode setup
[    0.172685] Switched APIC routing to physical x2apic.
[    0.173729] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[    0.174200] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x172da757658, max_idle_ns: 440795229495 ns
[    0.175003] Calibrating delay loop (skipped) preset value.. 3215.99 BogoMIPS (lpj=6431988)
[    0.175613] pid_max: default: 32768 minimum: 301
[    0.179111] LSM: Security Framework initializing
[    0.179465] Yama: becoming mindful.
[    0.179787] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
[    0.180357] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
[    0.181381] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[    0.181788] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
[    0.182233] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[    0.182888] Spectre V2 : Mitigation: Retpolines
[    0.183002] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[    0.183605] Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
[    0.184105] Speculative Store Bypass: Vulnerable
[    0.184447] MDS: Vulnerable: Clear CPU buffers attempted, no microcode
[    0.184922] MMIO Stale Data: Unknown: No mitigations
[    0.208255] Freeing SMP alternatives memory: 76K
[    0.316769] smpboot: CPU0: Intel Common KVM processor (family: 0xf, model: 0x6, stepping: 0x1)
[    0.317743] Performance Events: unsupported Netburst CPU model 6 no PMU driver, software events only.
[    0.318489] rcu: Hierarchical SRCU implementation.
[    0.318999] smp: Bringing up secondary CPUs ...
[    0.318999] x86: Booting SMP configuration:
[    0.318999] .... node  #0, CPUs:      #1
[    0.082892] kvm-clock: cpu 1, msr 7dec6041, secondary cpu clock
[    0.082892] smpboot: CPU 1 Converting physical 0 to logical die 1
[    0.331017] kvm-guest: stealtime: cpu 1, msr 7f71c100
[    0.335071] smp: Brought up 1 node, 2 CPUs
[    0.335395] smpboot: Max logical packages: 2
[    0.335710] smpboot: Total of 2 processors activated (6431.98 BogoMIPS)
[    0.336427] devtmpfs: initialized
[    0.336427] ACPI: PM: Registering ACPI NVS region [mem 0x00806000-0x00807fff] (8192 bytes)
[    0.336427] ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
[    0.336427] ACPI: PM: Registering ACPI NVS region [mem 0x7fb7e000-0x7fbfdfff] (524288 bytes)
[    0.339012] ACPI: PM: Registering ACPI NVS region [mem 0x7ff70000-0x7fffffff] (589824 bytes)
[    0.339702] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
[    0.340458] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
[    0.341129] PM: RTC time: 19:19:15, date: 2023-01-25
[    0.341739] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[    0.342398] audit: initializing netlink subsys (disabled)
[    0.343046] audit: type=2000 audit(1674674354.998:1): state=initialized audit_enabled=0 res=1
[    0.343108] thermal_sys: Registered thermal governor 'step_wise'
[    0.343673] thermal_sys: Registered thermal governor 'user_space'
[    0.344168] cpuidle: using governor menu
[    0.344168] ACPI: bus type PCI registered
[    0.344272] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    0.344841] dca service started, version 1.12.1
[    0.345240] PCI: Using configuration type 1 for base access
[    0.351013] Kprobes globally optimized
[    0.351342] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[    0.351545] cryptd: max_cpu_qlen set to 1000
[    0.351545] ACPI: Added _OSI(Module Device)
[    0.351545] ACPI: Added _OSI(Processor Device)
[    0.351700] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.355001] ACPI: Added _OSI(Processor Aggregator Device)
[    0.355403] ACPI: Added _OSI(Linux-Dell-Video)
[    0.355734] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
[    0.356128] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
[    0.357322] ACPI: 2 ACPI AML tables successfully acquired and loaded
[    0.358190] ACPI: Interpreter enabled
[    0.358496] ACPI: PM: (supports S0 S3 S5)
[    0.358800] ACPI: Using IOAPIC for interrupt routing
[    0.359017] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.359777] ACPI: Enabled 3 GPEs in block 00 to 0F
[    0.362658] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[    0.363054] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
[    0.363640] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[    0.364583] acpiphp: Slot [3] registered
[    0.364926] acpiphp: Slot [4] registered
[    0.365226] acpiphp: Slot [5] registered
[    0.365543] acpiphp: Slot [6] registered
[    0.365854] acpiphp: Slot [7] registered
[    0.366170] acpiphp: Slot [8] registered
[    0.366493] acpiphp: Slot [9] registered
[    0.367016] acpiphp: Slot [10] registered
[    0.367332] acpiphp: Slot [11] registered
[    0.367646] acpiphp: Slot [12] registered
[    0.367956] acpiphp: Slot [13] registered
[    0.368272] acpiphp: Slot [14] registered
[    0.368587] acpiphp: Slot [15] registered
[    0.368910] acpiphp: Slot [16] registered
[    0.369217] acpiphp: Slot [17] registered
[    0.369534] acpiphp: Slot [18] registered
[    0.369852] acpiphp: Slot [19] registered
[    0.370162] acpiphp: Slot [20] registered
[    0.370480] acpiphp: Slot [21] registered
[    0.370807] acpiphp: Slot [22] registered
[    0.371018] acpiphp: Slot [23] registered
[    0.371335] acpiphp: Slot [24] registered
[    0.371651] acpiphp: Slot [25] registered
[    0.371966] acpiphp: Slot [26] registered
[    0.372275] acpiphp: Slot [27] registered
[    0.372590] acpiphp: Slot [28] registered
[    0.372916] acpiphp: Slot [29] registered
[    0.373241] PCI host bridge to bus 0000:00
[    0.373552] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
[    0.374063] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
[    0.374570] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[    0.375001] pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
[    0.375551] pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
[    0.376113] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.376586] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
[    0.377568] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
[    0.378343] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
[    0.380322] pci 0000:00:01.1: reg 0x20: [io  0x10c0-0x10cf]
[    0.381262] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
[    0.381767] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
[    0.382369] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
[    0.383002] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
[    0.383609] pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
[    0.385329] pci 0000:00:01.2: reg 0x20: [io  0x1080-0x109f]
[    0.386355] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
[    0.387154] pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
[    0.387693] pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
[    0.388383] pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
[    0.391683] pci 0000:00:02.0: reg 0x10: [mem 0x80000000-0x80ffffff pref]
[    0.393474] pci 0000:00:02.0: reg 0x18: [mem 0x81442000-0x81442fff]
[    0.397054] pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
[    0.397725] pci 0000:00:02.0: BAR 0: assigned to efifb
[    0.398128] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[    0.399780] pci 0000:00:03.0: [1af4:1002] type 00 class 0x00ff00
[    0.400664] pci 0000:00:03.0: reg 0x10: [io  0x1000-0x103f]
[    0.403331] pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
[    0.404840] pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
[    0.405836] pci 0000:00:05.0: reg 0x10: [io  0x1040-0x107f]
[    0.406738] pci 0000:00:05.0: reg 0x14: [mem 0x81441000-0x81441fff]
[    0.408473] pci 0000:00:05.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
[    0.410482] pci 0000:00:12.0: [1af4:1000] type 00 class 0x020000
[    0.411711] pci 0000:00:12.0: reg 0x10: [io  0x10a0-0x10bf]
[    0.412809] pci 0000:00:12.0: reg 0x14: [mem 0x81440000-0x81440fff]
[    0.415705] pci 0000:00:12.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
[    0.416766] pci 0000:00:12.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
[    0.417963] pci 0000:00:1e.0: [1b36:0001] type 01 class 0x060400
[    0.419001] pci 0000:00:1e.0: reg 0x10: [mem 0x80000d000-0x80000d0ff 64bit]
[    0.420724] pci 0000:00:1f.0: [1b36:0001] type 01 class 0x060400
[    0.421912] pci 0000:00:1f.0: reg 0x10: [mem 0x80000c000-0x80000c0ff 64bit]
[    0.424629] pci_bus 0000:01: extended config space not accessible
[    0.425205] acpiphp: Slot [0] registered
[    0.425527] acpiphp: Slot [1] registered
[    0.425845] acpiphp: Slot [2] registered
[    0.426174] acpiphp: Slot [3-1] registered
[    0.426502] acpiphp: Slot [4-1] registered
[    0.426830] acpiphp: Slot [5-1] registered
[    0.427016] acpiphp: Slot [6-1] registered
[    0.427344] acpiphp: Slot [7-1] registered
[    0.427661] acpiphp: Slot [8-1] registered
[    0.427987] acpiphp: Slot [9-1] registered
[    0.428303] acpiphp: Slot [10-1] registered
[    0.428630] acpiphp: Slot [11-1] registered
[    0.428973] acpiphp: Slot [12-1] registered
[    0.429304] acpiphp: Slot [13-1] registered
[    0.429625] acpiphp: Slot [14-1] registered
[    0.429952] acpiphp: Slot [15-1] registered
[    0.430278] acpiphp: Slot [16-1] registered
[    0.430611] acpiphp: Slot [17-1] registered
[    0.431017] acpiphp: Slot [18-1] registered
[    0.431348] acpiphp: Slot [19-1] registered
[    0.431675] acpiphp: Slot [20-1] registered
[    0.432004] acpiphp: Slot [21-1] registered
[    0.432330] acpiphp: Slot [22-1] registered
[    0.432689] acpiphp: Slot [23-1] registered
[    0.433024] acpiphp: Slot [24-1] registered
[    0.433374] acpiphp: Slot [25-1] registered
[    0.433706] acpiphp: Slot [26-1] registered
[    0.434037] acpiphp: Slot [27-1] registered
[    0.434371] acpiphp: Slot [28-1] registered
[    0.434711] acpiphp: Slot [29-1] registered
[    0.435032] acpiphp: Slot [30] registered
[    0.435403] acpiphp: Slot [31] registered
[    0.435833] pci 0000:00:1e.0: PCI bridge to [bus 01]
[    0.436211] pci 0000:00:1e.0:   bridge window [io  0xd000-0xdfff]
[    0.436685] pci 0000:00:1e.0:   bridge window [mem 0x81200000-0x813fffff]
[    0.437598] pci_bus 0000:02: extended config space not accessible
[    0.438084] acpiphp: Slot [0-1] registered
[    0.438415] acpiphp: Slot [1-1] registered
[    0.439018] acpiphp: Slot [2-1] registered
[    0.439343] acpiphp: Slot [3-2] registered
[    0.439686] acpiphp: Slot [4-2] registered
[    0.439999] acpiphp: Slot [5-2] registered
[    0.440323] acpiphp: Slot [6-2] registered
[    0.440657] acpiphp: Slot [7-2] registered
[    0.440992] acpiphp: Slot [8-2] registered
[    0.441309] acpiphp: Slot [9-2] registered
[    0.441640] acpiphp: Slot [10-2] registered
[    0.441968] acpiphp: Slot [11-2] registered
[    0.442299] acpiphp: Slot [12-2] registered
[    0.442630] acpiphp: Slot [13-2] registered
[    0.443018] acpiphp: Slot [14-2] registered
[    0.443342] acpiphp: Slot [15-2] registered
[    0.443665] acpiphp: Slot [16-2] registered
[    0.443984] acpiphp: Slot [17-2] registered
[    0.444313] acpiphp: Slot [18-2] registered
[    0.444642] acpiphp: Slot [19-2] registered
[    0.444972] acpiphp: Slot [20-2] registered
[    0.445318] acpiphp: Slot [21-2] registered
[    0.445655] acpiphp: Slot [22-2] registered
[    0.445981] acpiphp: Slot [23-2] registered
[    0.446307] acpiphp: Slot [24-2] registered
[    0.446647] acpiphp: Slot [25-2] registered
[    0.446977] acpiphp: Slot [26-2] registered
[    0.447017] acpiphp: Slot [27-2] registered
[    0.447354] acpiphp: Slot [28-2] registered
[    0.447680] acpiphp: Slot [29-2] registered
[    0.448001] acpiphp: Slot [30-1] registered
[    0.448343] acpiphp: Slot [31-1] registered
[    0.448796] pci 0000:00:1f.0: PCI bridge to [bus 02]
[    0.449177] pci 0000:00:1f.0:   bridge window [io  0xc000-0xcfff]
[    0.449637] pci 0000:00:1f.0:   bridge window [mem 0x81000000-0x811fffff]
[    0.451514] ACPI: PCI: Interrupt link LNKA configured for IRQ 10
[    0.452027] ACPI: PCI: Interrupt link LNKB configured for IRQ 11
[    0.452534] ACPI: PCI: Interrupt link LNKC configured for IRQ 10
[    0.453047] ACPI: PCI: Interrupt link LNKD configured for IRQ 11
[    0.453518] ACPI: PCI: Interrupt link LNKS configured for IRQ 9
[    0.454547] iommu: Default domain type: Translated
[    0.455002] iommu: DMA domain TLB invalidation policy: lazy mode
[    0.455505] pci 0000:00:02.0: vgaarb: setting as boot VGA device
[    0.455505] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
[    0.456137] pci 0000:00:02.0: vgaarb: bridge control possible
[    0.456566] vgaarb: loaded
[    0.456966] SCSI subsystem initialized
[    0.457443] ACPI: bus type USB registered
[    0.457443] usbcore: registered new interface driver usbfs
[    0.457443] usbcore: registered new interface driver hub
[    0.459007] usbcore: registered new device driver usb
[    0.459411] pps_core: LinuxPPS API ver. 1 registered
[    0.459784] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[    0.460459] PTP clock support registered
[    0.460811] EDAC MC: Ver: 3.0.0
[    0.460811] Registered efivars operations
[    0.460811] NET: Registered PF_ATMPVC protocol family
[    0.460811] NET: Registered PF_ATMSVC protocol family
[    0.463026] NetLabel: Initializing
[    0.463285] NetLabel:  domain hash size = 128
[    0.463610] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
[    0.464045] NetLabel:  unlabeled traffic allowed by default
[    0.464478] PCI: Using ACPI for IRQ routing
[    0.464478] hpet: 3 channels of 0 reserved for per-cpu timers
[    0.464478] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[    0.464659] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
[    0.469073] clocksource: Switched to clocksource kvm-clock
[    0.613063] VFS: Disk quotas dquot_6.6.0
[    0.613541] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.614243] pnp: PnP ACPI init
[    0.614888] pnp: PnP ACPI: found 5 devices
[    0.627028] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[    0.627746] NET: Registered PF_INET protocol family
[    0.628194] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    0.629495] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
[    0.630175] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
[    0.630783] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
[    0.631429] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
[    0.631994] TCP: Hash tables configured (established 16384 bind 16384)
[    0.632532] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
[    0.633052] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
[    0.633645] NET: Registered PF_UNIX/PF_LOCAL protocol family
[    0.634251] RPC: Registered named UNIX socket transport module.
[    0.634695] RPC: Registered udp transport module.
[    0.635052] RPC: Registered tcp transport module.
[    0.635400] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    0.636046] pci 0000:00:12.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
[    0.636800] pci 0000:00:12.0: BAR 6: assigned [mem 0x81400000-0x8143ffff pref]
[    0.637342] pci 0000:00:1e.0: PCI bridge to [bus 01]
[    0.637719] pci 0000:00:1e.0:   bridge window [io  0xd000-0xdfff]
[    0.638726] pci 0000:00:1e.0:   bridge window [mem 0x81200000-0x813fffff]
[    0.640081] pci 0000:00:1f.0: PCI bridge to [bus 02]
[    0.640461] pci 0000:00:1f.0:   bridge window [io  0xc000-0xcfff]
[    0.641329] pci 0000:00:1f.0:   bridge window [mem 0x81000000-0x811fffff]
[    0.642678] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
[    0.643156] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
[    0.643613] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
[    0.644130] pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
[    0.644629] pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
[    0.645148] pci_bus 0000:01: resource 0 [io  0xd000-0xdfff]
[    0.645564] pci_bus 0000:01: resource 1 [mem 0x81200000-0x813fffff]
[    0.646022] pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
[    0.646444] pci_bus 0000:02: resource 1 [mem 0x81000000-0x811fffff]
[    0.646964] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    0.647422] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[    0.659297] ACPI: \_SB_.LNKD: Enabled at IRQ 11
[    0.670856] pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 22418 usecs
[    0.671478] PCI: CLS 0 bytes, default 64
[    0.671914] Unpacking initramfs...
[    0.671992] kvm: no hardware support
[    0.672461] has_svm: not amd or hygon
[    0.672729] kvm: no hardware support
[    0.672998] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x172da757658, max_idle_ns: 440795229495 ns
[    0.679781] Initialise system trusted keyrings
[    0.680172] workingset: timestamp_bits=40 max_order=19 bucket_order=0
[    0.682012] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[    0.682649] NFS: Registering the id_resolver key type
[    0.683047] Key type id_resolver registered
[    0.683365] Key type id_legacy registered
[    0.683684] nfs4filelayout_init: NFSv4 File Layout Driver Registering...
[    0.684197] nfs4flexfilelayout_init: NFSv4 Flexfile Layout Driver Registering...
[    0.684928] Key type cifs.spnego registered
[    0.685253] Key type cifs.idmap registered
[    0.685567] fuse: init (API version 7.34)
[    0.685973] SGI XFS with ACLs, security attributes, quota, no debug enabled
[    0.686838] ceph: loaded (mds proto 32)
[    0.693001] NET: Registered PF_ALG protocol family
[    0.693376] Key type asymmetric registered
[    0.693686] Asymmetric key parser 'x509' registered
[    0.694093] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
[    0.694682] io scheduler mq-deadline registered
[    0.695036] io scheduler kyber registered
[    0.695586] IPMI message handler: version 39.2
[    0.695935] ipmi device interface
[    0.699045] ipmi_si: IPMI System Interface driver
[    0.699522] ipmi_si: Unable to find any System Interface(s)
[    0.699957] IPMI poweroff: Copyright (C) 2004 MontaVista Software - IPMI Powerdown via sys_reboot
[    0.700885] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[    0.701602] ACPI: button: Power Button [PWRF]
[    0.702372] ioatdma: Intel(R) QuickData Technology Driver 5.00
[    0.714529] ACPI: \_SB_.LNKC: Enabled at IRQ 10
[    0.726690] ACPI: \_SB_.LNKA: Enabled at IRQ 10
[    0.738922] ACPI: \_SB_.LNKB: Enabled at IRQ 11
[    0.740426] Free page reporting enabled
[    0.740985] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    0.741530] 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[    0.742944] Non-volatile memory driver v1.3
[    0.743313] Linux agpgart interface v0.103
[    0.743955] [drm] amdgpu kernel modesetting enabled.
[    0.746254] loop: module loaded
[    0.746750] rbd: loaded (major 252)
[    0.747060] Guest personality initialized and is inactive
[    0.747558] VMCI host device registered (name=vmci, major=10, minor=126)
[    0.748051] Initialized host personality
[    0.748372] Loading iSCSI transport class v2.0-870.
[    0.749185] iscsi: registered transport (tcp)
[    0.749528] Adaptec aacraid driver 1.2.1[50983]-custom
[    0.749937] isci: Intel(R) C600 SAS Controller Driver - version 1.2.0
[    0.750447] Microchip SmartPQI Driver (v2.1.10-020)
[    0.750840] megasas: 07.717.02.00-rc1
[    0.751149] mpt3sas version 39.100.00.00 loaded
[    0.752757] scsi host0: Virtio SCSI HBA
[    0.753794] scsi 0:0:0:0: Direct-Access     QEMU     QEMU HARDDISK    2.5+ PQ: 0 ANSI: 5
[    0.764856] VMware PVSCSI driver - version 1.0.7.0-k
[    0.765291] hv_vmbus: registering driver hv_storvsc
[    0.765795] sd 0:0:0:0: Power-on or device reset occurred
[    0.766417] sd 0:0:0:0: [sda] 104857600 512-byte logical blocks: (53.7 GB/50.0 GiB)
[    0.767020] sd 0:0:0:0: [sda] Write Protect is off
[    0.767433] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    0.776596]  sda: sda1 sda2 sda3 sda4 sda5 sda6
[    0.777405] sd 0:0:0:0: [sda] Attached SCSI disk
[    0.778092] sd 0:0:0:0: Attached scsi generic sg0 type 0
[    0.784376] scsi host1: ata_piix
[    0.784762] scsi host2: ata_piix
[    0.785032] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0x10c0 irq 14
[    0.785526] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0x10c8 irq 15
[    0.787219] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
[    0.787824] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
[    0.788848] tun: Universal TUN/TAP device driver, 1.6
[    0.792339] e100: Intel(R) PRO/100 Network Driver
[    0.792711] e100: Copyright(c) 1999-2006 Intel Corporation
[    0.793255] e1000: Intel(R) PRO/1000 Network Driver
[    0.793699] e1000: Copyright (c) 1999-2006 Intel Corporation.
[    0.794140] e1000e: Intel(R) PRO/1000 Network Driver
[    0.794569] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[    0.795149] igb: Intel(R) Gigabit Ethernet Network Driver
[    0.795661] igb: Copyright (c) 2007-2014 Intel Corporation.
[    0.796211] Intel(R) 2.5G Ethernet Linux Driver
[    0.796558] Copyright(c) 2018 Intel Corporation.
[    0.796933] igbvf: Intel(R) Gigabit Virtual Function Network Driver
[    0.797413] igbvf: Copyright (c) 2009 - 2012 Intel Corporation.
[    0.797861] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver
[    0.798379] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[    0.798971] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver
[    0.799589] ixgbevf: Copyright (c) 2009 - 2018 Intel Corporation.
[    0.800102] i40e: Intel(R) Ethernet Connection XL710 Network Driver
[    0.800575] i40e: Copyright (c) 2013 - 2019 Intel Corporation.
[    0.801083] ixgb: Intel(R) PRO/10GbE Network Driver
[    0.801463] ixgb: Copyright (c) 1999-2008 Intel Corporation.
[    0.801905] iavf: Intel(R) Ethernet Adaptive Virtual Function Network Driver
[    0.802438] Copyright (c) 2013 - 2018 Intel Corporation.
[    0.802896] ice: Intel(R) Ethernet Connection E800 Series Linux Driver
[    0.803404] ice: Copyright (c) 2018, Intel Corporation.
[    0.803863] sky2: driver version 1.30
[    0.804308] QLogic 1/10 GbE Converged/Intelligent Ethernet Driver v5.3.66
[    0.804843] QLogic/NetXen Network Driver v4.0.82
[    0.805197] QLogic FastLinQ 4xxxx Core Module qed
[    0.805555] qede init: QLogic FastLinQ 4xxxx Ethernet Driver qede
[    0.806048] VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI
[    0.806588] usbcore: registered new interface driver r8152
[    0.807016] hv_vmbus: registering driver hv_netvsc
[    0.807380] Fusion MPT base driver 3.04.20
[    0.807689] Copyright (c) 1999-2008 LSI Corporation
[    0.808070] Fusion MPT SAS Host driver 3.04.20
[    0.808431] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    0.808921] ehci-pci: EHCI PCI platform driver
[    0.809303] usbcore: registered new interface driver cdc_acm
[    0.809729] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
[    0.810340] usbcore: registered new interface driver usb-storage
[    0.810826] usbcore: registered new interface driver ch341
[    0.811299] usbserial: USB Serial support registered for ch341-uart
[    0.811935] usbcore: registered new interface driver cp210x
[    0.812499] usbserial: USB Serial support registered for cp210x
[    0.813001] usbcore: registered new interface driver ftdi_sio
[    0.813457] usbserial: USB Serial support registered for FTDI USB Serial Device
[    0.814044] usbcore: registered new interface driver pl2303
[    0.814481] usbserial: USB Serial support registered for pl2303
[    0.815077] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[    0.816210] serio: i8042 KBD port at 0x60,0x64 irq 1
[    0.816590] serio: i8042 AUX port at 0x60,0x64 irq 12
[    0.817074] hv_vmbus: registering driver hyperv_keyboard
[    0.817524] mousedev: PS/2 mouse device common for all mice
[    0.818134] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[    0.819162] rtc_cmos 00:04: RTC can wake from S4
[    0.820118] rtc_cmos 00:04: registered as rtc0
[    0.820598] rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
[    0.821475] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[    0.822687] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
[    0.823721] intel_pstate: CPU model not supported
[    0.824151] sdhci: Secure Digital Host Controller Interface driver
[    0.824608] sdhci: Copyright(c) Pierre Ossman
[    0.824969] sdhci-pltfm: SDHCI platform and OF driver helper
[    0.825648] efifb: probing for efifb
[    0.825970] efifb: framebuffer at 0x80000000, using 3072k, total 3072k
[    0.826519] efifb: mode is 1024x768x32, linelength=4096, pages=1
[    0.826964] efifb: scrolling: redraw
[    0.827249] efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
[    0.829779] Console: switching to colour frame buffer device 128x48
[    0.831346] fb0: EFI VGA frame buffer device
[    0.831692] EFI Variables Facility v0.08 2004-May-17
[    0.835250] hid: raw HID events driver (C) Jiri Kosina
[    0.835843] usbcore: registered new interface driver usbhid
[    0.836281] usbhid: USB HID core driver
[    0.836680] hv_utils: Registering HyperV Utility Driver
[    0.837097] hv_vmbus: registering driver hv_utils
[    0.837491] NET: Registered PF_LLC protocol family
[    0.837877] GACT probability NOT on
[    0.838150] Mirror/redirect action on
[    0.838462] Simple TC action Loaded
[    0.838853] netem: version 1.3
[    0.839136] u32 classifier
[    0.839354]     input device check on
[    0.839634]     Actions configured
[    0.840696] xt_time: kernel timezone is -0000
[    0.841057] IPVS: Registered protocols (TCP, UDP)
[    0.841455] IPVS: Connection hash table configured (size=4096, memory=32Kbytes)
[    0.842455] IPVS: ipvs loaded.
[    0.843119] IPVS: [rr] scheduler registered.
[    0.843863] IPVS: [wrr] scheduler registered.
[    0.844592] IPVS: [lc] scheduler registered.
[    0.845315] IPVS: [sh] scheduler registered.
[    0.846064] ipip: IPv4 and MPLS over IPv4 tunneling driver
[    0.847022] gre: GRE over IPv4 demultiplexor driver
[    0.847846] Initializing XFRM netlink socket
[    0.848672] NET: Registered PF_INET6 protocol family
[    0.855249] Segment Routing with IPv6
[    0.855938] In-situ OAM (IOAM) with IPv6
[    0.856702] mip6: Mobile IPv6
[    0.857356] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver
[    0.858415] NET: Registered PF_PACKET protocol family
[    0.859222] Bridge firewalling registered
[    0.859951] NET: Registered PF_APPLETALK protocol family
[    0.860715] NET: Registered PF_X25 protocol family
[    0.861435] X25: Linux Version 0.2
[    0.862071] RPC: Registered rdma transport module.
[    0.862797] RPC: Registered rdma backchannel transport module.
[    0.863634] l2tp_core: L2TP core driver, V2.0
[    0.864341] NET4: DECnet for Linux: V.2.5.68s (C) 1995-2003 Linux DECnet Project Team
[    0.865340] DECnet: Routing cache hash table of 1024 buckets, 16Kbytes
[    0.866204] NET: Registered PF_DECnet protocol family
[    0.866948] NET: Registered PF_PHONET protocol family
[    0.867715] 8021q: 802.1Q VLAN Support v1.8
[    0.871425] DCCP: Activated CCID 2 (TCP-like)
[    0.872146] DCCP: Activated CCID 3 (TCP-Friendly Rate Control)
[    0.873080] sctp: Hash tables configured (bind 256/256)
[    0.874002] NET: Registered PF_RDS protocol family
[    1.111895] Freeing initrd memory: 53860K
[    1.115709] NET: Registered PF_IEEE802154 protocol family
[    1.116601] Key type dns_resolver registered
[    1.117299] Key type ceph registered
[    1.117992] libceph: loaded (mon/osd proto 15/24)
[    1.118701] openvswitch: Open vSwitch switching datapath
[    1.119861] NET: Registered PF_VSOCK protocol family
[    1.120624] mpls_gso: MPLS GSO support
[    1.121575] IPI shorthand broadcast: enabled
[    1.122296] sched_clock: Marking stable (1040947950, 78892447)->(1165457939, -45617542)
[    1.123642] registered taskstats version 1
[    1.124357] Loading compiled-in X.509 certificates
[    1.125769] Loaded X.509 cert 'Sidero Labs, Inc.: Build time throw-away kernel key: fde47fcd1b30b3d7e614c0b28aa5ec27aa2443b8'
[    1.127151] ima: No TPM chip found, activating TPM-bypass!
[    1.127981] ima: Allocated hash algorithm: sha512
[    1.128766] ima: No architecture policies found
[    1.129771] PM:   Magic number: 7:136:345
[    1.130556] tty tty62: hash matches
[    1.131290] printk: console [netcon0] enabled
[    1.132032] netconsole: network logging started
[    1.132995] rdma_rxe: loaded
[    1.133685] cfg80211: Loading compiled-in X.509 certificates for regulatory database
[    1.134979] cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
[    1.137960] Freeing unused kernel image (initmem) memory: 3212K
[    1.138893] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[    1.140133] cfg80211: failed to load regulatory.db
[    1.151027] Write protecting the kernel read-only data: 55296k
[    1.153073] Freeing unused kernel image (text/rodata gap) memory: 2020K
[    1.154188] Freeing unused kernel image (rodata/data gap) memory: 796K
[    1.157141] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[    1.158080] x86/mm: Checking user space page tables
[    1.159054] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[    1.159996] Run /init as init process
[    1.440007] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
[    3.151057] random: crng init done
[    3.155874] [talos] [initramfs] booting Talos v1.3.0
[    3.156882] [talos] [initramfs] mounting the rootfs
[    3.158072] loop0: detected capacity change from 0 to 95616
[    3.191620] [talos] [initramfs] bind mounting /lib/firmware
[    3.193300] [talos] [initramfs] entering the rootfs
[    3.194127] [talos] [initramfs] moving mounts to the new rootfs
[    3.195135] [talos] [initramfs] changing working directory into /root
[    3.196088] [talos] [initramfs] moving /root to /
[    3.196905] [talos] [initramfs] changing root directory
[    3.197769] [talos] [initramfs] cleaning up initramfs
[    3.199031] [talos] [initramfs] executing /sbin/init
[    5.703362] [talos] task setupLogger (1/1): done, 697.332µs
[    5.705102] [talos] phase logger (1/7): done, 2.974147ms
[    5.706532] [talos] phase systemRequirements (2/7): 7 tasks(s)
[    5.707669] [talos] task dropCapabilities (7/7): starting
[    5.709105] [talos] task dropCapabilities (7/7): done, 1.450346ms
[    5.710158] [talos] task enforceKSPPRequirements (1/7): starting
[    5.711189] [talos] task setupSystemDirectory (2/7): starting
[    5.712188] [talos] task mountBPFFS (3/7): starting
[    5.713101] [talos] task mountCgroups (4/7): starting
[    5.714037] [talos] task mountPseudoFilesystems (5/7): starting
[    5.715091] [talos] task setRLimit (6/7): starting
[    5.716139] [talos] task setupSystemDirectory (2/7): done, 7.477293ms
[    5.717492] [talos] task mountCgroups (4/7): done, 8.82567ms
[    5.719994] [talos] task mountPseudoFilesystems (5/7): done, 11.323591ms
[    5.721025] [talos] task setRLimit (6/7): done, 12.339562ms
[    5.723460] [talos] task mountBPFFS (3/7): done, 14.796094ms
[    5.739110] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["1.1.1.1", "8.8.8.8"]}
[    5.741113] [talos] task enforceKSPPRequirements (1/7): done, 32.476743ms
[    5.742404] [talos] phase systemRequirements (2/7): done, 35.896471ms
[    5.743762] [talos] phase integrity (3/7): 1 tasks(s)
[    5.744936] [talos] task writeIMAPolicy (1/1): starting
[    5.746052] audit: type=1807 audit(1674674360.398:2): action=dont_measure fsmagic=0x9fa0 res=1
[    5.747465] audit: type=1807 audit(1674674360.402:3): action=dont_measure fsmagic=0x62656572 res=1
[    5.748889] audit: type=1807 audit(1674674360.402:4): action=dont_measure fsmagic=0x64626720 res=1
[    5.750581] ima: policy update completed
[    5.751538] audit: type=1807 audit(1674674360.402:5): action=dont_measure fsmagic=0x1021994 res=1
[    5.752822] 8021q: adding VLAN 0 to HW filter on device eth0
[    5.753300] audit: type=1807 audit(1674674360.402:6): action=dont_measure fsmagic=0x1cd1 res=1
[    5.755631] audit: type=1807 audit(1674674360.402:7): action=dont_measure fsmagic=0x42494e4d res=1
[    5.757142] audit: type=1807 audit(1674674360.402:8): action=dont_measure fsmagic=0x73636673 res=1
[    5.758609] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["1.1.1.1", "8.8.8.8"]}
[    5.758625] audit: type=1807 audit(1674674360.402:9): action=dont_measure fsmagic=0xf97cff8c res=1
[    5.760612] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["pool.ntp.org"]}
[    5.762111] audit: type=1807 audit(1674674360.402:10): action=dont_measure fsmagic=0x43415d53 res=1
[    5.762113] audit: type=1807 audit(1674674360.402:11): action=dont_measure fsmagic=0x27e0eb res=1
[    5.766649] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["pool.ntp.org"]}
[    5.769155] [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable"}
[    5.772790] [talos] task writeIMAPolicy (1/1): done, 27.861331ms
[    5.774319] [talos] phase integrity (3/7): done, 30.567813ms
[    5.775779] [talos] phase etc (4/7): 2 tasks(s)
[    5.776895] [talos] task createOSReleaseFile (2/2): starting
[    5.778416] [talos] task createOSReleaseFile (2/2): done, 1.540478ms
[    5.779828] [talos] task CreateSystemCgroups (1/2): starting
[    5.782081] [talos] task CreateSystemCgroups (1/2): done, 5.054036ms
[    5.783970] [talos] phase etc (4/7): done, 8.19123ms
[    5.785096] [talos] phase mountSystem (5/7): 1 tasks(s)
[    5.786223] [talos] task mountStatePartition (1/1): starting
[    5.787846] [talos] controller failed {"component": "controller-runtime", "controller": "siderolink.ManagerController", "error": "error accessing SideroLink API: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup demo.talos.mimir-tech.org on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable\""}
[    5.794430] [talos] controller failed {"component": "controller-runtime", "controller": "v1alpha1.EventsSinkController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp [fd28:1f64:7804:c703::1]:4002: connect: network is unreachable\""}
[    5.798323] [talos] controller failed {"component": "controller-runtime", "controller": "runtime.KmsgLogDeliveryController", "error": "error sending logs: dial tcp [fd28:1f64:7804:c703::1]:4001: connect: network is unreachable"}
[    5.801208] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["10.100.1.1"]}
[    5.803990] [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "master-01", "domainname": "mimir-tech.org\u0000"}
[    5.806798] [talos] assigned address {"component": "controller-runtime", "controller": "network.AddressSpecController", "address": "10.100.50.111/24", "link": "eth0"}
[    5.809672] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["10.100.1.1", "10.100.50.100"]}
[    5.812742] [talos] created route {"component": "controller-runtime", "controller": "network.RouteSpecController", "destination": "default", "gateway": "10.100.50.1", "table": "main", "link": "eth0"}
[    5.815631] [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "master-01", "domainname": "mimir-tech.org\u0000"}
[    5.816029] XFS (sda5): Mounting V5 Filesystem
[    5.818468] [talos] adjusting time (slew) by 123.530753ms via 10.100.1.1, state TIME_OK, status STA_NANO | STA_PLL {"component": "controller-runtime", "controller": "time.SyncController"}
[    5.829390] XFS (sda5): Ending clean mount
[    5.831621] [talos] task mountStatePartition (1/1): done, 45.421669ms
[    5.833210] [talos] phase mountSystem (5/7): done, 48.113592ms
[    5.834992] [talos] phase config (6/7): 1 tasks(s)
[    5.836216] [talos] node identity established {"component": "controller-runtime", "controller": "cluster.NodeIdentityController", "node_id": "ttUEXs6THks9VfbDveepb4ZnZ6rHiQhxuPylWoi7TseD"}
[    5.839158] [talos] task loadConfig (1/1): starting
[    5.841345] [talos] task loadConfig (1/1): persistence is enabled, using existing config on disk
[    5.842846] [talos] task loadConfig (1/1): done, 6.509805ms
[    5.849267] [talos] phase config (6/7): done, 14.275299ms
[    5.891464] [talos] phase unmountSystem (7/7): 1 tasks(s)
[    5.892695] [talos] task unmountStatePartition (1/1): starting
[    5.894128] XFS (sda5): Unmounting Filesystem
[    5.899169] [talos] task unmountStatePartition (1/1): done, 6.472208ms
[    5.900443] [talos] phase unmountSystem (7/7): done, 8.980791ms
[    5.901616] [talos] initialize sequence: done: 199.978781ms
[    5.902884] [talos] install sequence: 0 phase(s)
[    5.904019] [talos] install sequence: done: 1.134099ms
[    5.905242] [talos] boot sequence: 22 phase(s)
[    5.906326] [talos] phase saveStateEncryptionConfig (1/22): 1 tasks(s)
[    5.907581] [talos] task SaveStateEncryptionConfig (1/1): starting
[    5.908759] [talos] task SaveStateEncryptionConfig (1/1): done, 1.178744ms
[    5.910841] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["1.1.1.1", "8.8.8.8"]}
[    5.913355] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["pool.ntp.org"]}
[    5.916013] [talos] removed address 10.100.50.111/24 from "eth0" {"component": "controller-runtime", "controller": "network.AddressSpecController"}
[    5.918362] [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "talos-2hy-vma", "domainname": ""}
[    5.921094] [talos] service[machined](Preparing): Running pre state
[    5.922289] [talos] service[apid](Waiting): Waiting for service "containerd" to be "up", api certificates
[    5.925879] [talos] phase saveStateEncryptionConfig (1/22): done, 4.641811ms
[    5.927219] [talos] controller failed {"component": "controller-runtime", "controller": "network.AddressMergeController", "error": "1 conflict(s) detected"}
[    5.929674] [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["10.100.1.1", "10.100.50.100"]}
[    5.932152] [talos] controller failed {"component": "controller-runtime", "controller": "network.RouteSpecController", "error": "1 error occurred:\n\t* error adding route: netlink receive: network is unreachable, message {Family:2 DstLength:0 SrcLength:0 Tos:0 Table:0 Protocol:3 Scope:0 Type:1 Flags:0 Attributes:{Dst:<nil> Src:10.100.50.111 Gateway:10.100.50.1 OutIface:4 Priority:1024 Table:254 Mark:0 Pref:<nil> Expires:<nil> Metrics:<nil> Multipath:[]}}\n\n"}
[    5.938071] [talos] setting time servers {"component": "controller-runtime", "controller": "network.TimeServerSpecController", "addresses": ["10.100.1.1"]}
[    5.940593] [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable"}
[    5.943648] [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "master-01", "domainname": "mimir-tech.org\u0000"}
[    5.946419] [talos] service[machined](Preparing): Creating service runner
[    5.947718] [talos] phase mountState (2/22): 1 tasks(s)
[    5.948866] [talos] ntp query error with server "10.100.1.1" {"component": "controller-runtime", "controller": "time.SyncController", "error": "dial udp 10.100.1.1:123: connect: network is unreachable"}
[    5.951774] [talos] service[machined](Running): Service started as goroutine
[    5.953095] [talos] task mountStatePartition (1/1): starting
[    5.962405] XFS (sda5): Mounting V5 Filesystem
[    5.969134] XFS (sda5): Ending clean mount
[    5.971259] [talos] task mountStatePartition (1/1): done, 22.359047ms
[    5.972603] [talos] phase mountState (2/22): done, 45.374786ms
[    5.973931] [talos] phase validateConfig (3/22): 1 tasks(s)
[    5.975196] [talos] task validateConfig (1/1): starting
[    5.976458] [talos] task validateConfig (1/1): done, 1.260214ms
[    5.977696] [talos] phase validateConfig (3/22): done, 3.765732ms
[    5.978987] [talos] phase saveConfig (4/22): 1 tasks(s)
[    5.980216] [talos] task saveConfig (1/1): starting
[    5.981887] [talos] task saveConfig (1/1): done, 1.669131ms
[    5.983155] [talos] phase saveConfig (4/22): done, 4.167282ms
[    5.984354] [talos] phase memorySizeCheck (5/22): 1 tasks(s)
[    5.985529] [talos] task memorySizeCheck (1/1): starting
[    5.986761] [talos] NOTE: recommended memory size is 3946 MiB
[    5.987899] [talos] NOTE: current total memory size is 1939 MiB
[    5.989066] [talos] task memorySizeCheck (1/1): done, 3.53582ms
[    5.990225] [talos] phase memorySizeCheck (5/22): done, 5.872082ms
[    5.991393] [talos] phase diskSizeCheck (6/22): 1 tasks(s)
[    5.992488] [talos] task diskSizeCheck (1/1): starting
[    5.993543] [talos] disk size is OK
[    5.994453] [talos] disk size is 51200 MiB
[    5.995418] [talos] task diskSizeCheck (1/1): done, 2.929596ms
[    5.996499] [talos] phase diskSizeCheck (6/22): done, 5.106189ms
[    5.997552] [talos] phase env (7/22): 1 tasks(s)
[    5.998499] [talos] task setUserEnvVars (1/1): starting
[    5.999499] [talos] task setUserEnvVars (1/1): done, 1.000558ms
[    6.000518] [talos] phase env (7/22): done, 2.966559ms
[    6.001461] [talos] phase containerd (8/22): 1 tasks(s)
[    6.002402] [talos] task startContainerd (1/1): starting
[    6.003359] [talos] service[containerd](Preparing): Running pre state
[    6.004354] [talos] service[containerd](Preparing): Creating service runner
[    6.476354] [talos] hello failed {"component": "controller-runtime", "controller": "cluster.DiscoveryServiceController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup discovery.talos.dev on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable\"", "endpoint": "discovery.talos.dev:443"}
[    6.590768] [talos] controller failed {"component": "controller-runtime", "controller": "network.AddressMergeController", "error": "1 conflict(s) detected"}
[    6.674640] [talos] controller failed {"component": "controller-runtime", "controller": "network.RouteSpecController", "error": "1 error occurred:\n\t* error adding route: netlink receive: network is unreachable, message {Family:2 DstLength:0 SrcLength:0 Tos:0 Table:0 Protocol:3 Scope:0 Type:1 Flags:0 Attributes:{Dst:<nil> Src:10.100.50.111 Gateway:10.100.50.1 OutIface:4 Priority:1024 Table:254 Mark:0 Pref:<nil> Expires:<nil> Metrics:<nil> Multipath:[]}}\n\n"}
[    6.923085] [talos] service[apid](Waiting): Waiting for service "containerd" to be "up"
[    6.943960] [talos] service[machined](Running): Health check successful
[    6.947612] [talos] ntp query error with server "10.100.1.1" {"component": "controller-runtime", "controller": "time.SyncController", "error": "dial udp 10.100.1.1:123: connect: network is unreachable"}
[    7.179565] [talos] service[containerd](Running): Process Process(["/bin/containerd" "--address" "/system/run/containerd/containerd.sock" "--state" "/system/run/containerd" "--root" "/system/var/lib/containerd"]) started with PID 678
[    7.248943] [talos] service[containerd](Running): Health check successful
[    7.250307] [talos] service[apid](Preparing): Running pre state
[    7.251452] [talos] task startContainerd (1/1): done, 1.254956473s
[    7.252551] [talos] phase containerd (8/22): done, 1.258080952s
[    7.253587] [talos] service[apid](Preparing): Creating service runner
[    7.254693] [talos] phase dbus (9/22): 1 tasks(s)
[    7.255728] [talos] task startDBus (1/1): starting
[    7.260883] [talos] task startDBus (1/1): done, 5.192824ms
[    7.262101] [talos] phase dbus (9/22): done, 8.570685ms
[    7.263147] [talos] phase ephemeral (10/22): 1 tasks(s)
[    7.264235] [talos] task mountEphemeralPartition (1/1): starting
[    7.273921] [talos] formatting the partition "/dev/sda6" as "xfs" with label "EPHEMERAL"
[    7.280877] [talos] controller failed {"component": "controller-runtime", "controller": "network.RouteSpecController", "error": "1 error occurred:\n\t* error adding route: netlink receive: network is unreachable, message {Family:2 DstLength:0 SrcLength:0 Tos:0 Table:0 Protocol:3 Scope:0 Type:1 Flags:0 Attributes:{Dst:<nil> Src:10.100.50.111 Gateway:10.100.50.1 OutIface:4 Priority:1024 Table:254 Mark:0 Pref:<nil> Expires:<nil> Metrics:<nil> Multipath:[]}}\n\n"}
[    7.343610] XFS (sda6): Mounting V5 Filesystem
[    7.352274] XFS (sda6): Ending clean mount
[    7.377599] [talos] task mountEphemeralPartition (1/1): done, 114.214523ms
[    7.378840] [talos] phase ephemeral (10/22): done, 116.566059ms
[    7.380436] [talos] phase var (11/22): 1 tasks(s)
[    7.381486] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletServiceController", "error": "error writing kubelet PKI: open /etc/kubernetes/bootstrap-kubeconfig: read-only file system"}
[    7.384453] [talos] task setupVarDirectory (1/1): starting
[    7.388149] [talos] task setupVarDirectory (1/1): done, 3.795461ms
[    7.389377] [talos] phase var (11/22): done, 9.006233ms
[    7.390601] [talos] phase overlay (12/22): 1 tasks(s)
[    7.391912] [talos] task mountOverlayFilesystems (1/1): starting
[    7.393985] [talos] task mountOverlayFilesystems (1/1): done, 2.088087ms
[    7.395258] [talos] phase overlay (12/22): done, 4.692538ms
[    7.396375] [talos] phase legacyCleanup (13/22): 1 tasks(s)
[    7.397452] [talos] task cleanupLegacyStaticPodFiles (1/1): starting
[    7.398796] [talos] task cleanupLegacyStaticPodFiles (1/1): done, 1.353771ms
[    7.400084] [talos] phase legacyCleanup (13/22): done, 3.735603ms
[    7.401180] [talos] phase udevSetup (14/22): 1 tasks(s)
[    7.402325] [talos] task writeUdevRules (1/1): starting
[    7.403485] [talos] task writeUdevRules (1/1): done, 1.168336ms
[    7.404576] [talos] phase udevSetup (14/22): done, 3.421549ms
[    7.405607] [talos] phase udevd (15/22): 1 tasks(s)
[    7.406571] [talos] task startUdevd (1/1): starting
[    7.407698] [talos] service[udevd](Preparing): Running pre state
[    7.433759] [talos] service[udevd](Preparing): Creating service runner
[    7.443876] udevd[701]: starting version 3.2.11
[    7.444967] [talos] service[udevd](Running): Process Process(["/sbin/udevd" "--resolve-names=never"]) started with PID 701
[    7.453299] udevd[701]: starting eudev-3.2.11
[    7.547745] [talos] hello failed {"component": "controller-runtime", "controller": "cluster.DiscoveryServiceController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup discovery.talos.dev on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable\"", "endpoint": "discovery.talos.dev:443"}
[    7.615117] [talos] assigned address {"component": "controller-runtime", "controller": "network.AddressSpecController", "address": "10.100.50.111/24", "link": "eth0"}
[    7.617685] [talos] controller failed {"component": "controller-runtime", "controller": "runtime.KmsgLogDeliveryController", "error": "error sending logs: dial tcp [fd28:1f64:7804:c703::1]:4001: connect: network is unreachable"}
[    7.620605] [talos] controller failed {"component": "controller-runtime", "controller": "siderolink.ManagerController", "error": "error accessing SideroLink API: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup demo.talos.mimir-tech.org on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable\""}
[    7.637610] [talos] controller failed {"component": "controller-runtime", "controller": "v1alpha1.EventsSinkController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp [fd28:1f64:7804:c703::1]:4002: connect: network is unreachable\""}
[    7.685095] [talos] service[udevd](Running): Health check successful
[    7.686320] [talos] task startUdevd (1/1): done, 281.77333ms
[    7.687499] [talos] phase udevd (15/22): done, 283.929357ms
[    7.688618] [talos] phase userDisks (16/22): 1 tasks(s)
[    7.689729] [talos] task mountUserDisks (1/1): starting
[    7.690829] [talos] task mountUserDisks (1/1): done, 1.108316ms
[    7.692104] [talos] phase userDisks (16/22): done, 3.511903ms
[    7.693365] [talos] phase userSetup (17/22): 1 tasks(s)
[    7.694443] [talos] task writeUserFiles (1/1): starting
[    7.695535] [talos] task writeUserFiles (1/1): done, 1.09981ms
[    7.696661] [talos] phase userSetup (17/22): done, 3.319667ms
[    7.697757] [talos] phase lvm (18/22): 1 tasks(s)
[    7.698780] [talos] task activateLogicalVolumes (1/1): starting
[    7.832396] [talos] task activateLogicalVolumes (1/1): done, 134.573045ms
[    7.833721] [talos] phase lvm (18/22): done, 136.946987ms
[    7.834801] [talos] phase startEverything (19/22): 1 tasks(s)
[    7.835969] [talos] task startAllServices (1/1): starting
[    7.837107] [talos] task startAllServices (1/1): waiting for 8 services
[    7.838437] [talos] service[cri](Waiting): Waiting for network
[    7.839652] [talos] service[trustd](Waiting): Waiting for service "containerd" to be "up", time sync, network
[    7.841126] [talos] service[etcd](Waiting): Waiting for service "cri" to be "up", time sync, network, etcd spec
[    7.842655] [talos] task startAllServices (1/1): service "apid" to be "up", service "containerd" to be "up", service "cri" to be "up", service "etcd" to be "up", service "kubelet" to be "up", service "machined" to be "up", service "trustd" to be "up", service "udevd" to be "up"
[    7.846804] [talos] service[cri](Preparing): Running pre state
[    7.848146] [talos] service[cri](Preparing): Creating service runner
[    7.849386] [talos] service[trustd](Preparing): Running pre state
[    7.850729] [talos] service[trustd](Preparing): Creating service runner
[    7.858720] [talos] service[cri](Running): Process Process(["/bin/containerd" "--address" "/run/containerd/containerd.sock" "--config" "/etc/cri/containerd.toml"]) started with PID 1575
[    7.943607] [talos] ntp query error with server "10.100.1.1" {"component": "controller-runtime", "controller": "time.SyncController", "error": "dial udp 10.100.1.1:123: connect: network is unreachable"}
[    8.013829] [talos] service[kubelet](Waiting): Waiting for service "cri" to be "up", time sync, network
[    8.201829] [talos] controller failed {"component": "controller-runtime", "controller": "v1alpha1.EventsSinkController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp [fd28:1f64:7804:c703::1]:4002: connect: network is unreachable\""}
[    8.208373] [talos] controller failed {"component": "controller-runtime", "controller": "runtime.KmsgLogDeliveryController", "error": "error sending logs: dial tcp [fd28:1f64:7804:c703::1]:4001: connect: network is unreachable"}
[    8.306764] [talos] hello failed {"component": "controller-runtime", "controller": "cluster.DiscoveryServiceController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup discovery.talos.dev on 8.8.8.8:53: dial udp 8.8.8.8:53: connect: network is unreachable\"", "endpoint": "discovery.talos.dev:443"}
[    8.383531] [talos] created route {"component": "controller-runtime", "controller": "network.RouteSpecController", "destination": "default", "gateway": "10.100.50.1", "table": "main", "link": "eth0"}
[    8.421087] [talos] service[apid](Running): Started task apid (PID 1620) for container apid
[    8.428258] [talos] service[trustd](Running): Started task trustd (PID 1619) for container trustd
[    8.548718] [talos] created new link {"component": "controller-runtime", "controller": "network.LinkSpecController", "link": "siderolink", "kind": "wireguard"}
[    8.558609] [talos] reconfigured wireguard link {"component": "controller-runtime", "controller": "network.LinkSpecController", "link": "siderolink", "peers": 1}
[    8.561686] [talos] changed MTU for the link {"component": "controller-runtime", "controller": "network.LinkSpecController", "link": "siderolink", "mtu": 1280}
[    8.565863] [talos] assigned address {"component": "controller-runtime", "controller": "network.AddressSpecController", "address": "fd28:1f64:7804:c703:f263:9a2f:c0d:d196/64", "link": "siderolink"}
[    8.835815] [talos] service[etcd](Waiting): Waiting for service "cri" to be "up", etcd spec
[    8.847251] [talos] service[cri](Running): Health check successful
[    8.848587] [talos] service[kubelet](Preparing): Running pre state
[    8.850303] [talos] service[trustd](Running): Health check successful
[    9.829206] [talos] service[etcd](Waiting): Waiting for etcd spec
[   10.856972] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp: lookup cp.talos.mimir-tech.org on 8.8.8.8:53: no such host"}
[   10.863066] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp: lookup cp.talos.mimir-tech.org on 8.8.8.8:53: no such host"}
[   11.144957] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   11.612126] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   12.097756] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   12.574427] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   13.226607] [talos] service[apid](Running): Health check successful
[   13.725530] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   13.978225] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   16.730493] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   18.743243] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   21.887382] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   22.770181] [talos] task startAllServices (1/1): service "etcd" to be "up", service "kubelet" to be "up"
[   27.782804] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   30.676982] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   32.043023] nf_conntrack: default automatic helper assignment has been turned off for security reasons and CT-based firewall rule not found. Use the iptables CT target to attach helpers instead.
[   37.741932] [talos] task startAllServices (1/1): service "etcd" to be "up", service "kubelet" to be "up"
[   40.189411] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   41.471028] [talos] service[kubelet](Preparing): Creating service runner
[   41.566816] [talos] service[kubelet](Running): Started task kubelet (PID 1704) for container kubelet
[   43.494862] [talos] service[kubelet](Running): Health check successful
[   45.118559] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   50.627832] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
[   52.495788] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   52.734570] [talos] task startAllServices (1/1): service "etcd" to be "up"
[   62.967266] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   66.180187] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
[   67.730723] [talos] task startAllServices (1/1): service "etcd" to be "up"
[   81.862341] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
[   82.727725] [talos] task startAllServices (1/1): service "etcd" to be "up"
[   88.051749] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   96.135284] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[   97.725313] [talos] task startAllServices (1/1): service "etcd" to be "up"
[   97.845164] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
[  112.723632] [talos] task startAllServices (1/1): service "etcd" to be "up"
[  114.513524] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
[  127.723938] [talos] task startAllServices (1/1): service "etcd" to be "up"
[  128.739632] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[  132.003789] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
[  142.722445] [talos] task startAllServices (1/1): service "etcd" to be "up"
[  148.592282] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[  152.581046] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
[  157.721889] [talos] task startAllServices (1/1): service "etcd" to be "up"
[  170.494565] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[  171.566622] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
[  172.720988] [talos] task startAllServices (1/1): service "etcd" to be "up"
[  187.720497] [talos] task startAllServices (1/1): service "etcd" to be "up"
[  199.003597] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
[  202.718781] [talos] task startAllServices (1/1): service "etcd" to be "up"
smira commented 1 year ago

I'm confused here, the SideroLink seems to be up on Talos side from the logs. Do you see the logs coming to the sidero-controller-manager container as described in the docs?

japtain-cack commented 1 year ago

I deleted the metalmachine, machine, and rebooted the master-01 vm. Here are the logs from the sidero-controller-manager-bcc94d547-gfx2w pod.

2023/01/26 21:07:26 HTTP GET /boot.ipxe 127.0.0.1:59710
2023/01/26 21:07:26 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:07:36 HTTP GET /boot.ipxe 127.0.0.1:59710
2023/01/26 21:07:36 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:07:46 HTTP GET /boot.ipxe 127.0.0.1:59710
2023/01/26 21:07:46 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:07:56 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:07:56 HTTP GET /boot.ipxe 127.0.0.1:59710
2023/01/26 21:08:06 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:08:06 HTTP GET /boot.ipxe 127.0.0.1:59710
2023/01/26 21:08:10 sending block 0: code=8, error: User aborted the transfer
2023/01/26 21:08:10 1047552 bytes sent
2023/01/26 21:08:10 open /var/lib/sidero/tftp/autoexec.ipxe: no such file or directory
2023/01/26 21:08:16 HTTP GET /boot.ipxe 127.0.0.1:59710
2023/01/26 21:08:16 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:08:20 HTTP GET /ipxe?uuid=b80c6841-fb72-4fa2-9fec-7bc8eb7faab3&mac=ba-55-11-e2-87-28&domain=mimir-tech.org&hostname=master-01.talos.mimir-tech.org&serial=&arch=x86_64 10.100.50.111:19464
2023/01/26 21:08:20 Using "agent-amd64" environment for "b80c6841-fb72-4fa2-9fec-7bc8eb7faab3"
2023/01/26 21:08:20 HTTP GET /env/agent-amd64/vmlinuz 10.100.50.111:19464
2023/01/26 21:08:21 HTTP GET /env/agent-amd64/initramfs.xz 10.100.50.111:19464
2023/01/26 21:08:24 Server "b80c6841-fb72-4fa2-9fec-7bc8eb7faab3" needs wipe
1.6747673047996008e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/any"}
1.6747673047996233e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/talos-workers"}
1.6747673047996006e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/talos-masters"}
1.6747673049547021e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/talos-workers"}
1.6747673049547796e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/any"}
1.674767304954702e+09   INFO    controllers.ServerClass reconciling     {"serverclass": "/talos-masters"}
1.6747673049711816e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/talos-masters"}
1.674767304971973e+09   INFO    controllers.ServerClass reconciling     {"serverclass": "/any"}
1.6747673049719737e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/talos-workers"}
1.6747673049836452e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/any"}
1.6747673049836698e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/talos-masters"}
1.674767304983673e+09   INFO    controllers.ServerClass reconciling     {"serverclass": "/talos-workers"}
1.6747673057785325e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/any"}
1.6747673057786934e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/talos-masters"}
1.6747673057791204e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/talos-workers"}
1.6747673057872403e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/any"}
1.6747673057921765e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/talos-masters"}
2023/01/26 21:08:26 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:08:26 HTTP GET /boot.ipxe 127.0.0.1:59710
2023/01/26 21:08:36 HTTP GET /boot.ipxe 127.0.0.1:59710
2023/01/26 21:08:36 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:08:45 sending block 0: code=8, error: User aborted the transfer
2023/01/26 21:08:46 1047552 bytes sent
2023/01/26 21:08:46 open /var/lib/sidero/tftp/autoexec.ipxe: no such file or directory
2023/01/26 21:08:46 HTTP GET /boot.ipxe 127.0.0.1:59710
2023/01/26 21:08:46 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:08:56 HTTP GET /ipxe?uuid=b80c6841-fb72-4fa2-9fec-7bc8eb7faab3&mac=ba-55-11-e2-87-28&domain=mimir-tech.org&hostname=master-01.talos.mimir-tech.org&serial=&arch=x86_64 10.100.50.111:19464
2023/01/26 21:08:56 Using "default" environment for "b80c6841-fb72-4fa2-9fec-7bc8eb7faab3"
1.674767336224955e+09   INFO    controllers.ServerClass reconciling     {"serverclass": "/any"}
1.6747673362250016e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/talos-workers"}
1.6747673362250128e+09  INFO    controllers.ServerClass reconciling     {"serverclass": "/talos-masters"}
2023/01/26 21:08:56 HTTP GET /env/default/vmlinuz 10.100.50.111:19464
2023/01/26 21:08:56 HTTP GET /env/default/initramfs.xz 10.100.50.111:19464
2023/01/26 21:08:56 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:08:56 HTTP GET /boot.ipxe 127.0.0.1:59710
2023/01/26 21:09:06 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:09:06 HTTP GET /boot.ipxe 127.0.0.1:59710
2023/01/26 21:09:09 received metadata request for uuid: b80c6841-fb72-4fa2-9fec-7bc8eb7faab3
2023/01/26 21:09:09 successfully returned metadata for "b80c6841-fb72-4fa2-9fec-7bc8eb7faab3"
2023/01/26 21:09:16 HTTP GET /boot.ipxe 127.0.0.1:59710
2023/01/26 21:09:16 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:09:26 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:09:26 HTTP GET /boot.ipxe 127.0.0.1:59710
2023/01/26 21:09:36 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:09:36 HTTP GET /boot.ipxe 127.0.0.1:59710
2023/01/26 21:09:46 HTTP GET /boot.ipxe 127.0.0.1:59710
2023/01/26 21:09:46 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:09:56 HTTP GET /boot.ipxe 127.0.0.1:59704
2023/01/26 21:09:56 HTTP GET /boot.ipxe 127.0.0.1:59710
japtain-cack commented 1 year ago

The VM's serial output shows the same logs as above: it keeps looping through the same communication/authorization errors.

japtain-cack commented 1 year ago

Based on my understanding, master-01 is supposed to come up and listen on port 6443 for kube-api requests. I can ping master-01, but the port check to 6443 fails. I do get a successful connection to port 50000, though. So I believe this is an issue with Talos rather than network/DNS/DHCP. It may be something I missed, but the kube-api doesn't seem to be coming up properly for some reason.

I've completely destroyed and redeployed Sidero a few times now, just to make sure there wasn't any cruft left from a previous deployment, but I can't get Kubernetes on the new node to come up properly in any scenario.

From what I can tell, though, the root of the issue is that port 6443 on master-01 is inaccessible or down.
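The port checks described above can be scripted so the behavior is easy to re-verify after each redeploy. A minimal sketch using bash's built-in `/dev/tcp` redirection (the hostname is the one from this thread; `check_port` is a helper defined here, not a Sidero/Talos tool):

```shell
#!/usr/bin/env bash
# Probe a TCP port with a short timeout; returns 0 if something accepts the connection.
check_port() {
  local host=$1 port=$2
  timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# apid should answer on 50000; kube-apiserver on 6443 will stay down
# until etcd has been bootstrapped.
for port in 50000 6443; do
  if check_port master-01.talos.mimir-tech.org "$port"; then
    echo "port ${port}: open"
  else
    echo "port ${port}: closed/unreachable"
  fi
done
```

This matches the symptom described: 50000 open (Talos API up) while 6443 is refused (kube-apiserver not running).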

smira commented 1 year ago

I still don't get it - are the machine console logs coming back to Sidero as described in the docs?

As for the ports, it's expected.

Talos starts its own API on 50000, but kube-apiserver doesn't start, as etcd is not running yet. Since it's a new cluster, it expects a bootstrap API call to start the first etcd member. But as the Machine doesn't have an IP listed, CACPPT won't be able to bootstrap it.

But looks like I see the issue.

Your node has the IP 10.100.50.111/24, which is within the Talos default for Kubernetes service CIDRs: https://www.talos.dev/v1.3/reference/configuration/#clusternetworkconfig

This IP is not announced, as Talos thinks it belongs to Kubernetes. So the Machine gets no IP, and it never gets bootstrapped. Node addresses shouldn't intersect with the pod/service subnets.

Fixing this is either changing the nodes' IP range or moving the pod/service subnets in the cluster network config so they no longer overlap.
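In machine-config terms, the subnet-side fix would look roughly like this (field names per the Talos v1.3 configuration reference linked above; the replacement ranges are examples, not recommendations):

```yaml
# Cluster-scope config patch: move the pod/service subnets away from the
# 10.100.0.0/16 node network. The defaults are 10.244.0.0/16 for pods and
# 10.96.0.0/12 for services -- and 10.96.0.0/12 spans 10.96.0.0-10.111.255.255,
# which swallows the node IP 10.100.50.111.
cluster:
  network:
    podSubnets:
      - 172.20.0.0/16   # example range; pick one unused in your lab
    serviceSubnets:
      - 172.21.0.0/16   # example range; pick one unused in your lab
```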

japtain-cack commented 1 year ago

That makes perfect sense; I should have looked at that. I will rebuild and change the CIDR ranges.

japtain-cack commented 1 year ago

Ok, that got me a bit further. The next step was to get the kubeconfig for the new cluster, so I ran these commands:

kubectl get talosconfig -n sidero-system -l cluster.x-k8s.io/cluster-name=talos -o yaml -o jsonpath='{.items[0].status.talosConfig}' > talos-talosconfig.yaml

talosctl --talosconfig talos-talosconfig.yaml kubeconfig --nodes 10.100.50.111

and that generated my config. I should note that I tried setting the nodes in the generated config to cp.talos.mimir-tech.org or master-01.talos.mimir-tech.org, and I got certificate SAN errors when running the talosctl command above; I was only able to get the kubeconfig after using the IP. I should also mention that even though I have nodes defined in talos-talosconfig.yaml, the talosctl CLI still required me to pass the --nodes arg.

The errors from the talosctl command said my DNS names didn't match any allowed SAN; only master-01 and the IP 10.100.50.111 were accepted.
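When debugging SAN mismatches like this, it helps to dump the SANs the serving certificate actually presents rather than guessing from error messages. A sketch using standard `openssl` tooling against the Talos API port from this thread (note this inspects apid's certificate on 50000, not the kube-apiserver one; `-ext` requires OpenSSL 1.1.1+):

```shell
#!/usr/bin/env bash
# Fetch the certificate served on the Talos API port and print its
# Subject Alternative Name extension.
echo | openssl s_client -connect 10.100.50.111:50000 2>/dev/null \
  | openssl x509 -noout -ext subjectAltName
```

Any DNS name you put in the talosconfig `nodes`/`endpoints` has to appear in that list (or be covered by `machine.certSANs`) for verification to pass.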

Now I'm getting the following error when attempting to run a kubectl command:

kubectl --context=admin@talos cluster-info
E0127 21:51:12.915626  234994 memcache.go:238] couldn't get current server API group list: Get "https://cp.talos.mimir-tech.org:6443/api?timeout=32s": dial tcp 10.100.50.111:6443: connect: connection refused
E0127 21:51:12.921004  234994 memcache.go:238] couldn't get current server API group list: Get "https://cp.talos.mimir-tech.org:6443/api?timeout=32s": dial tcp 10.100.50.111:6443: connect: connection refused
E0127 21:51:12.922574  234994 memcache.go:238] couldn't get current server API group list: Get "https://cp.talos.mimir-tech.org:6443/api?timeout=32s": dial tcp 10.100.50.111:6443: connect: connection refused
E0127 21:51:12.923664  234994 memcache.go:238] couldn't get current server API group list: Get "https://cp.talos.mimir-tech.org:6443/api?timeout=32s": dial tcp 10.100.50.111:6443: connect: connection refused
E0127 21:51:12.924733  234994 memcache.go:238] couldn't get current server API group list: Get "https://cp.talos.mimir-tech.org:6443/api?timeout=32s": dial tcp 10.100.50.111:6443: connect: connection refused

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server cp.talos.mimir-tech.org:6443 was refused - did you specify the right host or port?

The master-01 vm is still generating connection errors:

[ 1988.744218] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[ 1994.219533] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
[ 2002.765064] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2017.764274] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2024.669281] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[ 2032.763765] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2047.763787] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2049.707202] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
[ 2050.318230] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[ 2062.764086] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2066.946863] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[ 2077.764151] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2092.763744] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2107.763385] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2118.177863] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[ 2119.758427] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
[ 2122.763408] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2132.431565] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[ 2137.763537] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2152.762749] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2166.434198] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[ 2167.762820] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2182.762863] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2197.762726] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2212.762652] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2213.337579] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.NodeLabelsApplyController", "error": "1 error(s) occurred:\n\terror getting node: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/nodes/master-01?timeout=30s\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[ 2220.822761] [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
[ 2222.434007] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[ 2227.762186] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2242.761474] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2257.761940] [talos] task startAllServices (1/1): service "etcd" to be "up"
[ 2270.569880] [talos] kubernetes endpoint watch error {"component": "controller-runtime", "controller": "k8s.EndpointController", "error": "failed to list *v1.Endpoints: Get \"https://cp.talos.mimir-tech.org:6443/api/v1/namespaces/default/endpoints?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0\": dial tcp 10.100.50.111:6443: connect: connection refused"}
[ 2272.761168] [talos] task startAllServices (1/1): service "etcd" to be "up"

I think this may be related to the certSANs, but I'm unsure how to set those properly. I tried setting them for the MachineConfig and ClusterConfig via the following method, but this seems to have no effect.

apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: TalosControlPlane
metadata:
  name: talos-cp
spec:
  controlPlaneConfig:
    controlplane:
      generateType: controlplane
      talosVersion: v1.3.2
      configPatches:
      - op: add
        path: /machine/network
        value:
          interfaces:
          - interface: eth0
            dhcp: true
            vip:
              ip: 10.100.50.52
      - op: add
        path: /machine/certSANs
        value:
          - cp.talos.mimir-tech.org
          - "*.talos.mimir-tech.org"
      - op: add
        path: /cluster/apiServer/certSANs
        value:
          - cp.talos.mimir-tech.org
          - "*.talos.mimir-tech.org"
japtain-cack commented 1 year ago

Where does the MachineConfigs.config.talos live in kubernetes? I can edit that config with the following command talosctl --talosconfig talos-talosconfig.yaml -n 10.100.50.111 edit mc --immediate, but I would like to add these configurations to my kustomize if possible.

I would also like to ask if it would be possible to provide bootstrap documentation based on kustomize or plain manifests. Clusterctl is easy, but it's hard to get a good grasp of all the resources/manifests that actually make up the cluster and what all the configurations are. I believe there are more options available than exist in the documentation.

japtain-cack commented 1 year ago

I noticed a message in the serial output saying If this is the first node in the cluster, run talosctl bootstrap. I didn't see this in the Sidero docs, so I was assuming that this was performed by Sidero automatically. However, I ran this bootstrap command, talosctl --talosconfig talos-talosconfig.yaml bootstrap -n cp.talos.mimir-tech.org, and I can now connect to the cluster. I then cut DNS for cp.talos over to the VIP IP and that seems to be working properly.

So I have a few questions after working through the Sidero docs. Is the talosctl bootstrap command supposed to be needed during the Sidero bootstrap process? Also, I'm having trouble understanding which resources I need to update, or add a patch block, to configure the machine/cluster configs for the new cluster.

smira commented 1 year ago

Where does the MachineConfigs.config.talos live in kubernetes? I can edit that config with the following command talosctl --talosconfig talos-talosconfig.yaml -n 10.100.50.111 edit mc --immediate, but I would like to add these configurations to my kustomize if possible.

It doesn't live anywhere specifically; the generated machine config for each node is stored as a bootstrap secret in the management cluster (a CAPI specific).

CABPT automatically generates machine configuration for each new machine (Sidero/CAPI will push the machine config only once, on the first boot).

Talos machine configuration can be customized with configuration patches: https://www.talos.dev/v1.3/talos-guides/configuration/patching/, Sidero/CAPI only supports JSON patches.
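For reference, a JSON patch (RFC 6902) is a list of op/path/value entries. A minimal sketch of a standalone patch file (the hostname value is illustrative):

```yaml
# patch.yaml -- RFC 6902 entries (op/path/value), the only patch form
# Sidero/CAPI accepts; the hostname value here is illustrative.
- op: add
  path: /machine/network/hostname
  value: master-01
```

Outside CAPI, the same patch shape can be applied to a running node with `talosctl -n <node> patch mc --patch @patch.yaml`.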

Sidero docs:

CABPT: https://github.com/siderolabs/cluster-api-bootstrap-provider-talos/#readme

smira commented 1 year ago

I noticed a message in the serial output saying If this is the first node in the cluster, run talosctl bootstrap. I didn't see this in the Sidero docs, so I was assuming that this was performed by Sidero automatically.

CACPPT bootstraps the cluster automatically; if it doesn't, there's an issue like the one I described above with the IP being part of the Kubernetes pod/service CIDR. If things work correctly, you should see the machine's IP address under the Machine's .status. If you don't see it, that should be fixed first.

smira commented 1 year ago

The errors with kubectl you described above are related to the fact that etcd was not up (no bootstrap), so no Kubernetes.

The SANs are managed automatically unless you're using an endpoint which is not in the machine config.

E.g. if you set the cluster control plane endpoint to https://name:6443/, name will be in the certificate SANs.

But if you are using another DNS name as well, it should be explicitly added to the machine config.
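Note that there are two separate SAN lists in the machine config: machine.certSANs covers the Talos API certificate (what talosctl connects to, port 50000), while cluster.apiServer.certSANs covers the kube-apiserver certificate (what kubectl connects to). A fragment of the rendered config, using the DNS name from this thread:

```yaml
machine:
  certSANs:                      # extra SANs for the Talos API (talosctl)
    - cp.talos.mimir-tech.org
cluster:
  apiServer:
    certSANs:                    # extra SANs for kube-apiserver (kubectl)
      - cp.talos.mimir-tech.org
```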

japtain-cack commented 1 year ago

I ended up getting this working. Here are my full notes on what I had to do.

  1. Create the docker based temp cluster

    talosctl cluster create \
    --kubernetes-version 1.26.2 \
    --talos-version v1.3.5 \
    --nameservers=10.100.1.1,10.100.50.100 \
    --name sidero-demo \
    -p 69:69/udp,8081:8081/tcp,51821:51821/udp \
    --workers 0 \
    --endpoint demo.talos.example.com \
    --skip-kubeconfig \
    --config-patch '[{"op": "add", "path": "/cluster/allowSchedulingOnMasters", "value": true}, {"op": "add", "path": "/cluster/network", "value": {"podSubnets": ["10.12.0.0/16"], "serviceSubnets": ["10.13.0.0/16"]}}]'
  2. Install dependencies and the sidero platform

    export SIDERO_CONTROLLER_MANAGER_HOST_NETWORK=true
    export SIDERO_CONTROLLER_MANAGER_API_ENDPOINT=demo.talos.example.com
    export SIDERO_CONTROLLER_MANAGER_SIDEROLINK_ENDPOINT=demo.talos.example.com
    export SIDERO_CONTROLLER_MANAGER_AUTO_BMC_SETUP=false
    
    clusterctl init -b talos -c talos -i sidero
  3. Get the sidero-demo kubeconfig, since we used --skip-kubeconfig during the cluster create above, which prevented your existing .kube/config from getting merged with the demo cluster's kubeconfig.

    talosctl kubeconfig ~/.kube/sidero-demo --nodes demo.talos.example.com
  4. Create the new cluster, the talos cluster in this case.

    export CONTROL_PLANE_SERVERCLASS=talos-masters
    export WORKER_SERVERCLASS=talos-workers
    export TALOS_VERSION=v1.3.5
    export KUBERNETES_VERSION=v1.26.2
    export CONTROL_PLANE_PORT=6443
    export CONTROL_PLANE_ENDPOINT=cp.talos.example.com
    
    clusterctl generate cluster talos -i sidero > talos.yaml

    Save that with your serverClasses in your kustomization/ dir. Edit talos.yaml and ensure the pod/service CIDRs don't overlap with your base network. Add any certSANs as necessary.

    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: TalosControlPlane
    metadata:
      name: talos-cp
      namespace: default
    spec:
      controlPlaneConfig:
        controlplane:
          generateType: controlplane
          talosVersion: v1.3.5
          configPatches:
          - op: add
            path: /machine/network
            value:
              interfaces:
              - interface: eth0
                dhcp: true
                vip:
                  ip: 10.100.50.52
          - op: add
            path: /machine/certSANs
            value:
              - "*.talos.example.com"
          - op: add
            path: /cluster/apiServer/certSANs
            value:
              - "*.talos.example.com"
      infrastructureTemplate:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: MetalMachineTemplate
        name: talos-cp
      replicas: 1
      version: v1.26.2
    kubectl apply -k kustomize
  5. Accept the Servers to be added to the new control plane

    kubectl get servers
    NAME                                   HOSTNAME                         ACCEPTED   CORDONED   ALLOCATED   CLEAN   POWER   AGE
    b80c6841-fb72-4fa2-9fec-7bc8eb7faab3   master-01.talos.example.com   true                  true        false   on      45m
    kubectl edit server b80c6841-fb72-4fa2-9fec-7bc8eb7faab3

    Change accepted: false to accepted: true.
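The same acceptance can also be expressed declaratively or non-interactively; a sketch, using the server UUID from the listing above (check the Server apiVersion against your Sidero release):

```yaml
# Declarative equivalent of the interactive edit; apiVersion may
# differ between Sidero releases.
apiVersion: metal.sidero.dev/v1alpha2
kind: Server
metadata:
  name: b80c6841-fb72-4fa2-9fec-7bc8eb7faab3
spec:
  accepted: true
# Or non-interactively:
#   kubectl patch server b80c6841-fb72-4fa2-9fec-7bc8eb7faab3 \
#     --type merge -p '{"spec":{"accepted":true}}'
```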

  6. Get the new Talos cluster's talosconfig. Kind of like a kubeconfig, but for talosctl.

    kubectl get talosconfig -n sidero-system -l cluster.x-k8s.io/cluster-name=talos -o yaml -o jsonpath='{.items[0].status.talosConfig}' > ~/.talos/talos
  7. Bootstrap the Talos cluster. This is missing from the Sidero documentation. Your new control plane node will be stuck in an etcd error loop until you do this.

    talosctl --talosconfig ~/.talos/talos bootstrap -e master-01.talos.example.com -n master-01.talos.example.com
  8. Get the new Talos cluster's kubeconfig.

    talosctl --talosconfig ~/.talos/talos -e cp.talos.example.com -n master-01.talos.example.com kubeconfig ~/.kube/talos

    You should now be able to run kubectl against the new Talos cluster, e.g. kubectl cluster-info.

  9. Pivot your sidero platform from the demo cluster to the new Talos cluster.

    export SIDERO_CONTROLLER_MANAGER_API_ENDPOINT=sidero.talos.example.com
    export SIDERO_CONTROLLER_MANAGER_SIDEROLINK_ENDPOINT=sidero.talos.example.com
    export SIDERO_CONTROLLER_MANAGER_AUTO_BMC_SETUP=false
    
    clusterctl init -b talos -c talos -i sidero

    Remove the taint from your control-plane node(s) for now, since there are no workers yet.

    kubectl taint node master-01 node-role.kubernetes.io/control-plane:NoSchedule-

    Execute the move. Perform a dry run first to validate before actually performing the move.

    clusterctl move -n sidero-system --kubeconfig ~/.kube/sidero-demo --to-kubeconfig ~/.kube/talos --dry-run -v 1
japtain-cack commented 1 year ago

The default cidrs conflicting with my base networks and the bootstrap for the first node is what I was missing. Logs appear to be clean of warnings/errors. Thanks for the help.

japtain-cack commented 1 year ago

One last question. I'm not sure what to expect once I pivot sidero to my management cluster. On the new management cluster, I don't see any Servers, TalosControlPlanes, Machines, or ServerClasses; it's all gone. How can I add nodes to this new management cluster without those?

japtain-cack commented 1 year ago

I rebuilt everything from scratch again, and this time the pivot apparently worked. I now see my current cluster and all its related resources.