Closed Y0ngg4n closed 2 years ago
I rebooted my whole Talos cluster and now it is broken; I can't even run `talosctl health`. The only thing that still works is `talosctl dmesg`.
These are the logs from my first server node:
```
10.1.1.2: kern: info: [2022-01-25T10:19:41.226131566Z]: netconsole: network logging started
10.1.1.2: kern: info: [2022-01-25T10:19:41.226625566Z]: rdma_rxe: loaded
10.1.1.2: kern: notice: [2022-01-25T10:19:41.226925566Z]: cfg80211: Loading compiled-in X.509 certificates for regulatory database
10.1.1.2: kern: notice: [2022-01-25T10:19:41.227856566Z]: cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
10.1.1.2: kern: info: [2022-01-25T10:19:41.228262566Z]: ALSA device list:
10.1.1.2: kern: info: [2022-01-25T10:19:41.228481566Z]: No soundcards found.
10.1.1.2: kern: warning: [2022-01-25T10:19:41.228768566Z]: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 SUBSYSTEM=platform DEVICE=+platform:regulatory.0
10.1.1.2: kern: info: [2022-01-25T10:19:41.229295566Z]: cfg80211: failed to load regulatory.db
10.1.1.2: kern: debug: [2022-01-25T10:19:41.301831566Z]: ata2.01: NODEV after polling detection
10.1.1.2: kern: info: [2022-01-25T10:19:41.302034566Z]: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
10.1.1.2: kern: notice: [2022-01-25T10:19:41.303195566Z]: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 SUBSYSTEM=scsi DEVICE=+scsi:2:0:0:0
10.1.1.2: kern: info: [2022-01-25T10:19:41.304462566Z]: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray SUBSYSTEM=scsi DEVICE=+scsi:2:0:0:0
10.1.1.2: kern: info: [2022-01-25T10:19:41.304889566Z]: cdrom: Uniform CD-ROM driver Revision: 3.20
10.1.1.2: kern: debug: [2022-01-25T10:19:41.334030566Z]: sr 2:0:0:0: Attached scsi CD-ROM sr0 SUBSYSTEM=scsi DEVICE=+scsi:2:0:0:0
10.1.1.2: kern: notice: [2022-01-25T10:19:41.334185566Z]: sr 2:0:0:0: Attached scsi generic sg2 type 5 SUBSYSTEM=scsi DEVICE=+scsi:2:0:0:0
10.1.1.2: kern: info: [2022-01-25T10:19:41.334947566Z]: Freeing unused kernel image (initmem) memory: 2172K
10.1.1.2: kern: info: [2022-01-25T10:19:41.349539566Z]: Write protecting the kernel read-only data: 38912k
10.1.1.2: kern: info: [2022-01-25T10:19:41.350188566Z]: Freeing unused kernel image (text/rodata gap) memory: 2020K
10.1.1.2: kern: info: [2022-01-25T10:19:41.350755566Z]: Freeing unused kernel image (rodata/data gap) memory: 1344K
10.1.1.2: kern: info: [2022-01-25T10:19:41.352164566Z]: x86/mm: Checked W+X mappings: passed, no W+X pages found.
10.1.1.2: kern: info: [2022-01-25T10:19:41.352562566Z]: x86/mm: Checking user space page tables
10.1.1.2: kern: info: [2022-01-25T10:19:41.352955566Z]: x86/mm: Checked W+X mappings: passed, no W+X pages found.
10.1.1.2: kern: info: [2022-01-25T10:19:41.353339566Z]: Run /init as init process
10.1.1.2: kern: debug: [2022-01-25T10:19:41.353597566Z]: with arguments:
10.1.1.2: kern: debug: [2022-01-25T10:19:41.353597566Z]: /init
10.1.1.2: kern: debug: [2022-01-25T10:19:41.353597566Z]: with environment:
10.1.1.2: kern: debug: [2022-01-25T10:19:41.353598566Z]: HOME=/
10.1.1.2: kern: debug: [2022-01-25T10:19:41.353598566Z]: TERM=linux
10.1.1.2: kern: debug: [2022-01-25T10:19:41.353598566Z]: BOOT_IMAGE=/boot/vmlinuz
10.1.1.2: kern: debug: [2022-01-25T10:19:41.353599566Z]: pti=on
10.1.1.2: kern: info: [2022-01-25T10:19:41.785981566Z]: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
10.1.1.2: kern: notice: [2022-01-25T10:19:43.917518566Z]: random: crng init done
10.1.1.2: user: warning: [2022-01-25T10:19:43.919233566Z]: [talos] [initramfs] booting Talos v0.14.0
10.1.1.2: user: warning: [2022-01-25T10:19:43.919584566Z]: [talos] [initramfs] mounting the rootfs
10.1.1.2: kern: info: [2022-01-25T10:19:43.919955566Z]: loop0: detected capacity change from 0 to 100152
10.1.1.2: user: warning: [2022-01-25T10:19:43.949969566Z]: [talos] [initramfs] entering the rootfs
10.1.1.2: user: warning: [2022-01-25T10:19:43.950312566Z]: [talos] [initramfs] moving mounts to the new rootfs
10.1.1.2: user: warning: [2022-01-25T10:19:43.951313566Z]: [talos] [initramfs] changing working directory into /root
10.1.1.2: user: warning: [2022-01-25T10:19:43.951733566Z]: [talos] [initramfs] moving /root to /
10.1.1.2: user: warning: [2022-01-25T10:19:43.952063566Z]: [talos] [initramfs] changing root directory
10.1.1.2: user: warning: [2022-01-25T10:19:43.952403566Z]: [talos] [initramfs] cleaning up initramfs
10.1.1.2: user: warning: [2022-01-25T10:19:43.952871566Z]: [talos] [initramfs] executing /sbin/init
10.1.1.2: user: warning: [2022-01-25T10:19:48.343313566Z]: [talos] task setupLogger (1/1): done, 120.58µs
10.1.1.2: user: warning: [2022-01-25T10:19:48.343809566Z]: [talos] phase logger (1/7): done, 646.91µs
10.1.1.2: user: warning: [2022-01-25T10:19:48.344211566Z]: [talos] phase systemRequirements (2/7): 7 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:48.344732566Z]: [talos] task dropCapabilities (7/7): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.350342566Z]: [talos] task enforceKSPPRequirements (1/7): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.355636566Z]: [talos] task setupSystemDirectory (2/7): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.356049566Z]: [talos] task setupSystemDirectory (2/7): done, 5.542661ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.356445566Z]: [talos] task mountBPFFS (3/7): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.358832566Z]: [talos] task mountCgroups (4/7): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.359216566Z]: [talos] task mountCgroups (4/7): done, 8.426052ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.360881566Z]: [talos] task mountPseudoFilesystems (5/7): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.361271566Z]: [talos] task setRLimit (6/7): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.361594566Z]: [talos] task dropCapabilities (7/7): done, 10.889283ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.365324566Z]: [talos] task mountPseudoFilesystems (5/7): done, 14.529444ms
10.1.1.2: kern: info: [2022-01-25T10:19:48.371381566Z]: 8021q: adding VLAN 0 to HW filter on device eth0
10.1.1.2: user: warning: [2022-01-25T10:19:48.372087566Z]: [talos] task mountBPFFS (3/7): done, 15.219064ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.372459566Z]: [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["1.1.1.1", "8.8.8.8"]}
10.1.1.2: user: warning: [2022-01-25T10:19:48.373353566Z]: [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["1.1.1.1", "8.8.8.8"]}
10.1.1.2: user: warning: [2022-01-25T10:19:48.375520566Z]: [talos] task setRLimit (6/7): done, 24.718537ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.376105566Z]: [talos] task enforceKSPPRequirements (1/7): done, 18.746795ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.376751566Z]: [talos] phase systemRequirements (2/7): done, 32.538899ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.377316566Z]: [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on [::1]:53: read udp [::1]:33593->[::1]:53: read: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:19:48.380295566Z]: [talos] phase integrity (3/7): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:48.380821566Z]: [talos] task writeIMAPolicy (1/1): starting
10.1.1.2: kern: notice: [2022-01-25T10:19:48.381380566Z]: audit: type=1807 audit(1643105988.741:2): action=dont_measure fsmagic=0x9fa0 res=1
10.1.1.2: kern: info: [2022-01-25T10:19:48.381479566Z]: ima: policy update completed
10.1.1.2: kern: notice: [2022-01-25T10:19:48.381982566Z]: audit: type=1807 audit(1643105988.741:3): action=dont_measure fsmagic=0x62656572 res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.381983566Z]: audit: type=1807 audit(1643105988.741:4): action=dont_measure fsmagic=0x64626720 res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.381984566Z]: audit: type=1807 audit(1643105988.741:5): action=dont_measure fsmagic=0x1021994 res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.381984566Z]: audit: type=1807 audit(1643105988.741:6): action=dont_measure fsmagic=0x1cd1 res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.381985566Z]: audit: type=1807 audit(1643105988.741:7): action=dont_measure fsmagic=0x42494e4d res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.381985566Z]: audit: type=1807 audit(1643105988.741:8): action=dont_measure fsmagic=0x73636673 res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.385967566Z]: audit: type=1807 audit(1643105988.741:9): action=dont_measure fsmagic=0xf97cff8c res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.386529566Z]: audit: type=1807 audit(1643105988.741:10): action=dont_measure fsmagic=0x43415d53 res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.387104566Z]: audit: type=1807 audit(1643105988.741:11): action=dont_measure fsmagic=0x27e0eb res=1
10.1.1.2: user: warning: [2022-01-25T10:19:48.389725566Z]: [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["10.1.0.2"]}
10.1.1.2: user: warning: [2022-01-25T10:19:48.390618566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "network.RouteSpecController", "error": "1 error occurred:\n\t* error adding route: netlink receive: network is unreachable, message {Family:2 DstLength:0 SrcLength:0 Tos:0 Table:0 Protocol:3 Scope:0 Type:1 Flags:0 Attributes:{Dst: Src:10.1.1.2 Gateway:10.1.0.2 OutIface:4 Priority:1024 Table:254 Mark:0 Expires: Metrics: Multipath:[]}}\n\n"}
10.1.1.2: user: warning: [2022-01-25T10:19:48.393657566Z]: [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "talos-server-01", "domainname": "localdomain"}
10.1.1.2: user: warning: [2022-01-25T10:19:48.394990566Z]: [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "talos-server-01", "domainname": "localdomain"}
10.1.1.2: user: warning: [2022-01-25T10:19:48.396266566Z]: [talos] assigned address {"component": "controller-runtime", "controller": "network.AddressSpecController", "address": "10.1.1.2/16", "link": "eth0"}
10.1.1.2: user: warning: [2022-01-25T10:19:48.397522566Z]: [talos] task writeIMAPolicy (1/1): done, 16.713285ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.398011566Z]: [talos] phase integrity (3/7): done, 17.718755ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.398382566Z]: [talos] phase etc (4/7): 2 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:48.398713566Z]: [talos] task createOSReleaseFile (2/2): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.399079566Z]: [talos] task CreateSystemCgroups (1/2): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.399642566Z]: [talos] task createOSReleaseFile (2/2): done, 447.35µs
10.1.1.2: user: warning: [2022-01-25T10:19:48.400047566Z]: [talos] task CreateSystemCgroups (1/2): done, 1.17619ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.400441566Z]: [talos] phase etc (4/7): done, 2.058951ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.400836566Z]: [talos] phase mountSystem (5/7): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:48.401210566Z]: [talos] task mountStatePartition (1/1): starting
10.1.1.2: kern: notice: [2022-01-25T10:19:48.535005566Z]: XFS (sda5): Mounting V5 Filesystem
10.1.1.2: user: warning: [2022-01-25T10:19:48.780302566Z]: [talos] created route {"component": "controller-runtime", "controller": "network.RouteSpecController", "destination": "default", "gateway": "10.1.0.2", "table": "main", "link": "eth0"}
10.1.1.2: user: warning: [2022-01-25T10:19:49.380786566Z]: [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on [::1]:53: read udp [::1]:35983->[::1]:53: read: connection refused"}
10.1.1.2: kern: notice: [2022-01-25T10:19:49.439936566Z]: XFS (sda5): Starting recovery (logdev: internal)
10.1.1.2: kern: notice: [2022-01-25T10:19:49.702449566Z]: XFS (sda5): Ending recovery (logdev: internal)
10.1.1.2: kern: warning: [2022-01-25T10:19:49.820829566Z]: xfs filesystem being mounted at /system/state supports timestamps until 2038 (0x7fffffff)
10.1.1.2: user: warning: [2022-01-25T10:19:49.821541566Z]: [talos] task mountStatePartition (1/1): done, 1.420328234s
10.1.1.2: user: warning: [2022-01-25T10:19:49.821985566Z]: [talos] phase mountSystem (5/7): done, 1.421149624s
10.1.1.2: user: warning: [2022-01-25T10:19:49.822347566Z]: [talos] phase config (6/7): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:49.822678566Z]: [talos] task loadConfig (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:49.891357566Z]: [talos] node identity established {"component": "controller-runtime", "controller": "cluster.NodeIdentityController", "node_id": "F8a3IfkpgzAalPsNeX1K7WEbN1CIMZm3kQprPFdoJfMB"}
10.1.1.2: user: warning: [2022-01-25T10:19:49.904900566Z]: [talos] task loadConfig (1/1): persistence is enabled, using existing config on disk
10.1.1.2: user: warning: [2022-01-25T10:19:49.921539566Z]: [talos] task loadConfig (1/1): done, 98.866337ms
10.1.1.2: user: warning: [2022-01-25T10:19:49.921992566Z]: [talos] phase config (6/7): done, 99.642948ms
10.1.1.2: user: warning: [2022-01-25T10:19:49.922403566Z]: [talos] phase unmountSystem (7/7): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:49.922819566Z]: [talos] task unmountStatePartition (1/1): starting
10.1.1.2: kern: notice: [2022-01-25T10:19:49.923372566Z]: XFS (sda5): Unmounting Filesystem
10.1.1.2: user: warning: [2022-01-25T10:19:50.308113566Z]: [talos] hello failed {"component": "controller-runtime", "controller": "cluster.DiscoveryServiceController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup discovery.talos.dev on [::1]:53: read udp [::1]:44232->[::1]:53: read: connection refused\"", "endpoint": "discovery.talos.dev:443"}
10.1.1.2: user: warning: [2022-01-25T10:19:50.382329566Z]: [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on [::1]:53: read udp [::1]:58172->[::1]:53: read: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:19:50.555798566Z]: [talos] task unmountStatePartition (1/1): done, 632.975816ms
10.1.1.2: user: warning: [2022-01-25T10:19:50.556428566Z]: [talos] phase unmountSystem (7/7): done, 634.025626ms
10.1.1.2: user: warning: [2022-01-25T10:19:50.556886566Z]: [talos] initialize sequence: done: 2.213735565s
10.1.1.2: user: warning: [2022-01-25T10:19:50.557247566Z]: [talos] install sequence: 0 phase(s)
10.1.1.2: user: warning: [2022-01-25T10:19:50.557568566Z]: [talos] install sequence: done: 321.13µs
10.1.1.2: user: warning: [2022-01-25T10:19:50.557914566Z]: [talos] boot sequence: 19 phase(s)
10.1.1.2: user: warning: [2022-01-25T10:19:50.558211566Z]: [talos] phase saveStateEncryptionConfig (1/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:50.558607566Z]: [talos] service[machined](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:19:50.559013566Z]: [talos] service[machined](Preparing): Creating service runner
10.1.1.2: user: warning: [2022-01-25T10:19:50.559445566Z]: [talos] service[machined](Running): Service started as goroutine
10.1.1.2: user: warning: [2022-01-25T10:19:50.559875566Z]: [talos] task SaveStateEncryptionConfig (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:50.560274566Z]: [talos] task SaveStateEncryptionConfig (1/1): done, 1.66384ms
10.1.1.2: user: warning: [2022-01-25T10:19:50.560707566Z]: [talos] phase saveStateEncryptionConfig (1/19): done, 2.495381ms
10.1.1.2: user: warning: [2022-01-25T10:19:50.561118566Z]: [talos] phase mountState (2/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:50.561458566Z]: [talos] task mountStatePartition (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:51.337066566Z]: [talos] hello failed {"component": "controller-runtime", "controller": "cluster.DiscoveryServiceController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup discovery.talos.dev on [::1]:53: read udp [::1]:34925->[::1]:53: read: connection refused\"", "endpoint": "discovery.talos.dev:443"}
10.1.1.2: user: warning: [2022-01-25T10:19:51.383969566Z]: [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on [::1]:53: read udp [::1]:55329->[::1]:53: read: connection refused"}
10.1.1.2: kern: notice: [2022-01-25T10:19:52.216962566Z]: XFS (sda5): Mounting V5 Filesystem
10.1.1.2: user: warning: [2022-01-25T10:19:52.386075566Z]: [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on [::1]:53: read udp [::1]:57904->[::1]:53: read: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:19:52.695956566Z]: [talos] hello failed {"component": "controller-runtime", "controller": "cluster.DiscoveryServiceController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup discovery.talos.dev on [::1]:53: read udp [::1]:34925->[::1]:53: read: connection refused\"", "endpoint": "discovery.talos.dev:443"}
10.1.1.2: kern: info: [2022-01-25T10:19:53.545146566Z]: XFS (sda5): Ending clean mount
10.1.1.2: kern: warning: [2022-01-25T10:19:53.560640566Z]: xfs filesystem being mounted at /system/state supports timestamps until 2038 (0x7fffffff)
10.1.1.2: user: warning: [2022-01-25T10:19:53.561320566Z]: [talos] task mountStatePartition (1/1): done, 2.999861713s
10.1.1.2: user: warning: [2022-01-25T10:19:53.561812566Z]: [talos] phase mountState (2/19): done, 3.000693373s
10.1.1.2: user: warning: [2022-01-25T10:19:53.562176566Z]: [talos] phase validateConfig (3/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:53.562541566Z]: [talos] task validateConfig (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:53.562905566Z]: [talos] task validateConfig (1/1): done, 376.69µs
10.1.1.2: user: warning: [2022-01-25T10:19:53.563268566Z]: [talos] phase validateConfig (3/19): done, 1.093611ms
10.1.1.2: user: warning: [2022-01-25T10:19:53.563638566Z]: [talos] phase saveConfig (4/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:53.563974566Z]: [talos] task saveConfig (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:53.570887566Z]: [talos] task saveConfig (1/1): done, 6.909622ms
10.1.1.2: user: warning: [2022-01-25T10:19:53.571294566Z]: [talos] phase saveConfig (4/19): done, 7.654922ms
10.1.1.2: user: warning: [2022-01-25T10:19:53.571654566Z]: [talos] phase env (5/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:53.571975566Z]: [talos] task setUserEnvVars (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:53.572312566Z]: [talos] task setUserEnvVars (1/1): done, 346.45µs
10.1.1.2: user: warning: [2022-01-25T10:19:53.572680566Z]: [talos] phase env (5/19): done, 1.02634ms
10.1.1.2: user: warning: [2022-01-25T10:19:53.572996566Z]: [talos] phase containerd (6/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:53.573331566Z]: [talos] task startContainerd (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:53.573700566Z]: [talos] service[containerd](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:19:53.574096566Z]: [talos] service[containerd](Preparing): Creating service runner
10.1.1.2: user: warning: [2022-01-25T10:19:53.632127566Z]: [talos] adjusting time (slew) by 22.96771ms via 162.159.200.1, state TIME_OK, status STA_PLL | STA_NANO {"component": "controller-runtime", "controller": "time.SyncController"}
10.1.1.2: user: warning: [2022-01-25T10:19:54.505931566Z]: [talos] service[containerd](Running): Process Process(["/bin/containerd" "--address" "/system/run/containerd/containerd.sock" "--state" "/system/run/containerd" "--root" "/system/var/lib/containerd"]) started with PID 664
10.1.1.2: user: warning: [2022-01-25T10:19:54.574536566Z]: [talos] service[containerd](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:19:54.575027566Z]: [talos] task startContainerd (1/1): done, 1.003044355s
10.1.1.2: user: warning: [2022-01-25T10:19:54.575415566Z]: [talos] phase containerd (6/19): done, 1.003764852s
10.1.1.2: user: warning: [2022-01-25T10:19:54.575781566Z]: [talos] phase ephemeral (7/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:54.576124566Z]: [talos] task mountEphemeralPartition (1/1): starting
10.1.1.2: kern: notice: [2022-01-25T10:19:54.810030566Z]: XFS (sda6): Mounting V5 Filesystem
10.1.1.2: kern: notice: [2022-01-25T10:19:56.099828566Z]: XFS (sda6): Starting recovery (logdev: internal)
10.1.1.2: kern: notice: [2022-01-25T10:19:57.184233566Z]: XFS (sda6): Ending recovery (logdev: internal)
10.1.1.2: kern: warning: [2022-01-25T10:19:57.755585566Z]: xfs filesystem being mounted at /var supports timestamps until 2038 (0x7fffffff)
10.1.1.2: user: warning: [2022-01-25T10:19:57.769863566Z]: [talos] task mountEphemeralPartition (1/1): done, 3.197767572s
10.1.1.2: user: warning: [2022-01-25T10:19:57.770298566Z]: [talos] phase ephemeral (7/19): done, 3.19854255s
10.1.1.2: user: warning: [2022-01-25T10:19:57.770654566Z]: [talos] phase verifyInstall (8/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:57.771019566Z]: [talos] task verifyInstallation (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:57.771384566Z]: [talos] task verifyInstallation (1/1): done, 374.525µs
10.1.1.2: user: warning: [2022-01-25T10:19:57.771766566Z]: [talos] phase verifyInstall (8/19): done, 1.114205ms
10.1.1.2: user: warning: [2022-01-25T10:19:57.772176566Z]: [talos] phase var (9/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:57.772495566Z]: [talos] task setupVarDirectory (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:58.466419566Z]: [talos] task setupVarDirectory (1/1): done, 694.689434ms
10.1.1.2: user: warning: [2022-01-25T10:19:58.466835566Z]: [talos] phase var (9/19): done, 695.428202ms
10.1.1.2: user: warning: [2022-01-25T10:19:58.467170566Z]: [talos] phase overlay (10/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:58.467515566Z]: [talos] task mountOverlayFilesystems (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:58.916696566Z]: [talos] task mountOverlayFilesystems (1/1): done, 449.661764ms
10.1.1.2: user: warning: [2022-01-25T10:19:58.917142566Z]: [talos] phase overlay (10/19): done, 450.451337ms
10.1.1.2: user: warning: [2022-01-25T10:19:58.917500566Z]: [talos] phase udevSetup (11/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:58.917844566Z]: [talos] task writeUdevRules (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:58.918193566Z]: [talos] task writeUdevRules (1/1): done, 361.275µs
10.1.1.2: user: warning: [2022-01-25T10:19:58.918580566Z]: [talos] phase udevSetup (11/19): done, 1.082414ms
10.1.1.2: user: warning: [2022-01-25T10:19:58.918944566Z]: [talos] phase udevd (12/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:58.919262566Z]: [talos] task startUdevd (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:58.919592566Z]: [talos] service[udevd](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:19:59.121342566Z]: [talos] service[udevd](Preparing): Creating service runner
10.1.1.2: daemon: info: [2022-01-25T10:19:59.129229566Z]: udevd[684]: starting version 3.2.10
10.1.1.2: user: warning: [2022-01-25T10:19:59.129814566Z]: [talos] service[udevd](Running): Process Process(["/sbin/udevd" "--resolve-names=never"]) started with PID 684
10.1.1.2: daemon: info: [2022-01-25T10:19:59.140003566Z]: udevd[684]: starting eudev-3.2.10
10.1.1.2: user: warning: [2022-01-25T10:19:59.144172566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:19:59.151084566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:19:59.890780566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:00.059311566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:00.511734566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:01.192809566Z]: [talos] service[udevd](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:20:01.193274566Z]: [talos] task startUdevd (1/1): done, 2.276244216s
10.1.1.2: user: warning: [2022-01-25T10:20:01.193642566Z]: [talos] phase udevd (12/19): done, 2.27692855s
10.1.1.2: user: warning: [2022-01-25T10:20:01.193983566Z]: [talos] phase userDisks (13/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:20:01.194334566Z]: [talos] task mountUserDisks (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:20:01.212638566Z]: [talos] task mountUserDisks (1/1): skipping setup of "/dev/sdb", found existing partitions
10.1.1.2: kern: notice: [2022-01-25T10:20:01.242360566Z]: XFS (sdb1): Mounting V5 Filesystem
10.1.1.2: kern: info: [2022-01-25T10:20:01.509464566Z]: XFS (sdb1): Ending clean mount
10.1.1.2: kern: warning: [2022-01-25T10:20:01.514186566Z]: xfs filesystem being mounted at /var/mnt/sdb supports timestamps until 2038 (0x7fffffff)
10.1.1.2: user: warning: [2022-01-25T10:20:01.514812566Z]: [talos] task mountUserDisks (1/1): done, 320.771294ms
10.1.1.2: user: warning: [2022-01-25T10:20:01.515246566Z]: [talos] phase userDisks (13/19): done, 321.5554ms
10.1.1.2: user: warning: [2022-01-25T10:20:01.515640566Z]: [talos] phase userSetup (14/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:20:01.516049566Z]: [talos] task writeUserFiles (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:20:01.546595566Z]: [talos] task writeUserFiles (1/1): done, 30.57646ms
10.1.1.2: user: warning: [2022-01-25T10:20:01.547127566Z]: [talos] phase userSetup (14/19): done, 31.515748ms
10.1.1.2: user: warning: [2022-01-25T10:20:01.547634566Z]: [talos] phase lvm (15/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:20:01.548086566Z]: [talos] task activateLogicalVolumes (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:20:01.716876566Z]: [talos] task activateLogicalVolumes (1/1): done, 168.942358ms
10.1.1.2: user: warning: [2022-01-25T10:20:01.717336566Z]: [talos] phase lvm (15/19): done, 169.854095ms
10.1.1.2: user: warning: [2022-01-25T10:20:01.717673566Z]: [talos] phase startEverything (16/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:20:01.718050566Z]: [talos] task startAllServices (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:20:01.718408566Z]: [talos] task startAllServices (1/1): waiting for 7 services
10.1.1.2: user: warning: [2022-01-25T10:20:01.718807566Z]: [talos] service[apid](Waiting): Waiting for service "containerd" to be "up", api certificates
10.1.1.2: user: warning: [2022-01-25T10:20:01.719407566Z]: [talos] service[etcd](Waiting): Waiting for service "cri" to be "up", time sync, network
10.1.1.2: user: warning: [2022-01-25T10:20:01.719989566Z]: [talos] service[cri](Waiting): Waiting for network
10.1.1.2: user: warning: [2022-01-25T10:20:01.720885566Z]: [talos] service[cri](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:20:01.721268566Z]: [talos] service[trustd](Waiting): Waiting for service "containerd" to be "up", time sync, network
10.1.1.2: user: warning: [2022-01-25T10:20:01.721855566Z]: [talos] service[apid](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:20:01.722219566Z]: [talos] service[cri](Preparing): Creating service runner
10.1.1.2: user: warning: [2022-01-25T10:20:01.722612566Z]: [talos] service[trustd](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:20:01.723029566Z]: [talos] service[trustd](Preparing): Creating service runner
10.1.1.2: user: warning: [2022-01-25T10:20:01.725309566Z]: [talos] service[apid](Preparing): Creating service runner
10.1.1.2: user: warning: [2022-01-25T10:20:01.725964566Z]: [talos] service[cri](Running): Process Process(["/bin/containerd" "--address" "/run/containerd/containerd.sock" "--config" "/etc/cri/containerd.toml"]) started with PID 1121
10.1.1.2: user: warning: [2022-01-25T10:20:01.785689566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:02.112296566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:02.719245566Z]: [talos] service[etcd](Waiting): Waiting for service "cri" to be "up"
10.1.1.2: user: warning: [2022-01-25T10:20:02.901476566Z]: [talos] service[apid](Running): Started task apid (PID 1192) for container apid
10.1.1.2: user: warning: [2022-01-25T10:20:02.903341566Z]: [talos] service[trustd](Running): Started task trustd (PID 1193) for container trustd
10.1.1.2: user: warning: [2022-01-25T10:20:03.223349566Z]: [talos] service[cri](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:20:03.381397566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:05.850058566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:06.580513566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:07.720379566Z]: [talos] service[cri](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:20:07.720873566Z]: [talos] service[etcd](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:20:07.721891566Z]: [talos] service[kubelet](Waiting): Waiting for service "cri" to be "up", time sync, network
10.1.1.2: user: warning: [2022-01-25T10:20:07.722514566Z]: [talos] service[kubelet](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:20:07.734080566Z]: [talos] service[apid](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:20:07.735672566Z]: [talos] service[trustd](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:20:10.607320566Z]: [talos] service[kubelet](Preparing): Creating service runner
10.1.1.2: user: warning: [2022-01-25T10:20:10.903706566Z]: [talos] service[etcd](Preparing): Creating service runner
10.1.1.2: user: warning: [2022-01-25T10:20:11.089496566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:14.685221566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\":
```
Get \x5c"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:20:18.888629566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \x5c"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:20:18.984498566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \x5c"talos-server-01\x5c": Get \x5c"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:20:28.714982566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \x5c"talos-server-01\x5c": Get \x5c"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:20:29.332589566Z]: [talos] service[etcd](Running): Started task etcd (PID 1267) for container etcd 10.1.1.2: user: warning: [2022-01-25T10:20:33.348735566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \x5c"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:20:36.901840566Z]: [talos] service[etcd](Running): Health check failed: error building etcd client: context deadline exceeded 10.1.1.2: user: warning: 
[2022-01-25T10:20:39.262402566Z]: [talos] service[kubelet](Running): Started task kubelet (PID 1309) for container kubelet 10.1.1.2: user: warning: [2022-01-25T10:20:48.338854566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: Get\x5c"https://127.0.0.1:10250/pods/?timeout=30s\x5c": dial tcp 127.0.0.1:10250: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:20:52.111003566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \x5c"talos-server-01\x5c": Get \x5c"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:21:04.011224566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: Get\x5c"https://127.0.0.1:10250/pods/?timeout=30s\x5c": dial tcp 127.0.0.1:10250: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:21:07.577131566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \x5c"talos-server-01\x5c": Get \x5c"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:21:19.810466566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: Get\x5c"https://127.0.0.1:10250/pods/?timeout=30s\x5c": dial tcp 127.0.0.1:10250: connect: connection refused"} 10.1.1.2: user: warning: 
[2022-01-25T10:21:20.611041566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \x5c"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:21:35.483717566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: Get\x5c"https://127.0.0.1:10250/pods/?timeout=30s\x5c": dial tcp 127.0.0.1:10250: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:21:47.595484566Z]: [talos] service[kubelet](Running): Health check successful 10.1.1.2: user: warning: [2022-01-25T10:21:51.813290566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\x5c"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\x5c") has prevented the request from succeeding"} 10.1.1.2: user: warning: [2022-01-25T10:21:53.879826566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \x5c"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:22:08.635309566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\x5c"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\x5c") has prevented the request from succeeding"} 10.1.1.2: user: 
warning: [2022-01-25T10:22:11.475805566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \x5c"talos-server-01\x5c": Get \x5c"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:22:27.433074566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \x5c"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:22:28.848776566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\x5c"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\x5c") has prevented the request from succeeding"} 10.1.1.2: user: warning: [2022-01-25T10:22:52.114245566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\x5c"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\x5c") has prevented the request from succeeding"} 10.1.1.2: user: warning: [2022-01-25T10:23:05.518161566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \x5c"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:23:18.825525566Z]: [talos] controller 
failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\x5c"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\x5c") has prevented the request from succeeding"} 10.1.1.2: user: warning: [2022-01-25T10:23:19.775689566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \x5c"talos-server-01\x5c": Get \x5c"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:23:49.107280566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\x5c"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\x5c") has prevented the request from succeeding"} 10.1.1.2: user: warning: [2022-01-25T10:23:50.667765566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \x5c"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:24:26.849208566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \x5c"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:24:28.882749566Z]: [talos] controller failed {"component": "controller-runtime", "controller": 
"k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\x5c"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\x5c") has prevented the request from succeeding"} 10.1.1.2: user: warning: [2022-01-25T10:24:32.958532566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \x5c"talos-server-01\x5c": Get \x5c"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: kern: warning: [2022-01-25T10:25:13.781090566Z]: init invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0 10.1.1.2: kern: warning: [2022-01-25T10:25:13.781665566Z]: CPU: 0 PID: 636 Comm: init Tainted: G T 5.15.6-talos #1 10.1.1.2: kern: warning: [2022-01-25T10:25:13.782113566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014 10.1.1.2: kern: warning: [2022-01-25T10:25:13.782733566Z]: Call Trace: 10.1.1.2: kern: warning: [2022-01-25T10:25:13.782925566Z]: 10.1.1.2: kern: warning: [2022-01-25T10:25:13.783096566Z]: dump_stack_lvl+0x46/0x5a 10.1.1.2: kern: warning: [2022-01-25T10:25:13.783335566Z]: dump_header+0x45/0x1ee 10.1.1.2: kern: warning: [2022-01-25T10:25:13.783562566Z]: oom_kill_process.cold+0xb/0x10 10.1.1.2: kern: warning: [2022-01-25T10:25:13.783820566Z]: out_of_memory+0x22f/0x4e0 10.1.1.2: kern: warning: [2022-01-25T10:25:13.784062566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90 10.1.1.2: kern: warning: [2022-01-25T10:25:13.784383566Z]: __alloc_pages+0x307/0x320 10.1.1.2: kern: warning: [2022-01-25T10:25:13.784652566Z]: pagecache_get_page+0x120/0x3c0 10.1.1.2: kern: warning: [2022-01-25T10:25:13.784910566Z]: filemap_fault+0x5d3/0x920 10.1.1.2: kern: warning: 
[2022-01-25T10:25:13.785155566Z]: ? filemap_map_pages+0x2b0/0x440 10.1.1.2: kern: warning: [2022-01-25T10:25:13.785417566Z]: __do_fault+0x2f/0x90 10.1.1.2: kern: warning: [2022-01-25T10:25:13.785637566Z]: __handle_mm_fault+0x664/0xc00 10.1.1.2: kern: warning: [2022-01-25T10:25:13.785891566Z]: handle_mm_fault+0xc7/0x290 10.1.1.2: kern: warning: [2022-01-25T10:25:13.786139566Z]: exc_page_fault+0x1de/0x780 10.1.1.2: kern: warning: [2022-01-25T10:25:13.786381566Z]: ? asm_exc_page_fault+0x8/0x30 10.1.1.2: kern: warning: [2022-01-25T10:25:13.786635566Z]: asm_exc_page_fault+0x1e/0x30 10.1.1.2: kern: warning: [2022-01-25T10:25:13.786885566Z]: RIP: 0033:0x466d5d 10.1.1.2: kern: warning: [2022-01-25T10:25:13.787108566Z]: Code: Unable to access opcode bytes at RIP 0x466d33. 10.1.1.2: kern: warning: [2022-01-25T10:25:13.787442566Z]: RSP: 002b:000000c000071f10 EFLAGS: 00010212 10.1.1.2: kern: warning: [2022-01-25T10:25:13.787744566Z]: RAX: 0000000000000000 RBX: 0000000000002710 RCX: 0000000000466d5d 10.1.1.2: kern: warning: [2022-01-25T10:25:13.788131566Z]: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000c000071f10 10.1.1.2: kern: warning: [2022-01-25T10:25:13.788517566Z]: RBP: 000000c000071f20 R08: 00000000000023a3 R09: 0000000000000000 10.1.1.2: kern: warning: [2022-01-25T10:25:13.788904566Z]: R10: 0000000000000000 R11: 0000000000000212 R12: 000000c000071950 10.1.1.2: kern: warning: [2022-01-25T10:25:13.789295566Z]: R13: 000000c00076e000 R14: 000000c0000004e0 R15: 00007f91dcfcd6dd 10.1.1.2: kern: warning: [2022-01-25T10:25:13.789682566Z]: 10.1.1.2: kern: warning: [2022-01-25T10:25:13.789860566Z]: Mem-Info: 10.1.1.2: kern: warning: [2022-01-25T10:25:13.790034566Z]: active_anon:12625 inactive_anon:470303 isolated_anon:0\x0a active_file:0 inactive_file:293 isolated_file:2\x0a unevictable:0 dirty:0 writeback:0\x0a slab_reclaimable:5057 slab_unreclaimable:4624\x0a mapped:35 shmem:12651 pagetables:1405 bounce:0\x0a kernel_misc_reclaimable:0\x0a free:3220 free_pcp:891 
free_cma:0 10.1.1.2: kern: warning: [2022-01-25T10:25:13.791966566Z]: Node 0 active_anon:50500kB inactive_anon:1881212kB active_file:0kB inactive_file:1172kB unevictable:0kB isolated(anon):0kB isolated(file):124kB mapped:140kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3568kB pagetables:5620kB all_unreclaimable? yes 10.1.1.2: kern: warning: [2022-01-25T10:25:13.793264566Z]: Node 0 DMA free:7592kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7744kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB 10.1.1.2: kern: warning: [2022-01-25T10:25:13.794556566Z]: lowmem_reserve[]: 0 1890 1890 1890 10.1.1.2: kern: warning: [2022-01-25T10:25:13.794872566Z]: Node 0 DMA32 free:5288kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50500kB inactive_anon:1873468kB active_file:476kB inactive_file:1172kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:3564kB local_pcp:3496kB free_cma:0kB 10.1.1.2: kern: warning: [2022-01-25T10:25:13.796310566Z]: lowmem_reserve[]: 0 0 0 0 10.1.1.2: kern: warning: [2022-01-25T10:25:13.796543566Z]: Node 0 DMA: 0*4kB 1*8kB (U) 0*16kB 1*32kB (U) 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7592kB 10.1.1.2: kern: warning: [2022-01-25T10:25:13.797261566Z]: Node 0 DMA32: 71*4kB (UME) 94*8kB (UE) 85*16kB (UE) 25*32kB (UME) 16*64kB (UME) 3*128kB (M) 1*256kB (M) 1*512kB (M) 0*1024kB 0*2048kB 0*4096kB = 5372kB 10.1.1.2: kern: info: [2022-01-25T10:25:13.798052566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB 10.1.1.2: kern: warning: [2022-01-25T10:25:13.798530566Z]: 12965 total pagecache pages 10.1.1.2: kern: warning: [2022-01-25T10:25:13.798769566Z]: 0 pages in swap cache 10.1.1.2: kern: warning: [2022-01-25T10:25:13.798993566Z]: Swap 
cache stats: add 0, delete 0, find 0/0 10.1.1.2: kern: warning: [2022-01-25T10:25:13.799295566Z]: Free swap = 0kB 10.1.1.2: kern: warning: [2022-01-25T10:25:13.799494566Z]: Total swap = 0kB 10.1.1.2: kern: warning: [2022-01-25T10:25:13.799707566Z]: 524156 pages RAM 10.1.1.2: kern: warning: [2022-01-25T10:25:13.799905566Z]: 0 pages HighMem/MovableOnly 10.1.1.2: kern: warning: [2022-01-25T10:25:13.800161566Z]: 21678 pages reserved 10.1.1.2: kern: info: [2022-01-25T10:25:13.800374566Z]: Tasks state (memory values in pages): 10.1.1.2: kern: info: [2022-01-25T10:25:13.800660566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name 10.1.1.2: kern: info: [2022-01-25T10:25:13.801153566Z]: [ 664] 0 664 188197 3989 208896 0 -999 containerd 10.1.1.2: kern: info: [2022-01-25T10:25:13.801642566Z]: [ 684] 0 684 368 38 45056 0 0 udevd 10.1.1.2: kern: info: [2022-01-25T10:25:13.802112566Z]: [ 1121] 0 1121 189038 5105 217088 0 -500 containerd 10.1.1.2: kern: info: [2022-01-25T10:25:13.802595566Z]: [ 1148] 0 1148 177754 450 106496 0 -998 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:25:13.803105566Z]: [ 1150] 0 1150 177690 498 106496 0 -998 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:25:13.803621566Z]: [ 1192] 50 1192 193598 3155 253952 0 -998 apid 10.1.1.2: kern: info: [2022-01-25T10:25:13.804099566Z]: [ 1193] 51 1193 193598 2717 249856 0 -998 trustd 10.1.1.2: kern: info: [2022-01-25T10:25:13.804569566Z]: [ 1246] 0 1246 177754 483 106496 0 -499 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:25:13.805084566Z]: [ 1267] 60 1267 3228276 439573 3747840 0 -998 etcd 10.1.1.2: kern: info: [2022-01-25T10:25:13.805545566Z]: [ 1289] 0 1289 177754 478 106496 0 -499 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:25:13.806051566Z]: [ 1309] 0 1309 447126 6055 483328 0 -999 kubelet 10.1.1.2: kern: info: [2022-01-25T10:25:13.806524566Z]: 
oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=init,mems_allowed=0,global_oom,task_memcg=/system/runtime,task=udevd,pid=684,uid=0 10.1.1.2: kern: err: [2022-01-25T10:25:13.807255566Z]: Out of memory: Killed process 684 (udevd) total-vm:1472kB, anon-rss:152kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:44kB oom_score_adj:0 10.1.1.2: kern: info: [2022-01-25T10:25:13.808013566Z]: oom_reaper: reaped process 684 (udevd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB 10.1.1.2: kern: warning: [2022-01-25T10:25:23.119311566Z]: init invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0 10.1.1.2: kern: warning: [2022-01-25T10:25:23.119917566Z]: CPU: 1 PID: 636 Comm: init Tainted: G T 5.15.6-talos #1 10.1.1.2: kern: warning: [2022-01-25T10:25:23.120396566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014 10.1.1.2: kern: warning: [2022-01-25T10:25:23.121069566Z]: Call Trace: 10.1.1.2: kern: warning: [2022-01-25T10:25:23.121264566Z]: 10.1.1.2: kern: warning: [2022-01-25T10:25:23.121440566Z]: dump_stack_lvl+0x46/0x5a 10.1.1.2: kern: warning: [2022-01-25T10:25:23.121696566Z]: dump_header+0x45/0x1ee 10.1.1.2: kern: warning: [2022-01-25T10:25:23.121941566Z]: oom_kill_process.cold+0xb/0x10 10.1.1.2: kern: warning: [2022-01-25T10:25:23.122215566Z]: out_of_memory+0x22f/0x4e0 10.1.1.2: kern: warning: [2022-01-25T10:25:23.122474566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90 10.1.1.2: kern: warning: [2022-01-25T10:25:23.122818566Z]: __alloc_pages+0x307/0x320 10.1.1.2: kern: warning: [2022-01-25T10:25:23.123082566Z]: pagecache_get_page+0x120/0x3c0 10.1.1.2: kern: warning: [2022-01-25T10:25:23.123358566Z]: filemap_fault+0x5d3/0x920 10.1.1.2: kern: warning: [2022-01-25T10:25:23.123637566Z]: ? 
filemap_map_pages+0x2b0/0x440 10.1.1.2: kern: warning: [2022-01-25T10:25:23.123923566Z]: __do_fault+0x2f/0x90 10.1.1.2: kern: warning: [2022-01-25T10:25:23.124160566Z]: __handle_mm_fault+0x664/0xc00 10.1.1.2: kern: warning: [2022-01-25T10:25:23.124430566Z]: handle_mm_fault+0xc7/0x290 10.1.1.2: kern: warning: [2022-01-25T10:25:23.124710566Z]: exc_page_fault+0x1de/0x780 10.1.1.2: kern: warning: [2022-01-25T10:25:23.124974566Z]: ? asm_exc_page_fault+0x8/0x30 10.1.1.2: kern: warning: [2022-01-25T10:25:23.126077566Z]: asm_exc_page_fault+0x1e/0x30 10.1.1.2: kern: warning: [2022-01-25T10:25:23.126348566Z]: RIP: 0033:0x466d5d 10.1.1.2: kern: warning: [2022-01-25T10:25:23.126575566Z]: Code: Unable to access opcode bytes at RIP 0x466d33. 10.1.1.2: kern: warning: [2022-01-25T10:25:23.126944566Z]: RSP: 002b:000000c000071f10 EFLAGS: 00010212 10.1.1.2: kern: warning: [2022-01-25T10:25:23.127268566Z]: RAX: 0000000000000000 RBX: 0000000000002710 RCX: 0000000000466d5d 10.1.1.2: kern: warning: [2022-01-25T10:25:23.127694566Z]: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000c000071f10 10.1.1.2: kern: warning: [2022-01-25T10:25:23.128111566Z]: RBP: 000000c000071f20 R08: 00000000000023a9 R09: 0000000004911458 10.1.1.2: kern: warning: [2022-01-25T10:25:23.128528566Z]: R10: 0000000000000000 R11: 0000000000000212 R12: 000000c000071950 10.1.1.2: kern: warning: [2022-01-25T10:25:23.128955566Z]: R13: 000000c00076e000 R14: 000000c0000004e0 R15: 00007f91dcfcd6dd 10.1.1.2: kern: warning: [2022-01-25T10:25:23.129373566Z]: 10.1.1.2: kern: warning: [2022-01-25T10:25:23.129564566Z]: Mem-Info: 10.1.1.2: kern: warning: [2022-01-25T10:25:23.129748566Z]: active_anon:12624 inactive_anon:470266 isolated_anon:0\x0a active_file:11 inactive_file:233 isolated_file:0\x0a unevictable:0 dirty:0 writeback:0\x0a slab_reclaimable:5056 slab_unreclaimable:4615\x0a mapped:22 shmem:12651 pagetables:1397 bounce:0\x0a kernel_misc_reclaimable:0\x0a free:3252 free_pcp:924 free_cma:0 10.1.1.2: kern: warning: 
[2022-01-25T10:25:23.131765566Z]: Node 0 active_anon:50496kB inactive_anon:1881064kB active_file:44kB inactive_file:808kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:88kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3568kB pagetables:5588kB all_unreclaimable? no 10.1.1.2: kern: warning: [2022-01-25T10:25:23.133155566Z]: Node 0 DMA free:7592kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7744kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB 10.1.1.2: kern: warning: [2022-01-25T10:25:23.134561566Z]: lowmem_reserve[]: 0 1890 1890 1890 10.1.1.2: kern: warning: [2022-01-25T10:25:23.134852566Z]: Node 0 DMA32 free:5416kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50496kB inactive_anon:1873320kB active_file:184kB inactive_file:1288kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:3848kB local_pcp:3576kB free_cma:0kB 10.1.1.2: kern: warning: [2022-01-25T10:25:23.136365566Z]: lowmem_reserve[]: 0 0 0 0 10.1.1.2: kern: warning: [2022-01-25T10:25:23.136624566Z]: Node 0 DMA: 0*4kB 1*8kB (U) 0*16kB 1*32kB (U) 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7592kB 10.1.1.2: kern: warning: [2022-01-25T10:25:23.137578566Z]: Node 0 DMA32: 67*4kB (UME) 108*8kB (UME) 75*16kB (UME) 29*32kB (UME) 20*64kB (UME) 3*128kB (M) 1*256kB (M) 1*512kB (M) 0*1024kB 0*2048kB 0*4096kB = 5692kB 10.1.1.2: kern: info: [2022-01-25T10:25:23.138740566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB 10.1.1.2: kern: warning: [2022-01-25T10:25:23.139397566Z]: 12887 total pagecache pages 10.1.1.2: kern: warning: [2022-01-25T10:25:23.139726566Z]: 0 pages in swap cache 10.1.1.2: kern: warning: [2022-01-25T10:25:23.140024566Z]: Swap cache stats: add 0, delete 0, find 0/0 
10.1.1.2: kern: warning: [2022-01-25T10:25:23.140446566Z]: Free swap = 0kB 10.1.1.2: kern: warning: [2022-01-25T10:25:23.140736566Z]: Total swap = 0kB 10.1.1.2: kern: warning: [2022-01-25T10:25:23.141014566Z]: 524156 pages RAM 10.1.1.2: kern: warning: [2022-01-25T10:25:23.141294566Z]: 0 pages HighMem/MovableOnly 10.1.1.2: kern: warning: [2022-01-25T10:25:23.141628566Z]: 21678 pages reserved 10.1.1.2: kern: info: [2022-01-25T10:25:23.141927566Z]: Tasks state (memory values in pages): 10.1.1.2: kern: info: [2022-01-25T10:25:23.142319566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name 10.1.1.2: kern: info: [2022-01-25T10:25:23.143025566Z]: [ 664] 0 664 188197 3989 208896 0 -999 containerd 10.1.1.2: kern: info: [2022-01-25T10:25:23.143710566Z]: [ 1121] 0 1121 189038 5105 217088 0 -500 containerd 10.1.1.2: kern: info: [2022-01-25T10:25:23.144396566Z]: [ 1148] 0 1148 177754 450 106496 0 -998 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:25:23.145116566Z]: [ 1150] 0 1150 177690 498 106496 0 -998 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:25:23.145829566Z]: [ 1192] 50 1192 193598 3155 253952 0 -998 apid 10.1.1.2: kern: info: [2022-01-25T10:25:23.146472566Z]: [ 1193] 51 1193 193598 2718 249856 0 -998 trustd 10.1.1.2: kern: info: [2022-01-25T10:25:23.147138566Z]: [ 1246] 0 1246 177754 483 106496 0 -499 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:25:23.147840566Z]: [ 1267] 60 1267 3228276 439573 3747840 0 -998 etcd 10.1.1.2: kern: info: [2022-01-25T10:25:23.148495566Z]: [ 1289] 0 1289 177754 478 106496 0 -499 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:25:23.149203566Z]: [ 1309] 0 1309 447126 6055 483328 0 -999 kubelet 10.1.1.2: kern: info: [2022-01-25T10:25:23.149871566Z]: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=init,mems_allowed=0,global_oom,task_memcg=/system/etcd,task=etcd,pid=1267,uid=60 10.1.1.2: kern: err: [2022-01-25T10:25:23.150871566Z]: Out of memory: Killed process 1267 (etcd) 
total-vm:12913104kB, anon-rss:1758292kB, file-rss:0kB, shmem-rss:0kB, UID:60 pgtables:3660kB oom_score_adj:-998 10.1.1.2: kern: info: [2022-01-25T10:25:23.213190566Z]: oom_reaper: reaped process 1267 (etcd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB 10.1.1.2: user: warning: [2022-01-25T10:25:23.842676566Z]: [talos] service[kubelet](Running): Health check failed: Get "http://127.0.0.1:10248/healthz": context deadline exceeded 10.1.1.2: user: warning: [2022-01-25T10:25:23.883680566Z]: [talos] service[apid](Running): Health check failed: dial tcp 127.0.0.1:50000: i/o timeout 10.1.1.2: user: warning: [2022-01-25T10:25:23.955885566Z]: [talos] service[apid](Running): Health check successful 10.1.1.2: user: warning: [2022-01-25T10:25:23.967203566Z]: [talos] service[udevd](Waiting): Error running Process(["/sbin/udevd" "--resolve-names=never"]), going to restart forever: signal: killed 10.1.1.2: user: warning: [2022-01-25T10:25:23.980306566Z]: [talos] service[trustd](Running): Health check failed: dial tcp 127.0.0.1:50001: i/o timeout 10.1.1.2: user: warning: [2022-01-25T10:25:23.992462566Z]: [talos] service[trustd](Running): Health check successful 10.1.1.2: user: warning: [2022-01-25T10:25:24.004670566Z]: [talos] service[cri](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded 10.1.1.2: user: warning: [2022-01-25T10:25:24.007288566Z]: [talos] service[containerd](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded 10.1.1.2: user: warning: [2022-01-25T10:25:24.038872566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \x5c"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:25:24.040517566Z]: [talos] service[containerd](Running): Health check 
successful
10.1.1.2: user: warning: [2022-01-25T10:25:24.057104566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:25:24.291558566Z]: [talos] service[cri](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:25:25.181827566Z]: [talos] service[etcd](Waiting): Error running Containerd(etcd), going to restart forever: task "etcd" failed: exit code 137
10.1.1.2: daemon: info: [2022-01-25T10:25:28.996296566Z]: udevd[1448]: starting version 3.2.10
10.1.1.2: user: warning: [2022-01-25T10:25:28.996759566Z]: [talos] service[udevd](Running): Process Process(["/sbin/udevd" "--resolve-names=never"]) started with PID 1448
10.1.1.2: daemon: info: [2022-01-25T10:25:29.293856566Z]: udevd[1448]: starting eudev-3.2.10
10.1.1.2: user: warning: [2022-01-25T10:25:31.949339566Z]: [talos] service[etcd](Running): Started task etcd (PID 1476) for container etcd
10.1.1.2: user: warning: [2022-01-25T10:25:32.604380566Z]: [talos] service[kubelet](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:25:32.681189566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:26:02.439205566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:26:27.647951566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:26:34.668349566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:26:56.415793566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:27:30.979187566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:27:34.732979566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:27:52.871322566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:28:25.311179566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:28:25.736054566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:28:51.958061566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:29:24.306740566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: kern: warning: [2022-01-25T10:30:46.435024566Z]: containerd-shim invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=-499
10.1.1.2: kern: warning: [2022-01-25T10:30:46.435696566Z]: CPU: 1 PID: 1292 Comm: containerd-shim Tainted: G T 5.15.6-talos #1
10.1.1.2: kern: warning: [2022-01-25T10:30:46.436231566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
10.1.1.2: kern: warning: [2022-01-25T10:30:46.436899566Z]: Call Trace:
10.1.1.2: kern: warning: [2022-01-25T10:30:46.437093566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:30:46.437274566Z]: dump_stack_lvl+0x46/0x5a
10.1.1.2: kern: warning: [2022-01-25T10:30:46.437535566Z]: dump_header+0x45/0x1ee
10.1.1.2: kern: warning: [2022-01-25T10:30:46.437792566Z]: oom_kill_process.cold+0xb/0x10
10.1.1.2: kern: warning: [2022-01-25T10:30:46.438073566Z]: out_of_memory+0x22f/0x4e0
10.1.1.2: kern: warning: [2022-01-25T10:30:46.438335566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90
10.1.1.2: kern: warning: [2022-01-25T10:30:46.438680566Z]: __alloc_pages+0x307/0x320
10.1.1.2: kern: warning: [2022-01-25T10:30:46.439783566Z]: pagecache_get_page+0x120/0x3c0
10.1.1.2: kern: warning: [2022-01-25T10:30:46.440063566Z]: filemap_fault+0x5d3/0x920
10.1.1.2: kern: warning: [2022-01-25T10:30:46.440319566Z]: ? filemap_map_pages+0x2b0/0x440
10.1.1.2: kern: warning: [2022-01-25T10:30:46.440606566Z]: __do_fault+0x2f/0x90
10.1.1.2: kern: warning: [2022-01-25T10:30:46.440845566Z]: __handle_mm_fault+0x664/0xc00
10.1.1.2: kern: warning: [2022-01-25T10:30:46.441120566Z]: handle_mm_fault+0xc7/0x290
10.1.1.2: kern: warning: [2022-01-25T10:30:46.441380566Z]: exc_page_fault+0x1de/0x780
10.1.1.2: kern: warning: [2022-01-25T10:30:46.441653566Z]: ? asm_exc_page_fault+0x8/0x30
10.1.1.2: kern: warning: [2022-01-25T10:30:46.441928566Z]: asm_exc_page_fault+0x1e/0x30
10.1.1.2: kern: warning: [2022-01-25T10:30:46.442197566Z]: RIP: 0033:0x45103c
10.1.1.2: kern: warning: [2022-01-25T10:30:46.442424566Z]: Code: Unable to access opcode bytes at RIP 0x451012.
10.1.1.2: kern: warning: [2022-01-25T10:30:46.442787566Z]: RSP: 002b:000000c00004f7a0 EFLAGS: 00010202
10.1.1.2: kern: warning: [2022-01-25T10:30:46.443116566Z]: RAX: 0000000000a20c9c RBX: 00000000000607c4 RCX: 00000000000607c4
10.1.1.2: kern: warning: [2022-01-25T10:30:46.443539566Z]: RDX: 00000000000607c4 RSI: 000000c00004f7f4 RDI: 000000c00004f800
10.1.1.2: kern: warning: [2022-01-25T10:30:46.443957566Z]: RBP: 000000c00004f7b0 R08: 0000000000000001 R09: 0000000000a20c9c
10.1.1.2: kern: warning: [2022-01-25T10:30:46.444375566Z]: R10: 00000000005c9780 R11: 0000000000073f5c R12: 00000000005c9780
10.1.1.2: kern: warning: [2022-01-25T10:30:46.444793566Z]: R13: 00000000000607c4 R14: 000000c0000009c0 R15: 0000000000000000
10.1.1.2: kern: warning: [2022-01-25T10:30:46.445209566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:30:46.445426566Z]: Mem-Info:
10.1.1.2: kern: warning: [2022-01-25T10:30:46.445619566Z]: active_anon:12626 inactive_anon:469829 isolated_anon:0
 active_file:13 inactive_file:662 isolated_file:0
 unevictable:0 dirty:0 writeback:0
 slab_reclaimable:5005 slab_unreclaimable:4634
 mapped:30 shmem:12651 pagetables:1407 bounce:0
 kernel_misc_reclaimable:0
 free:3771 free_pcp:449 free_cma:0
10.1.1.2: kern: warning: [2022-01-25T10:30:46.447629566Z]: Node 0 active_anon:50504kB inactive_anon:1879316kB active_file:52kB inactive_file:2420kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:204kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3616kB pagetables:5628kB all_unreclaimable? no
10.1.1.2: kern: warning: [2022-01-25T10:30:46.449027566Z]: Node 0 DMA free:7600kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7736kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:30:46.450427566Z]: lowmem_reserve[]: 0 1890 1890 1890
10.1.1.2: kern: warning: [2022-01-25T10:30:46.450717566Z]: Node 0 DMA32 free:6980kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50504kB inactive_anon:1871580kB active_file:52kB inactive_file:2652kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:2296kB local_pcp:1628kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:30:46.452255566Z]: lowmem_reserve[]: 0 0 0 0
10.1.1.2: kern: warning: [2022-01-25T10:30:46.452516566Z]: Node 0 DMA: 0*4kB 2*8kB (UM) 0*16kB 1*32kB (U) 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7600kB
10.1.1.2: kern: warning: [2022-01-25T10:30:46.453292566Z]: Node 0 DMA32: 149*4kB (UME) 131*8kB (UME) 40*16kB (UME) 4*32kB (ME) 8*64kB (UME) 4*128kB (UE) 2*256kB (UM) 4*512kB (UM) 1*1024kB (U) 0*2048kB 0*4096kB = 7020kB
10.1.1.2: kern: info: [2022-01-25T10:30:46.454184566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
10.1.1.2: kern: warning: [2022-01-25T10:30:46.454703566Z]: 13329 total pagecache pages
10.1.1.2: kern: warning: [2022-01-25T10:30:46.454969566Z]: 0 pages in swap cache
10.1.1.2: kern: warning: [2022-01-25T10:30:46.455207566Z]: Swap cache stats: add 0, delete 0, find 0/0
10.1.1.2: kern: warning: [2022-01-25T10:30:46.455533566Z]: Free swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:30:46.455747566Z]: Total swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:30:46.455964566Z]: 524156 pages RAM
10.1.1.2: kern: warning: [2022-01-25T10:30:46.456181566Z]: 0 pages HighMem/MovableOnly
10.1.1.2: kern: warning: [2022-01-25T10:30:46.456441566Z]: 21678 pages reserved
10.1.1.2: kern: info: [2022-01-25T10:30:46.456674566Z]: Tasks state (memory values in pages):
10.1.1.2: kern: info: [2022-01-25T10:30:46.456977566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
10.1.1.2: kern: info: [2022-01-25T10:30:46.457511566Z]: [ 664] 0 664 188197 4030 208896 0 -999 containerd
10.1.1.2: kern: info: [2022-01-25T10:30:46.458043566Z]: [ 1121] 0 1121 189038 4888 217088 0 -500 containerd
10.1.1.2: kern: info: [2022-01-25T10:30:46.458571566Z]: [ 1148] 0 1148 177754 573 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:30:46.459128566Z]: [ 1150] 0 1150 177690 593 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:30:46.459676566Z]: [ 1192] 50 1192 193598 3233 258048 0 -998 apid
10.1.1.2: kern: info: [2022-01-25T10:30:46.460183566Z]: [ 1193] 51 1193 193598 2727 249856 0 -998 trustd
10.1.1.2: kern: info: [2022-01-25T10:30:46.460693566Z]: [ 1289] 0 1289 177754 608 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:30:46.461243566Z]: [ 1309] 0 1309 465559 6442 499712 0 -999 kubelet
10.1.1.2: kern: info: [2022-01-25T10:30:46.461759566Z]: [ 1448] 0 1448 362 32 45056 0 0 udevd
10.1.1.2: kern: info: [2022-01-25T10:30:46.462282566Z]: [ 1456] 0 1456 177754 569 110592 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:30:46.462832566Z]: [ 1476] 60 1476 3228308 438392 3731456 0 -998 etcd
10.1.1.2: kern: info: [2022-01-25T10:30:46.463343566Z]: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=runtime,mems_allowed=0,global_oom,task_memcg=/system/runtime,task=udevd,pid=1448,uid=0
10.1.1.2: kern: err: [2022-01-25T10:30:46.464137566Z]: Out of memory: Killed process 1448 (udevd) total-vm:1448kB, anon-rss:124kB, file-rss:4kB, shmem-rss:0kB, UID:0 pgtables:44kB oom_score_adj:0
10.1.1.2: kern: info: [2022-01-25T10:30:46.464946566Z]: oom_reaper: reaped process 1448 (udevd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
10.1.1.2: kern: warning: [2022-01-25T10:31:35.859728566Z]: containerd-shim invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=-499
10.1.1.2: kern: warning: [2022-01-25T10:31:35.860425566Z]: CPU: 1 PID: 1292 Comm: containerd-shim Tainted: G T 5.15.6-talos #1
10.1.1.2: kern: warning: [2022-01-25T10:31:35.860951566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
10.1.1.2: kern: warning: [2022-01-25T10:31:35.861633566Z]: Call Trace:
10.1.1.2: kern: warning: [2022-01-25T10:31:35.861831566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:31:35.862014566Z]: dump_stack_lvl+0x46/0x5a
10.1.1.2: kern: warning: [2022-01-25T10:31:35.862272566Z]: dump_header+0x45/0x1ee
10.1.1.2: kern: warning: [2022-01-25T10:31:35.862519566Z]: oom_kill_process.cold+0xb/0x10
10.1.1.2: kern: warning: [2022-01-25T10:31:35.862798566Z]: out_of_memory+0x22f/0x4e0
10.1.1.2: kern: warning: [2022-01-25T10:31:35.863067566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90
10.1.1.2: kern: warning: [2022-01-25T10:31:35.863415566Z]: __alloc_pages+0x307/0x320
10.1.1.2: kern: warning: [2022-01-25T10:31:35.863683566Z]: pagecache_get_page+0x120/0x3c0
10.1.1.2: kern: warning: [2022-01-25T10:31:35.863987566Z]: filemap_fault+0x5d3/0x920
10.1.1.2: kern: warning: [2022-01-25T10:31:35.864253566Z]: ? filemap_map_pages+0x2b0/0x440
10.1.1.2: kern: warning: [2022-01-25T10:31:35.864553566Z]: __do_fault+0x2f/0x90
10.1.1.2: kern: warning: [2022-01-25T10:31:35.864886566Z]: __handle_mm_fault+0x664/0xc00
10.1.1.2: kern: warning: [2022-01-25T10:31:35.865197566Z]: handle_mm_fault+0xc7/0x290
10.1.1.2: kern: warning: [2022-01-25T10:31:35.865460566Z]: exc_page_fault+0x1de/0x780
10.1.1.2: kern: warning: [2022-01-25T10:31:35.865725566Z]: ? asm_exc_page_fault+0x8/0x30
10.1.1.2: kern: warning: [2022-01-25T10:31:35.865999566Z]: asm_exc_page_fault+0x1e/0x30
10.1.1.2: kern: warning: [2022-01-25T10:31:35.866276566Z]: RIP: 0033:0x42f72f
10.1.1.2: kern: warning: [2022-01-25T10:31:35.866506566Z]: Code: Unable to access opcode bytes at RIP 0x42f705.
10.1.1.2: kern: warning: [2022-01-25T10:31:35.866868566Z]: RSP: 002b:000000c00004fe80 EFLAGS: 00010206
10.1.1.2: kern: warning: [2022-01-25T10:31:35.867201566Z]: RAX: ffffffffffffff92 RBX: 0000000000000000 RCX: 00000000004655a3
10.1.1.2: kern: warning: [2022-01-25T10:31:35.867620566Z]: RDX: 0000000000000000 RSI: 0000000000000080 RDI: 0000000000bce4f8
10.1.1.2: kern: warning: [2022-01-25T10:31:35.868039566Z]: RBP: 000000c00004fec0 R08: 0000000000000000 R09: 0000000000000000
10.1.1.2: kern: warning: [2022-01-25T10:31:35.868466566Z]: R10: 000000c00004feb0 R11: 0000000000000206 R12: 000000c00004feb0
10.1.1.2: kern: warning: [2022-01-25T10:31:35.868904566Z]: R13: 0000000000000077 R14: 000000c0000009c0 R15: 0000000000000000
10.1.1.2: kern: warning: [2022-01-25T10:31:35.869348566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:31:35.869540566Z]: Mem-Info:
10.1.1.2: kern: warning: [2022-01-25T10:31:35.869725566Z]: active_anon:12625 inactive_anon:469799 isolated_anon:0
 active_file:23 inactive_file:497 isolated_file:0
 unevictable:0 dirty:0 writeback:0
 slab_reclaimable:5003 slab_unreclaimable:4630
 mapped:29 shmem:12651 pagetables:1399 bounce:0
 kernel_misc_reclaimable:0
 free:3848 free_pcp:494 free_cma:0
10.1.1.2: kern: warning: [2022-01-25T10:31:35.871733566Z]: Node 0 active_anon:50500kB inactive_anon:1879196kB active_file:92kB inactive_file:1988kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:116kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3600kB pagetables:5596kB all_unreclaimable? yes
10.1.1.2: kern: warning: [2022-01-25T10:31:35.873143566Z]: Node 0 DMA free:7600kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7736kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:31:35.874568566Z]: lowmem_reserve[]: 0 1890 1890 1890
10.1.1.2: kern: warning: [2022-01-25T10:31:35.874869566Z]: Node 0 DMA32 free:7792kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50500kB inactive_anon:1871460kB active_file:352kB inactive_file:2068kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:1948kB local_pcp:1736kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:31:35.876417566Z]: lowmem_reserve[]: 0 0 0 0
10.1.1.2: kern: warning: [2022-01-25T10:31:35.876669566Z]: Node 0 DMA: 0*4kB 2*8kB (UM) 0*16kB 1*32kB (U) 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7600kB
10.1.1.2: kern: warning: [2022-01-25T10:31:35.877466566Z]: Node 0 DMA32: 173*4kB (UME) 170*8kB (UME) 49*16kB (UME) 15*32kB (UME) 10*64kB (UME) 4*128kB (UE) 3*256kB (UM) 1*512kB (U) 2*1024kB (UM) 0*2048kB 0*4096kB = 7796kB
10.1.1.2: kern: info: [2022-01-25T10:31:35.878376566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
10.1.1.2: kern: warning: [2022-01-25T10:31:35.878897566Z]: 13220 total pagecache pages
10.1.1.2: kern: warning: [2022-01-25T10:31:35.879158566Z]: 0 pages in swap cache
10.1.1.2: kern: warning: [2022-01-25T10:31:35.879397566Z]: Swap cache stats: add 0, delete 0, find 0/0
10.1.1.2: kern: warning: [2022-01-25T10:31:35.879723566Z]: Free swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:31:35.879938566Z]: Total swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:31:35.880154566Z]: 524156 pages RAM
10.1.1.2: kern: warning: [2022-01-25T10:31:35.880371566Z]: 0 pages HighMem/MovableOnly
10.1.1.2: kern: warning: [2022-01-25T10:31:35.880629566Z]: 21678 pages reserved
10.1.1.2: kern: info: [2022-01-25T10:31:35.880861566Z]: Tasks state (memory values in pages):
10.1.1.2: kern: info: [2022-01-25T10:31:35.881166566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
10.1.1.2: kern: info: [2022-01-25T10:31:35.881715566Z]: [ 664] 0 664 188197 4030 208896 0 -999 containerd
10.1.1.2: kern: info: [2022-01-25T10:31:35.882249566Z]: [ 1121] 0 1121 189038 4888 217088 0 -500 containerd
10.1.1.2: kern: info: [2022-01-25T10:31:35.882781566Z]: [ 1148] 0 1148 177754 573 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:31:35.883334566Z]: [ 1150] 0 1150 177690 593 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:31:35.883882566Z]: [ 1192] 50 1192 193598 3233 258048 0 -998 apid
10.1.1.2: kern: info: [2022-01-25T10:31:35.884395566Z]: [ 1193] 51 1193 193598 2727 249856 0 -998 trustd
10.1.1.2: kern: info: [2022-01-25T10:31:35.884905566Z]: [ 1289] 0 1289 177754 632 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:31:35.885468566Z]: [ 1309] 0 1309 465559 6442 499712 0 -999 kubelet
10.1.1.2: kern: info: [2022-01-25T10:31:35.885978566Z]: [ 1456] 0 1456 177754 569 110592 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:31:35.886541566Z]: [ 1476] 60 1476 3228308 438392 3731456 0 -998 etcd
10.1.1.2: kern: info: [2022-01-25T10:31:35.887048566Z]: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=runtime,mems_allowed=0,global_oom,task_memcg=/system/etcd,task=etcd,pid=1476,uid=60
10.1.1.2: kern: err: [2022-01-25T10:31:35.888685566Z]: Out of memory: Killed process 1476 (etcd) total-vm:12913232kB, anon-rss:1753568kB, file-rss:0kB, shmem-rss:0kB, UID:60 pgtables:3644kB oom_score_adj:-998
10.1.1.2: kern: info: [2022-01-25T10:31:35.954476566Z]: oom_reaper: reaped process 1476 (etcd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
10.1.1.2: user: warning: [2022-01-25T10:31:36.676600566Z]: [talos] service[trustd](Running): Health check failed: dial tcp 127.0.0.1:50001: i/o timeout
10.1.1.2: user: warning: [2022-01-25T10:31:36.677247566Z]: [talos] service[trustd](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:31:36.688420566Z]: [talos] service[kubelet](Running): Health check failed: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:31:36.706932566Z]: [talos] service[cri](Running): Health check failed: failed to dial "/run/containerd/containerd.sock": context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:31:36.725433566Z]: [talos] service[udevd](Running): Health check failed: context deadline exceeded:
10.1.1.2: user: warning: [2022-01-25T10:31:36.739829566Z]: [talos] service[apid](Running): Health check failed: dial tcp 127.0.0.1:50000: i/o timeout
10.1.1.2: user: warning: [2022-01-25T10:31:36.740458566Z]: [talos] service[apid](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:31:36.749218566Z]: [talos] service[udevd](Waiting): Error running Process(["/sbin/udevd" "--resolve-names=never"]), going to restart forever: signal: killed
10.1.1.2: user: warning: [2022-01-25T10:31:36.750036566Z]: [talos] service[containerd](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:31:36.773236566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:31:36.800354566Z]: [talos] service[containerd](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:31:36.806028566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:31:36.830555566Z]: [talos] service[cri](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:31:37.694471566Z]: [talos] service[etcd](Waiting): Error running Containerd(etcd), going to restart forever: task "etcd" failed: exit code 137
10.1.1.2: user: warning: [2022-01-25T10:31:38.246726566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:31:40.412445566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: daemon: info: [2022-01-25T10:31:41.763675566Z]: udevd[1533]: starting version 3.2.10
10.1.1.2: user: warning: [2022-01-25T10:31:41.774721566Z]: [talos] service[udevd](Running): Process Process(["/sbin/udevd" "--resolve-names=never"]) started with PID 1533
10.1.1.2: daemon: info: [2022-01-25T10:31:42.076580566Z]: udevd[1533]: starting eudev-3.2.10
10.1.1.2: user: warning: [2022-01-25T10:31:42.683083566Z]: [talos] service[kubelet](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:31:43.164015566Z]: [talos] service[etcd](Running): Started task etcd (PID 1564) for container etcd
10.1.1.2: user: warning: [2022-01-25T10:31:43.654230566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:31:52.990479566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:31:53.447922566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:32:08.415264566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:32:18.106948566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:32:48.211414566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:33:23.254507566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:33:29.194944566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:33:43.428729566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:34:14.259306566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:34:39.603876566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:34:47.060118566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:35:07.014053566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:35:32.493553566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:35:51.333793566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: kern: warning: [2022-01-25T10:35:59.092887566Z]: containerd invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=-999
10.1.1.2: kern: warning: [2022-01-25T10:35:59.093562566Z]: CPU: 1 PID: 665 Comm: containerd Tainted: G T 5.15.6-talos #1
10.1.1.2: kern: warning: [2022-01-25T10:35:59.094093566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
10.1.1.2: kern: warning: [2022-01-25T10:35:59.094798566Z]: Call Trace:
10.1.1.2: kern: warning: [2022-01-25T10:35:59.095008566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:35:59.095186566Z]: dump_stack_lvl+0x46/0x5a
10.1.1.2: kern: warning: [2022-01-25T10:35:59.095470566Z]: dump_header+0x45/0x1ee
10.1.1.2: kern: warning: [2022-01-25T10:35:59.095720566Z]: oom_kill_process.cold+0xb/0x10
10.1.1.2: kern: warning: [2022-01-25T10:35:59.096004566Z]: out_of_memory+0x22f/0x4e0
10.1.1.2: kern: warning: [2022-01-25T10:35:59.096267566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90
10.1.1.2: kern: warning: [2022-01-25T10:35:59.096624566Z]: __alloc_pages+0x307/0x320
10.1.1.2: kern: warning: [2022-01-25T10:35:59.096884566Z]: pagecache_get_page+0x120/0x3c0
10.1.1.2: kern: warning: [2022-01-25T10:35:59.097162566Z]: filemap_fault+0x5d3/0x920
10.1.1.2: kern: warning: [2022-01-25T10:35:59.097435566Z]: ? filemap_map_pages+0x2b0/0x440
10.1.1.2: kern: warning: [2022-01-25T10:35:59.097741566Z]: __do_fault+0x2f/0x90
10.1.1.2: kern: warning: [2022-01-25T10:35:59.097982566Z]: __handle_mm_fault+0x664/0xc00
10.1.1.2: kern: warning: [2022-01-25T10:35:59.098255566Z]: handle_mm_fault+0xc7/0x290
10.1.1.2: kern: warning: [2022-01-25T10:35:59.098518566Z]: exc_page_fault+0x1de/0x780
10.1.1.2: kern: warning: [2022-01-25T10:35:59.098783566Z]: ? asm_exc_page_fault+0x8/0x30
10.1.1.2: kern: warning: [2022-01-25T10:35:59.099057566Z]: asm_exc_page_fault+0x1e/0x30
10.1.1.2: kern: warning: [2022-01-25T10:35:59.099342566Z]: RIP: 0033:0x56552ce31c9d
10.1.1.2: kern: warning: [2022-01-25T10:35:59.099609566Z]: Code: Unable to access opcode bytes at RIP 0x56552ce31c73.
10.1.1.2: kern: warning: [2022-01-25T10:35:59.100037566Z]: RSP: 002b:00007f9df0174988 EFLAGS: 00010212
10.1.1.2: kern: warning: [2022-01-25T10:35:59.100377566Z]: RAX: 0000000000000000 RBX: 0000000000002710 RCX: 000056552ce31c9d
10.1.1.2: kern: warning: [2022-01-25T10:35:59.100813566Z]: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007f9df0174988
10.1.1.2: kern: warning: [2022-01-25T10:35:59.101233566Z]: RBP: 00007f9df0174998 R08: 00000000000038fc R09: 0000000000000000
10.1.1.2: kern: warning: [2022-01-25T10:35:59.101651566Z]: R10: 0000000000000000 R11: 0000000000000212 R12: 00007f9df01743c8
10.1.1.2: kern: warning: [2022-01-25T10:35:59.102070566Z]: R13: 0000000000023000 R14: 000000c0000009c0 R15: 00007f9df0174b10
10.1.1.2: kern: warning: [2022-01-25T10:35:59.102491566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:35:59.102710566Z]: Mem-Info:
10.1.1.2: kern: warning: [2022-01-25T10:35:59.102900566Z]: active_anon:12626 inactive_anon:470459 isolated_anon:0
 active_file:0 inactive_file:313 isolated_file:0
 unevictable:0 dirty:0 writeback:0
 slab_reclaimable:4769 slab_unreclaimable:4654
 mapped:48 shmem:12651 pagetables:1412 bounce:0
 kernel_misc_reclaimable:0
 free:4082 free_pcp:42 free_cma:0
10.1.1.2: kern: warning: [2022-01-25T10:35:59.104904566Z]: Node 0 active_anon:50504kB inactive_anon:1881836kB active_file:0kB inactive_file:1252kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:192kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3648kB pagetables:5648kB all_unreclaimable? no
10.1.1.2: kern: warning: [2022-01-25T10:35:59.106403566Z]: Node 0 DMA free:7588kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7744kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:35:59.107828566Z]: lowmem_reserve[]: 0 1890 1890 1890
10.1.1.2: kern: warning: [2022-01-25T10:35:59.108125566Z]: Node 0 DMA32 free:8108kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50504kB inactive_anon:1874092kB active_file:800kB inactive_file:1232kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:292kB local_pcp:0kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:35:59.109647566Z]: lowmem_reserve[]: 0 0 0 0
10.1.1.2: kern: warning: [2022-01-25T10:35:59.109899566Z]: Node 0 DMA: 1*4kB (U) 0*8kB 0*16kB 1*32kB (U) 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7588kB
10.1.1.2: kern: warning: [2022-01-25T10:35:59.110671566Z]: Node 0 DMA32: 315*4kB (UME) 202*8kB (UME) 64*16kB (UME) 17*32kB (ME) 8*64kB (UME) 5*128kB (UME) 1*256kB (U) 2*512kB (UM) 1*1024kB (M) 0*2048kB 0*4096kB = 7900kB
10.1.1.2: kern: info: [2022-01-25T10:35:59.111934566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
10.1.1.2: kern: warning: [2022-01-25T10:35:59.112634566Z]: 13177 total pagecache pages
10.1.1.2: kern: warning: [2022-01-25T10:35:59.112995566Z]: 0 pages in swap cache
10.1.1.2: kern: warning: [2022-01-25T10:35:59.114473566Z]: Swap cache stats: add 0, delete 0, find 0/0
10.1.1.2: kern: warning: [2022-01-25T10:35:59.114808566Z]: Free swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:35:59.115027566Z]: Total swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:35:59.115243566Z]: 524156 pages RAM
10.1.1.2: kern: warning: [2022-01-25T10:35:59.115459566Z]: 0 pages HighMem/MovableOnly
10.1.1.2: kern: warning: [2022-01-25T10:35:59.115722566Z]: 21678 pages reserved
10.1.1.2: kern: info: [2022-01-25T10:35:59.115963566Z]: Tasks state (memory values in pages):
10.1.1.2: kern: info: [2022-01-25T10:35:59.116294566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
10.1.1.2: kern: info: [2022-01-25T10:35:59.116831566Z]: [ 664] 0 664 188197 4038 208896 0 -999 containerd
10.1.1.2: kern: info: [2022-01-25T10:35:59.117421566Z]: [ 1121] 0 1121 189038 5077 217088 0 -500 containerd
10.1.1.2: kern: info: [2022-01-25T10:35:59.117951566Z]: [ 1148] 0 1148 177754 517 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:35:59.118498566Z]: [ 1150] 0 1150 177690 537 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:35:59.119050566Z]: [ 1192] 50 1192 193598 2920 258048 0 -998 apid
10.1.1.2: kern: info: [2022-01-25T10:35:59.119551566Z]: [ 1193] 51 1193 193598 2697 249856 0 -998 trustd
10.1.1.2: kern: info: [2022-01-25T10:35:59.120071566Z]: [ 1289] 0 1289 177754 576 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:35:59.120631566Z]: [ 1309] 0 1309 483992 6601 516096 0 -999 kubelet
10.1.1.2: kern: info: [2022-01-25T10:35:59.121147566Z]: [ 1533] 0 1533 362 32 40960 0 0 udevd
10.1.1.2: kern: info: [2022-01-25T10:35:59.121660566Z]: [ 1542] 0 1542 177754 476 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:35:59.122209566Z]: [ 1564] 60 1564 3228260 439013 3743744 0 -998 etcd
10.1.1.2: kern: info: [2022-01-25T10:35:59.122709566Z]: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=runtime,mems_allowed=0,global_oom,task_memcg=/system/runtime,task=udevd,pid=1533,uid=0
10.1.1.2: kern: err: [2022-01-25T10:35:59.123507566Z]: Out of memory: Killed process 1533 (udevd) total-vm:1448kB, anon-rss:124kB, file-rss:4kB, shmem-rss:0kB, UID:0 pgtables:40kB oom_score_adj:0
10.1.1.2: kern: info: [2022-01-25T10:35:59.126226566Z]: oom_reaper: reaped process 1533 (udevd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
10.1.1.2: kern: warning: [2022-01-25T10:36:07.519924566Z]: init invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
10.1.1.2: kern: warning: [2022-01-25T10:36:07.520527566Z]: CPU: 0 PID: 636 Comm: init Tainted: G T 5.15.6-talos #1
10.1.1.2: kern: warning: [2022-01-25T10:36:07.521010566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
10.1.1.2: kern: warning: [2022-01-25T10:36:07.521668566Z]: Call Trace:
10.1.1.2: kern: warning: [2022-01-25T10:36:07.521871566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:36:07.522048566Z]: dump_stack_lvl+0x46/0x5a
10.1.1.2: kern: warning: [2022-01-25T10:36:07.522304566Z]: dump_header+0x45/0x1ee
10.1.1.2: kern: warning: [2022-01-25T10:36:07.522550566Z]: oom_kill_process.cold+0xb/0x10
10.1.1.2: kern: warning: [2022-01-25T10:36:07.522830566Z]: out_of_memory+0x22f/0x4e0
10.1.1.2: kern: warning: [2022-01-25T10:36:07.523086566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90
10.1.1.2: kern: warning: [2022-01-25T10:36:07.523429566Z]: __alloc_pages+0x307/0x320
10.1.1.2: kern: warning: [2022-01-25T10:36:07.523685566Z]: pagecache_get_page+0x120/0x3c0
10.1.1.2: kern: warning: [2022-01-25T10:36:07.523975566Z]: filemap_fault+0x5d3/0x920
10.1.1.2: kern: warning: [2022-01-25T10:36:07.524287566Z]: ? filemap_map_pages+0x2b0/0x440
10.1.1.2: kern: warning: [2022-01-25T10:36:07.524572566Z]: __do_fault+0x2f/0x90
10.1.1.2: kern: warning: [2022-01-25T10:36:07.524815566Z]: __handle_mm_fault+0x664/0xc00
10.1.1.2: kern: warning: [2022-01-25T10:36:07.525086566Z]: handle_mm_fault+0xc7/0x290
10.1.1.2: kern: warning: [2022-01-25T10:36:07.525344566Z]: exc_page_fault+0x1de/0x780
10.1.1.2: kern: warning: [2022-01-25T10:36:07.525605566Z]: ?
asm_exc_page_fault+0x8/0x30 10.1.1.2: kern: warning: [2022-01-25T10:36:07.525878566Z]: asm_exc_page_fault+0x1e/0x30 10.1.1.2: kern: warning: [2022-01-25T10:36:07.526144566Z]: RIP: 0033:0x466d5d 10.1.1.2: kern: warning: [2022-01-25T10:36:07.526371566Z]: Code: Unable to access opcode bytes at RIP 0x466d33. 10.1.1.2: kern: warning: [2022-01-25T10:36:07.526731566Z]: RSP: 002b:000000c000071f10 EFLAGS: 00010212 10.1.1.2: kern: warning: [2022-01-25T10:36:07.527059566Z]: RAX: 0000000000000000 RBX: 0000000000002710 RCX: 0000000000466d5d 10.1.1.2: kern: warning: [2022-01-25T10:36:07.527471566Z]: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000c000071f10 10.1.1.2: kern: warning: [2022-01-25T10:36:07.527881566Z]: RBP: 000000c000071f20 R08: 00000000000048eb R09: 0000000004911458 10.1.1.2: kern: warning: [2022-01-25T10:36:07.528305566Z]: R10: 0000000000000000 R11: 0000000000000212 R12: 000000c000071950 10.1.1.2: kern: warning: [2022-01-25T10:36:07.528717566Z]: R13: 000000c00076e000 R14: 000000c0000004e0 R15: 00007f91dcfcd6dd 10.1.1.2: kern: warning: [2022-01-25T10:36:07.529129566Z]: 10.1.1.2: kern: warning: [2022-01-25T10:36:07.529328566Z]: Mem-Info: 10.1.1.2: kern: warning: [2022-01-25T10:36:07.529515566Z]: active_anon:12625 inactive_anon:470431 isolated_anon:0\x0a active_file:11 inactive_file:275 isolated_file:32\x0a unevictable:0 dirty:0 writeback:0\x0a slab_reclaimable:4769 slab_unreclaimable:4646\x0a mapped:25 shmem:12651 pagetables:1405 bounce:0\x0a kernel_misc_reclaimable:0\x0a free:3267 free_pcp:776 free_cma:0 10.1.1.2: kern: warning: [2022-01-25T10:36:07.531560566Z]: Node 0 active_anon:50500kB inactive_anon:1881724kB active_file:44kB inactive_file:1100kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:100kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3648kB pagetables:5620kB all_unreclaimable? 
yes 10.1.1.2: kern: warning: [2022-01-25T10:36:07.533025566Z]: Node 0 DMA free:7588kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7744kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB 10.1.1.2: kern: warning: [2022-01-25T10:36:07.534526566Z]: lowmem_reserve[]: 0 1890 1890 1890 10.1.1.2: kern: warning: [2022-01-25T10:36:07.534840566Z]: Node 0 DMA32 free:5480kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50500kB inactive_anon:1873980kB active_file:264kB inactive_file:1268kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:3364kB local_pcp:3096kB free_cma:0kB 10.1.1.2: kern: warning: [2022-01-25T10:36:07.536351566Z]: lowmem_reserve[]: 0 0 0 0 10.1.1.2: kern: warning: [2022-01-25T10:36:07.536600566Z]: Node 0 DMA: 1*4kB (U) 0*8kB 0*16kB 1*32kB (U) 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7588kB 10.1.1.2: kern: warning: [2022-01-25T10:36:07.537367566Z]: Node 0 DMA32: 176*4kB (UME) 99*8kB (UME) 43*16kB (UME) 23*32kB (UME) 12*64kB (UME) 5*128kB (UME) 1*256kB (U) 1*512kB (U) 1*1024kB (M) 0*2048kB 0*4096kB = 6120kB 10.1.1.2: kern: info: [2022-01-25T10:36:07.538275566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB 10.1.1.2: kern: warning: [2022-01-25T10:36:07.538785566Z]: 12908 total pagecache pages 10.1.1.2: kern: warning: [2022-01-25T10:36:07.539043566Z]: 0 pages in swap cache 10.1.1.2: kern: warning: [2022-01-25T10:36:07.539277566Z]: Swap cache stats: add 0, delete 0, find 0/0 10.1.1.2: kern: warning: [2022-01-25T10:36:07.539598566Z]: Free swap = 0kB 10.1.1.2: kern: warning: [2022-01-25T10:36:07.539809566Z]: Total swap = 0kB 10.1.1.2: kern: warning: [2022-01-25T10:36:07.540026566Z]: 524156 pages RAM 10.1.1.2: kern: warning: [2022-01-25T10:36:07.540237566Z]: 0 pages 
HighMem/MovableOnly 10.1.1.2: kern: warning: [2022-01-25T10:36:07.540493566Z]: 21678 pages reserved 10.1.1.2: kern: info: [2022-01-25T10:36:07.540721566Z]: Tasks state (memory values in pages): 10.1.1.2: kern: info: [2022-01-25T10:36:07.541018566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name 10.1.1.2: kern: info: [2022-01-25T10:36:07.541547566Z]: [ 664] 0 664 188197 4038 208896 0 -999 containerd 10.1.1.2: kern: info: [2022-01-25T10:36:07.542073566Z]: [ 1121] 0 1121 189038 5077 217088 0 -500 containerd 10.1.1.2: kern: info: [2022-01-25T10:36:07.542592566Z]: [ 1148] 0 1148 177754 525 106496 0 -998 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:36:07.543135566Z]: [ 1150] 0 1150 177690 574 106496 0 -998 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:36:07.543673566Z]: [ 1192] 50 1192 193598 2920 258048 0 -998 apid 10.1.1.2: kern: info: [2022-01-25T10:36:07.544174566Z]: [ 1193] 51 1193 193598 2697 249856 0 -998 trustd 10.1.1.2: kern: info: [2022-01-25T10:36:07.544677566Z]: [ 1289] 0 1289 177754 608 106496 0 -499 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:36:07.545227566Z]: [ 1309] 0 1309 483992 6601 516096 0 -999 kubelet 10.1.1.2: kern: info: [2022-01-25T10:36:07.545735566Z]: [ 1542] 0 1542 177754 476 106496 0 -499 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:36:07.546283566Z]: [ 1564] 60 1564 3228260 439015 3743744 0 -998 etcd 10.1.1.2: kern: info: [2022-01-25T10:36:07.546779566Z]: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=init,mems_allowed=0,global_oom,task_memcg=/system/etcd,task=etcd,pid=1564,uid=60 10.1.1.2: kern: err: [2022-01-25T10:36:07.547557566Z]: Out of memory: Killed process 1564 (etcd) total-vm:12913040kB, anon-rss:1756060kB, file-rss:0kB, shmem-rss:0kB, UID:60 pgtables:3656kB oom_score_adj:-998 10.1.1.2: kern: info: [2022-01-25T10:36:07.623977566Z]: oom_reaper: reaped process 1564 (etcd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB 10.1.1.2: user: warning: 
[2022-01-25T10:36:08.237973566Z]: [talos] service[kubelet](Running): Health check failed: Get "http://127.0.0.1:10248/healthz": context deadline exceeded 10.1.1.2: user: warning: [2022-01-25T10:36:08.242319566Z]: [talos] service[udevd](Waiting): Error running Process(["/sbin/udevd" "--resolve-names=never"]), going to restart forever: signal: killed 10.1.1.2: user: warning: [2022-01-25T10:36:08.270413566Z]: [talos] service[containerd](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded 10.1.1.2: user: warning: [2022-01-25T10:36:08.288671566Z]: [talos] service[containerd](Running): Health check successful 10.1.1.2: user: warning: [2022-01-25T10:36:08.374729566Z]: [talos] service[cri](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded 10.1.1.2: user: warning: [2022-01-25T10:36:08.755950566Z]: [talos] service[cri](Running): Health check successful 10.1.1.2: user: warning: [2022-01-25T10:36:09.370668566Z]: [talos] service[etcd](Waiting): Error running Containerd(etcd), going to restart forever: task "etcd" failed: exit code 137 10.1.1.2: daemon: info: [2022-01-25T10:36:13.259395566Z]: udevd[1620]: starting version 3.2.10 10.1.1.2: user: warning: [2022-01-25T10:36:13.273182566Z]: [talos] service[udevd](Running): Process Process(["/sbin/udevd" "--resolve-names=never"]) started with PID 1620 10.1.1.2: daemon: info: [2022-01-25T10:36:13.324745566Z]: udevd[1620]: starting eudev-3.2.10 10.1.1.2: user: warning: [2022-01-25T10:36:15.638078566Z]: [talos] service[etcd](Running): Started task etcd (PID 1650) for container etcd 10.1.1.2: user: warning: [2022-01-25T10:36:17.438818566Z]: [talos] service[kubelet](Running): Health check successful 10.1.1.2: user: warning: [2022-01-25T10:36:23.940888566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on 
the server (\x5c"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\x5c") has prevented the request from succeeding"} 10.1.1.2: user: warning: [2022-01-25T10:36:29.648124566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \x5c"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:37:12.443080566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \x5c"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:37:15.238735566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \x5c"talos-server-01\x5c": Get \x5c"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:37:48.443314566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \x5c"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:37:58.790974566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\x5c"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\x5c") 
has prevented the request from succeeding"} 10.1.1.2: user: warning: [2022-01-25T10:38:21.382821566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \x5c"talos-server-01\x5c": Get \x5c"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:38:45.415435566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \x5c"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:39:24.192809566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\x5c"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\x5c") has prevented the request from succeeding"} 10.1.1.2: user: warning: [2022-01-25T10:39:36.239241566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \x5c"talos-server-01\x5c": Get \x5c"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: [2022-01-25T10:39:38.168047566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \x5c"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"} 10.1.1.2: user: warning: 
[2022-01-25T10:39:57.951717566Z]: [talos] service[kubelet](Running): Health check failed: Get "http://127.0.0.1:10248/healthz": context deadline exceeded 10.1.1.2: kern: warning: [2022-01-25T10:40:16.865601566Z]: kubelet invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=-999 10.1.1.2: kern: warning: [2022-01-25T10:40:16.866232566Z]: CPU: 1 PID: 1337 Comm: kubelet Tainted: G T 5.15.6-talos #1 10.1.1.2: kern: warning: [2022-01-25T10:40:16.866727566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014 10.1.1.2: kern: warning: [2022-01-25T10:40:16.867399566Z]: Call Trace: 10.1.1.2: kern: warning: [2022-01-25T10:40:16.867594566Z]: 10.1.1.2: kern: warning: [2022-01-25T10:40:16.867773566Z]: dump_stack_lvl+0x46/0x5a 10.1.1.2: kern: warning: [2022-01-25T10:40:16.868030566Z]: dump_header+0x45/0x1ee 10.1.1.2: kern: warning: [2022-01-25T10:40:16.868276566Z]: oom_kill_process.cold+0xb/0x10 10.1.1.2: kern: warning: [2022-01-25T10:40:16.868562566Z]: out_of_memory+0x22f/0x4e0 10.1.1.2: kern: warning: [2022-01-25T10:40:16.868820566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90 10.1.1.2: kern: warning: [2022-01-25T10:40:16.869167566Z]: __alloc_pages+0x307/0x320 10.1.1.2: kern: warning: [2022-01-25T10:40:16.869426566Z]: pagecache_get_page+0x120/0x3c0 10.1.1.2: kern: warning: [2022-01-25T10:40:16.869704566Z]: filemap_fault+0x5d3/0x920 10.1.1.2: kern: warning: [2022-01-25T10:40:16.869960566Z]: __do_fault+0x2f/0x90 10.1.1.2: kern: warning: [2022-01-25T10:40:16.870200566Z]: __handle_mm_fault+0x664/0xc00 10.1.1.2: kern: warning: [2022-01-25T10:40:16.870476566Z]: handle_mm_fault+0xc7/0x290 10.1.1.2: kern: warning: [2022-01-25T10:40:16.870735566Z]: exc_page_fault+0x1de/0x780 10.1.1.2: kern: warning: [2022-01-25T10:40:16.871003566Z]: ? 
asm_exc_page_fault+0x8/0x30 10.1.1.2: kern: warning: [2022-01-25T10:40:16.871278566Z]: asm_exc_page_fault+0x1e/0x30 10.1.1.2: kern: warning: [2022-01-25T10:40:16.871562566Z]: RIP: 0033:0x9b45d5 10.1.1.2: kern: warning: [2022-01-25T10:40:16.871789566Z]: Code: Unable to access opcode bytes at RIP 0x9b45ab. 10.1.1.2: kern: warning: [2022-01-25T10:40:16.872152566Z]: RSP: 002b:000000c000b436d8 EFLAGS: 00010246 10.1.1.2: kern: warning: [2022-01-25T10:40:16.872481566Z]: RAX: 000000c00009bcc0 RBX: 000000c000b4370e RCX: 000000c000385380 10.1.1.2: kern: warning: [2022-01-25T10:40:16.872897566Z]: RDX: 0000000005088bb0 RSI: 000000c0001034a0 RDI: 00000000009b4679 10.1.1.2: kern: warning: [2022-01-25T10:40:16.873315566Z]: RBP: 000000c000b437d0 R08: 000000c0001034a0 R09: 0000000000000018 10.1.1.2: kern: warning: [2022-01-25T10:40:16.873737566Z]: R10: 0000000000000020 R11: 0000000000000001 R12: 000000c000385380 10.1.1.2: kern: warning: [2022-01-25T10:40:16.874154566Z]: R13: 0000000000000000 R14: 000000c0007d7380 R15: 0000000000000000 10.1.1.2: kern: warning: [2022-01-25T10:40:16.874571566Z]: 10.1.1.2: kern: warning: [2022-01-25T10:40:16.874765566Z]: Mem-Info: 10.1.1.2: kern: warning: [2022-01-25T10:40:16.874955566Z]: active_anon:12626 inactive_anon:470125 isolated_anon:0\x0a active_file:0 inactive_file:275 isolated_file:32\x0a unevictable:0 dirty:0 writeback:0\x0a slab_reclaimable:4966 slab_unreclaimable:4670\x0a mapped:30 shmem:12651 pagetables:1411 bounce:0\x0a kernel_misc_reclaimable:0\x0a free:3265 free_pcp:916 free_cma:0 10.1.1.2: kern: warning: [2022-01-25T10:40:16.876965566Z]: Node 0 active_anon:50504kB inactive_anon:1880500kB active_file:0kB inactive_file:1100kB unevictable:0kB isolated(anon):0kB isolated(file):128kB mapped:120kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3632kB pagetables:5644kB all_unreclaimable? 
no 10.1.1.2: kern: warning: [2022-01-25T10:40:16.878366566Z]: Node 0 DMA free:7600kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7716kB active_file:0kB inactive_file:8kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB 10.1.1.2: kern: warning: [2022-01-25T10:40:16.879787566Z]: lowmem_reserve[]: 0 1890 1890 1890 10.1.1.2: kern: warning: [2022-01-25T10:40:16.880077566Z]: Node 0 DMA32 free:5460kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50504kB inactive_anon:1872784kB active_file:152kB inactive_file:1040kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:3664kB local_pcp:4kB free_cma:0kB 10.1.1.2: kern: warning: [2022-01-25T10:40:16.881629566Z]: lowmem_reserve[]: 0 0 0 0 10.1.1.2: kern: warning: [2022-01-25T10:40:16.881901566Z]: Node 0 DMA: 2*4kB (UM) 1*8kB (U) 2*16kB (UM) 0*32kB 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7600kB 10.1.1.2: kern: warning: [2022-01-25T10:40:16.882699566Z]: Node 0 DMA32: 157*4kB (UME) 49*8kB (UME) 34*16kB (UME) 11*32kB (UME) 10*64kB (UME) 5*128kB (UME) 0*256kB 1*512kB (U) 0*1024kB 1*2048kB (M) 0*4096kB = 5756kB 10.1.1.2: kern: info: [2022-01-25T10:40:16.883589566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB 10.1.1.2: kern: warning: [2022-01-25T10:40:16.884116566Z]: 12987 total pagecache pages 10.1.1.2: kern: warning: [2022-01-25T10:40:16.884376566Z]: 0 pages in swap cache 10.1.1.2: kern: warning: [2022-01-25T10:40:16.884612566Z]: Swap cache stats: add 0, delete 0, find 0/0 10.1.1.2: kern: warning: [2022-01-25T10:40:16.884936566Z]: Free swap = 0kB 10.1.1.2: kern: warning: [2022-01-25T10:40:16.885150566Z]: Total swap = 0kB 10.1.1.2: kern: warning: [2022-01-25T10:40:16.885366566Z]: 524156 pages RAM 10.1.1.2: kern: warning: [2022-01-25T10:40:16.885581566Z]: 0 pages 
HighMem/MovableOnly 10.1.1.2: kern: warning: [2022-01-25T10:40:16.885840566Z]: 21678 pages reserved 10.1.1.2: kern: info: [2022-01-25T10:40:16.886071566Z]: Tasks state (memory values in pages): 10.1.1.2: kern: info: [2022-01-25T10:40:16.886372566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name 10.1.1.2: kern: info: [2022-01-25T10:40:16.886940566Z]: [ 664] 0 664 188197 4105 208896 0 -999 containerd 10.1.1.2: kern: info: [2022-01-25T10:40:16.887481566Z]: [ 1121] 0 1121 189038 4902 217088 0 -500 containerd 10.1.1.2: kern: info: [2022-01-25T10:40:16.888013566Z]: [ 1148] 0 1148 177754 558 106496 0 -998 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:40:16.888564566Z]: [ 1150] 0 1150 177690 614 106496 0 -998 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:40:16.889110566Z]: [ 1192] 50 1192 193598 2894 258048 0 -998 apid 10.1.1.2: kern: info: [2022-01-25T10:40:16.889615566Z]: [ 1193] 51 1193 193598 2697 249856 0 -998 trustd 10.1.1.2: kern: info: [2022-01-25T10:40:16.890122566Z]: [ 1289] 0 1289 177754 624 106496 0 -499 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:40:16.890673566Z]: [ 1309] 0 1309 483992 6661 516096 0 -999 kubelet 10.1.1.2: kern: info: [2022-01-25T10:40:16.891200566Z]: [ 1620] 0 1620 362 31 36864 0 0 udevd 10.1.1.2: kern: info: [2022-01-25T10:40:16.891742566Z]: [ 1629] 0 1629 177690 496 98304 0 -499 containerd-shim 10.1.1.2: kern: info: [2022-01-25T10:40:16.892293566Z]: [ 1650] 60 1650 3228292 438734 3739648 0 -998 etcd 10.1.1.2: kern: info: [2022-01-25T10:40:16.892804566Z]: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=kubelet,mems_allowed=0,global_oom,task_memcg=/system/runtime,task=udevd,pid=1620,uid=0 10.1.1.2: kern: err: [2022-01-25T10:40:16.893620566Z]: Out of memory: Killed process 1620 (udevd) total-vm:1448kB, anon-rss:124kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:36kB oom_score_adj:0 10.1.1.2: kern: info: [2022-01-25T10:40:16.894449566Z]: oom_reaper: reaped process 1620 (udevd), now 
anon-rss:0kB, file-rss:0kB, shmem-rss:0kB 10.1.1.2: kern: warning: [2022-01-25T10:40:41.309111566Z]: init invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0 10.1.1.2: kern: warning: [2022-01-25T10:40:41.309735566Z]: CPU: 0 PID: 636 Comm: init Tainted: G T 5.15.6-talos #1 10.1.1.2: kern: warning: [2022-01-25T10:40:41.310230566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014 10.1.1.2: kern: warning: [2022-01-25T10:40:41.310933566Z]: Call Trace: 10.1.1.2: kern: warning: [2022-01-25T10:40:41.311131566Z]: 10.1.1.2: kern: warning: [2022-01-25T10:40:41.311321566Z]: dump_stack_lvl+0x46/0x5a 10.1.1.2: kern: warning: [2022-01-25T10:40:41.311595566Z]: dump_header+0x45/0x1ee 10.1.1.2: kern: warning: [2022-01-25T10:40:41.311849566Z]: oom_kill_process.cold+0xb/0x10 10.1.1.2: kern: warning: [2022-01-25T10:40:41.312138566Z]: out_of_memory+0x22f/0x4e0 10.1.1.2: kern: warning: [2022-01-25T10:40:41.312414566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90 10.1.1.2: kern: warning: [2022-01-25T10:40:41.312772566Z]: __alloc_pages+0x307/0x320 10.1.1.2: kern: warning: [2022-01-25T10:40:41.313083566Z]: pagecache_get_page+0x120/0x3c0 10.1.1.2: kern: warning: [2022-01-25T10:40:41.313374566Z]: filemap_fault+0x5d3/0x920 10.1.1.2: kern: warning: [2022-01-25T10:40:41.313697566Z]: ? filemap_map_pages+0x2b0/0x440 10.1.1.2: kern: warning: [2022-01-25T10:40:41.313989566Z]: __do_fault+0x2f/0x90 10.1.1.2: kern: warning: [2022-01-25T10:40:41.314239566Z]: __handle_mm_fault+0x664/0xc00 10.1.1.2: kern: warning: [2022-01-25T10:40:41.314527566Z]: handle_mm_fault+0xc7/0x290 10.1.1.2: kern: warning: [2022-01-25T10:40:41.314809566Z]: exc_page_fault+0x1de/0x780 10.1.1.2: kern: warning: [2022-01-25T10:40:41.315082566Z]: ? 
asm_exc_page_fault+0x8/0x30 10.1.1.2: kern: warning: [2022-01-25T10:40:41.315368566Z]: asm_exc_page_fault+0x1e/0x30 10.1.1.2: kern: warning: [2022-01-25T10:40:41.315650566Z]: RIP: 0033:0x466d5d 10.1.1.2: kern: warning: [2022-01-25T10:40:41.315878566Z]: Code: Unable to access opcode bytes at RIP 0x466d33. 10.1.1.2: kern: warning: [2022-01-25T10:40:41.316262566Z]: RSP: 002b:000000c000071f10 EFLAGS: 00010212 10.1.1.2: kern: warning: [2022-01-25T10:40:41.316602566Z]: RAX: 0000000000000000 RBX: 0000000000002710 RCX: 0000000000466d5d 10.1.1.2: kern: warning: [2022-01-25T10:40:41.317031566Z]: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000c000071f10 10.1.1.2: kern: warning: [2022-01-25T10:40:41.317509566Z]: RBP: 000000c000071f20 R08: 0000000000005b38 R09: 0000000004911458 10.1.1.2: kern: warning: [2022-01-25T10:40:41.317950566Z]: R10: 0000000000000000 R11: 0000000000000212 R12: 000000c000071950 10.1.1.2: kern: warning: [2022-01-25T10:40:41.318382566Z]: R13: 000000c00076e000 R14: 000000c0000004e0 R15: 00007f91dcfcd6dd 10.1.1.2: kern: warning: [2022-01-25T10:40:41.318839566Z]: 10.1.1.2: kern: warning: [2022-01-25T10:40:41.319032566Z]: Mem-Info: 10.1.1.2: kern: warning: [2022-01-25T10:40:41.319227566Z]: active_anon:12625 inactive_anon:470100 isolated_anon:0\x0a active_file:18 inactive_file:484 isolated_file:16\x0a unevictable:0 dirty:0 writeback:0\x0a slab_reclaimable:4964 slab_unreclaimable:4662\x0a mapped:51 shmem:12651 pagetables:1404 bounce:0\x0a kernel_misc_reclaimable:0\x0a free:3994 free_pcp:166 free_cma:0 10.1.1.2: kern: warning: [2022-01-25T10:40:41.321363566Z]: Node 0 active_anon:50500kB inactive_anon:1880400kB active_file:72kB inactive_file:2020kB unevictable:0kB isolated(anon):0kB isolated(file):64kB mapped:204kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3616kB pagetables:5616kB all_unreclaimable? 
yes 10.1.1.2: kern: warning: [2022-01-25T10:40:41.322793566Z]: Node 0 DMA free:7596kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7716kB active_file:0kB inactive_file:12kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB 10.1.1.2: kern: warning: [2022-01-25T10:40:41.324215566Z]: lowmem_reserve[]: 0 1890 1890 1890 10.1.1.2: kern: warning: [2022-01-25T10:40:41.324507566Z]: Node 0 DMA32 free:7876kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50500kB inactive_anon:1872684kB active_file:388kB inactive_file:1980kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:684kB local_pcp:252kB free_cma:0kB 10.1.1.2: kern: warning: [2022-01-25T10:40:41.326035566Z]: lowmem_reserve[]: 0 0 0 0 10.1.1.2: kern: warning: [2022-01-25T10:40:41.327177566Z]: Node 0 DMA: 1*4kB (U) 3*8kB (UM) 1*16kB (U) 0*32kB 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7596kB 10.1.1.2: kern: warning: [2022-01-25T10:40:41.327978566Z]: Node 0 DMA32: 105*4kB (UE) 120*8kB (UME) 79*16kB (UME) 30*32kB (UME) 17*64kB (UME) 7*128kB (UME) 1*256kB (M) 2*512kB (UM) 1*1024kB (M) 0*2048kB 0*4096kB = 7892kB 10.1.1.2: kern: info: [2022-01-25T10:40:41.328878566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB 10.1.1.2: kern: warning: [2022-01-25T10:40:41.329400566Z]: 13313 total pagecache pages 10.1.1.2: kern: warning: [2022-01-25T10:40:41.329659566Z]: 0 pages in swap cache 10.1.1.2: kern: warning: [2022-01-25T10:40:41.329895566Z]: Swap cache stats: add 0, delete 0, find 0/0 10.1.1.2: kern: warning: [2022-01-25T10:40:41.330223566Z]: Free swap = 0kB 10.1.1.2: kern: warning: [2022-01-25T10:40:41.330436566Z]: Total swap = 0kB 10.1.1.2: kern: warning: [2022-01-25T10:40:41.330655566Z]: 524156 pages RAM 10.1.1.2: kern: warning: [2022-01-25T10:40:41.330871566Z]: 0 
pages HighMem/MovableOnly
10.1.1.2: kern: warning: [2022-01-25T10:40:41.331130566Z]: 21678 pages reserved
10.1.1.2: kern: info: [2022-01-25T10:40:41.331368566Z]: Tasks state (memory values in pages):
10.1.1.2: kern: info: [2022-01-25T10:40:41.331691566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
10.1.1.2: kern: info: [2022-01-25T10:40:41.332238566Z]: [ 664] 0 664 188197 4105 208896 0 -999 containerd
10.1.1.2: kern: info: [2022-01-25T10:40:41.332779566Z]: [ 1121] 0 1121 189038 4902 217088 0 -500 containerd
10.1.1.2: kern: info: [2022-01-25T10:40:41.333324566Z]: [ 1148] 0 1148 177754 558 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:40:41.333895566Z]: [ 1150] 0 1150 177690 614 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:40:41.334466566Z]: [ 1192] 50 1192 193598 2894 258048 0 -998 apid
10.1.1.2: kern: info: [2022-01-25T10:40:41.334992566Z]: [ 1193] 51 1193 193598 2697 249856 0 -998 trustd
10.1.1.2: kern: info: [2022-01-25T10:40:41.335526566Z]: [ 1289] 0 1289 177754 624 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:40:41.336097566Z]: [ 1309] 0 1309 483992 6661 516096 0 -999 kubelet
10.1.1.2: kern: info: [2022-01-25T10:40:41.336628566Z]: [ 1629] 0 1629 177690 499 98304 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:40:41.337192566Z]: [ 1650] 60 1650 3228292 438737 3739648 0 -998 etcd
10.1.1.2: kern: info: [2022-01-25T10:40:41.337721566Z]: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=init,mems_allowed=0,global_oom,task_memcg=/system/etcd,task=etcd,pid=1650,uid=60
10.1.1.2: kern: err: [2022-01-25T10:40:41.338530566Z]: Out of memory: Killed process 1650 (etcd) total-vm:12913168kB, anon-rss:1754948kB, file-rss:0kB, shmem-rss:0kB, UID:60 pgtables:3652kB oom_score_adj:-998
10.1.1.2: kern: info: [2022-01-25T10:40:41.427771566Z]: oom_reaper: reaped process 1650 (etcd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
10.1.1.2: user: warning: [2022-01-25T10:40:41.876458566Z]: [talos] service[apid](Running): Health check failed: dial tcp 127.0.0.1:50000: i/o timeout
10.1.1.2: user: warning: [2022-01-25T10:40:42.044935566Z]: [talos] service[apid](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:40:42.070712566Z]: [talos] service[cri](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:40:42.081034566Z]: [talos] service[containerd](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:40:42.094682566Z]: [talos] service[udevd](Waiting): Error running Process(["/sbin/udevd" "--resolve-names=never"]), going to restart forever: signal: killed
10.1.1.2: user: warning: [2022-01-25T10:40:42.095537566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:40:42.124883566Z]: [talos] service[containerd](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:40:42.202658566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:40:42.500951566Z]: [talos] service[cri](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:40:43.264446566Z]: [talos] service[etcd](Waiting): Error running Containerd(etcd), going to restart forever: task "etcd" failed: exit code 137
10.1.1.2: daemon: info: [2022-01-25T10:40:47.122249566Z]: udevd[1704]: starting version 3.2.10
10.1.1.2: user: warning: [2022-01-25T10:40:47.123621566Z]: [talos] service[udevd](Running): Process Process(["/sbin/udevd" "--resolve-names=never"]) started with PID 1704
10.1.1.2: daemon: info: [2022-01-25T10:40:47.310820566Z]: udevd[1704]: starting eudev-3.2.10
10.1.1.2: user: warning: [2022-01-25T10:40:49.300690566Z]: [talos] service[etcd](Running): Started task etcd (PID 1731) for container etcd
10.1.1.2: user: warning: [2022-01-25T10:40:52.436744566Z]: [talos] service[kubelet](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:40:58.101161566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
yonggan@Yonggan-Aspire-A5 ~/N/P/talos> talosctl health
healthcheck error: rpc error: code = Unknown desc = error discovering nodes: Get "https://10.1.1.2:6443/api/v1/nodes": dial tcp 10.1.1.2:6443: connect: connection refused
yonggan@Yonggan-Aspire-A5 ~/N/P/talos [1]>
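For anyone triaging the same symptom: the decisive lines in the tail above are the kernel OOM killer terminating etcd (and the later `task "etcd" failed: exit code 137` restart), after which kube-apiserver on 10.1.1.2:6443 never answers, which is exactly why `talosctl health` fails. A minimal sketch (assuming Python 3.8+; the function name is mine, and the sample line is copied from the log above) for pulling OOM kills out of a saved `talosctl dmesg` capture:

```python
import re

# Sketch: scan captured `talosctl dmesg` output for OOM-killer victims.
# The regex matches the kernel's "Out of memory: Killed process ..." line.
OOM_RE = re.compile(r"Out of memory: Killed process (\d+) \((\S+)\)")

def find_oom_kills(lines):
    """Return (pid, process name) for every OOM kill found in the lines."""
    return [m.groups() for line in lines if (m := OOM_RE.search(line))]

# Sample line copied verbatim from the log tail above.
sample = [
    '10.1.1.2: kern: err: [2022-01-25T10:40:41.338530566Z]: '
    'Out of memory: Killed process 1650 (etcd) total-vm:12913168kB, '
    'anon-rss:1754948kB, file-rss:0kB, shmem-rss:0kB, UID:60 '
    'pgtables:3652kB oom_score_adj:-998',
]
print(find_oom_kills(sample))  # [('1650', 'etcd')]
```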
Reinstalling the cluster now...
Bug Report
Description
I have rebooted my whole Talos cluster and now it is broken; I can't even run
talosctl health
. The only thing that still works is talosctl dmesg
Logs
These are the logs from my first server node:
10.1.1.2: kern: info: [2022-01-25T10:19:41.226131566Z]: netconsole: network logging started
10.1.1.2: kern: info: [2022-01-25T10:19:41.226625566Z]: rdma_rxe: loaded
10.1.1.2: kern: notice: [2022-01-25T10:19:41.226925566Z]: cfg80211: Loading compiled-in X.509 certificates for regulatory database
10.1.1.2: kern: notice: [2022-01-25T10:19:41.227856566Z]: cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
10.1.1.2: kern: info: [2022-01-25T10:19:41.228262566Z]: ALSA device list:
10.1.1.2: kern: info: [2022-01-25T10:19:41.228481566Z]: No soundcards found.
10.1.1.2: kern: warning: [2022-01-25T10:19:41.228768566Z]: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 SUBSYSTEM=platform DEVICE=+platform:regulatory.0
10.1.1.2: kern: info: [2022-01-25T10:19:41.229295566Z]: cfg80211: failed to load regulatory.db
10.1.1.2: kern: debug: [2022-01-25T10:19:41.301831566Z]: ata2.01: NODEV after polling detection
10.1.1.2: kern: info: [2022-01-25T10:19:41.302034566Z]: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
10.1.1.2: kern: notice: [2022-01-25T10:19:41.303195566Z]: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 SUBSYSTEM=scsi DEVICE=+scsi:2:0:0:0
10.1.1.2: kern: info: [2022-01-25T10:19:41.304462566Z]: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray SUBSYSTEM=scsi DEVICE=+scsi:2:0:0:0
10.1.1.2: kern: info: [2022-01-25T10:19:41.304889566Z]: cdrom: Uniform CD-ROM driver Revision: 3.20
10.1.1.2: kern: debug: [2022-01-25T10:19:41.334030566Z]: sr 2:0:0:0: Attached scsi CD-ROM sr0 SUBSYSTEM=scsi DEVICE=+scsi:2:0:0:0
10.1.1.2: kern: notice: [2022-01-25T10:19:41.334185566Z]: sr 2:0:0:0: Attached scsi generic sg2 type 5 SUBSYSTEM=scsi DEVICE=+scsi:2:0:0:0
10.1.1.2: kern: info: [2022-01-25T10:19:41.334947566Z]: Freeing unused kernel image (initmem) memory: 2172K
10.1.1.2: kern: info: [2022-01-25T10:19:41.349539566Z]: Write protecting the kernel read-only data: 38912k
10.1.1.2: kern: info: [2022-01-25T10:19:41.350188566Z]: Freeing unused kernel image (text/rodata gap) memory: 2020K
10.1.1.2: kern: info: [2022-01-25T10:19:41.350755566Z]: Freeing unused kernel image (rodata/data gap) memory: 1344K
10.1.1.2: kern: info: [2022-01-25T10:19:41.352164566Z]: x86/mm: Checked W+X mappings: passed, no W+X pages found.
10.1.1.2: kern: info: [2022-01-25T10:19:41.352562566Z]: x86/mm: Checking user space page tables
10.1.1.2: kern: info: [2022-01-25T10:19:41.352955566Z]: x86/mm: Checked W+X mappings: passed, no W+X pages found.
10.1.1.2: kern: info: [2022-01-25T10:19:41.353339566Z]: Run /init as init process
10.1.1.2: kern: debug: [2022-01-25T10:19:41.353597566Z]: with arguments:
10.1.1.2: kern: debug: [2022-01-25T10:19:41.353597566Z]: /init
10.1.1.2: kern: debug: [2022-01-25T10:19:41.353597566Z]: with environment:
10.1.1.2: kern: debug: [2022-01-25T10:19:41.353598566Z]: HOME=/
10.1.1.2: kern: debug: [2022-01-25T10:19:41.353598566Z]: TERM=linux
10.1.1.2: kern: debug: [2022-01-25T10:19:41.353598566Z]: BOOT_IMAGE=/boot/vmlinuz
10.1.1.2: kern: debug: [2022-01-25T10:19:41.353599566Z]: pti=on
10.1.1.2: kern: info: [2022-01-25T10:19:41.785981566Z]: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
10.1.1.2: kern: notice: [2022-01-25T10:19:43.917518566Z]: random: crng init done
10.1.1.2: user: warning: [2022-01-25T10:19:43.919233566Z]: [talos] [initramfs] booting Talos v0.14.0
10.1.1.2: user: warning: [2022-01-25T10:19:43.919584566Z]: [talos] [initramfs] mounting the rootfs
10.1.1.2: kern: info: [2022-01-25T10:19:43.919955566Z]: loop0: detected capacity change from 0 to 100152
10.1.1.2: user: warning: [2022-01-25T10:19:43.949969566Z]: [talos] [initramfs] entering the rootfs
10.1.1.2: user: warning: [2022-01-25T10:19:43.950312566Z]: [talos] [initramfs] moving mounts to the new rootfs
10.1.1.2: user: warning: [2022-01-25T10:19:43.951313566Z]: [talos] [initramfs] changing working directory into /root
10.1.1.2: user: warning: [2022-01-25T10:19:43.951733566Z]: [talos] [initramfs] moving /root to /
10.1.1.2: user: warning: [2022-01-25T10:19:43.952063566Z]: [talos] [initramfs] changing root directory
10.1.1.2: user: warning: [2022-01-25T10:19:43.952403566Z]: [talos] [initramfs] cleaning up initramfs
10.1.1.2: user: warning: [2022-01-25T10:19:43.952871566Z]: [talos] [initramfs] executing /sbin/init
10.1.1.2: user: warning: [2022-01-25T10:19:48.343313566Z]: [talos] task setupLogger (1/1): done, 120.58µs
10.1.1.2: user: warning: [2022-01-25T10:19:48.343809566Z]: [talos] phase logger (1/7): done, 646.91µs
10.1.1.2: user: warning: [2022-01-25T10:19:48.344211566Z]: [talos] phase systemRequirements (2/7): 7 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:48.344732566Z]: [talos] task dropCapabilities (7/7): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.350342566Z]: [talos] task enforceKSPPRequirements (1/7): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.355636566Z]: [talos] task setupSystemDirectory (2/7): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.356049566Z]: [talos] task setupSystemDirectory (2/7): done, 5.542661ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.356445566Z]: [talos] task mountBPFFS (3/7): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.358832566Z]: [talos] task mountCgroups (4/7): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.359216566Z]: [talos] task mountCgroups (4/7): done, 8.426052ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.360881566Z]: [talos] task mountPseudoFilesystems (5/7): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.361271566Z]: [talos] task setRLimit (6/7): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.361594566Z]: [talos] task dropCapabilities (7/7): done, 10.889283ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.365324566Z]: [talos] task mountPseudoFilesystems (5/7): done, 14.529444ms
10.1.1.2: kern: info: [2022-01-25T10:19:48.371381566Z]: 8021q: adding VLAN 0 to HW filter on device eth0
10.1.1.2: user: warning: [2022-01-25T10:19:48.372087566Z]: [talos] task mountBPFFS (3/7): done, 15.219064ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.372459566Z]: [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["1.1.1.1", "8.8.8.8"]}
10.1.1.2: user: warning: [2022-01-25T10:19:48.373353566Z]: [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["1.1.1.1", "8.8.8.8"]}
10.1.1.2: user: warning: [2022-01-25T10:19:48.375520566Z]: [talos] task setRLimit (6/7): done, 24.718537ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.376105566Z]: [talos] task enforceKSPPRequirements (1/7): done, 18.746795ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.376751566Z]: [talos] phase systemRequirements (2/7): done, 32.538899ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.377316566Z]: [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on [::1]:53: read udp [::1]:33593->[::1]:53: read: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:19:48.380295566Z]: [talos] phase integrity (3/7): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:48.380821566Z]: [talos] task writeIMAPolicy (1/1): starting
10.1.1.2: kern: notice: [2022-01-25T10:19:48.381380566Z]: audit: type=1807 audit(1643105988.741:2): action=dont_measure fsmagic=0x9fa0 res=1
10.1.1.2: kern: info: [2022-01-25T10:19:48.381479566Z]: ima: policy update completed
10.1.1.2: kern: notice: [2022-01-25T10:19:48.381982566Z]: audit: type=1807 audit(1643105988.741:3): action=dont_measure fsmagic=0x62656572 res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.381983566Z]: audit: type=1807 audit(1643105988.741:4): action=dont_measure fsmagic=0x64626720 res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.381984566Z]: audit: type=1807 audit(1643105988.741:5): action=dont_measure fsmagic=0x1021994 res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.381984566Z]: audit: type=1807 audit(1643105988.741:6): action=dont_measure fsmagic=0x1cd1 res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.381985566Z]: audit: type=1807 audit(1643105988.741:7): action=dont_measure fsmagic=0x42494e4d res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.381985566Z]: audit: type=1807 audit(1643105988.741:8): action=dont_measure fsmagic=0x73636673 res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.385967566Z]: audit: type=1807 audit(1643105988.741:9): action=dont_measure fsmagic=0xf97cff8c res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.386529566Z]: audit: type=1807 audit(1643105988.741:10): action=dont_measure fsmagic=0x43415d53 res=1
10.1.1.2: kern: notice: [2022-01-25T10:19:48.387104566Z]: audit: type=1807 audit(1643105988.741:11): action=dont_measure fsmagic=0x27e0eb res=1
10.1.1.2: user: warning: [2022-01-25T10:19:48.389725566Z]: [talos] setting resolvers {"component": "controller-runtime", "controller": "network.ResolverSpecController", "resolvers": ["10.1.0.2"]}
10.1.1.2: user: warning: [2022-01-25T10:19:48.390618566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "network.RouteSpecController", "error": "1 error occurred:\n\t* error adding route: netlink receive: network is unreachable, message {Family:2 DstLength:0 SrcLength:0 Tos:0 Table:0 Protocol:3 Scope:0 Type:1 Flags:0 Attributes:{Dst: Src:10.1.1.2 Gateway:10.1.0.2 OutIface:4 Priority:1024 Table:254 Mark:0 Expires: Metrics: Multipath:[]}}\n\n"}
10.1.1.2: user: warning: [2022-01-25T10:19:48.393657566Z]: [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "talos-server-01", "domainname": "localdomain"}
10.1.1.2: user: warning: [2022-01-25T10:19:48.394990566Z]: [talos] setting hostname {"component": "controller-runtime", "controller": "network.HostnameSpecController", "hostname": "talos-server-01", "domainname": "localdomain"}
10.1.1.2: user: warning: [2022-01-25T10:19:48.396266566Z]: [talos] assigned address {"component": "controller-runtime", "controller": "network.AddressSpecController", "address": "10.1.1.2/16", "link": "eth0"}
10.1.1.2: user: warning: [2022-01-25T10:19:48.397522566Z]: [talos] task writeIMAPolicy (1/1): done, 16.713285ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.398011566Z]: [talos] phase integrity (3/7): done, 17.718755ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.398382566Z]: [talos] phase etc (4/7): 2 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:48.398713566Z]: [talos] task createOSReleaseFile (2/2): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.399079566Z]: [talos] task CreateSystemCgroups (1/2): starting
10.1.1.2: user: warning: [2022-01-25T10:19:48.399642566Z]: [talos] task createOSReleaseFile (2/2): done, 447.35µs
10.1.1.2: user: warning: [2022-01-25T10:19:48.400047566Z]: [talos] task CreateSystemCgroups (1/2): done, 1.17619ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.400441566Z]: [talos] phase etc (4/7): done, 2.058951ms
10.1.1.2: user: warning: [2022-01-25T10:19:48.400836566Z]: [talos] phase mountSystem (5/7): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:48.401210566Z]: [talos] task mountStatePartition (1/1): starting
10.1.1.2: kern: notice: [2022-01-25T10:19:48.535005566Z]: XFS (sda5): Mounting V5 Filesystem
10.1.1.2: user: warning: [2022-01-25T10:19:48.780302566Z]: [talos] created route {"component": "controller-runtime", "controller": "network.RouteSpecController", "destination": "default", "gateway": "10.1.0.2", "table": "main", "link": "eth0"}
10.1.1.2: user: warning: [2022-01-25T10:19:49.380786566Z]: [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on [::1]:53: read udp [::1]:35983->[::1]:53: read: connection refused"}
10.1.1.2: kern: notice: [2022-01-25T10:19:49.439936566Z]: XFS (sda5): Starting recovery (logdev: internal)
10.1.1.2: kern: notice: [2022-01-25T10:19:49.702449566Z]: XFS (sda5): Ending recovery (logdev: internal)
10.1.1.2: kern: warning: [2022-01-25T10:19:49.820829566Z]: xfs filesystem being mounted at /system/state supports timestamps until 2038 (0x7fffffff)
10.1.1.2: user: warning: [2022-01-25T10:19:49.821541566Z]: [talos] task mountStatePartition (1/1): done, 1.420328234s
10.1.1.2: user: warning: [2022-01-25T10:19:49.821985566Z]: [talos] phase mountSystem (5/7): done, 1.421149624s
10.1.1.2: user: warning: [2022-01-25T10:19:49.822347566Z]: [talos] phase config (6/7): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:49.822678566Z]: [talos] task loadConfig (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:49.891357566Z]: [talos] node identity established {"component": "controller-runtime", "controller": "cluster.NodeIdentityController", "node_id": "F8a3IfkpgzAalPsNeX1K7WEbN1CIMZm3kQprPFdoJfMB"}
10.1.1.2: user: warning: [2022-01-25T10:19:49.904900566Z]: [talos] task loadConfig (1/1): persistence is enabled, using existing config on disk
10.1.1.2: user: warning: [2022-01-25T10:19:49.921539566Z]: [talos] task loadConfig (1/1): done, 98.866337ms
10.1.1.2: user: warning: [2022-01-25T10:19:49.921992566Z]: [talos] phase config (6/7): done, 99.642948ms
10.1.1.2: user: warning: [2022-01-25T10:19:49.922403566Z]: [talos] phase unmountSystem (7/7): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:49.922819566Z]: [talos] task unmountStatePartition (1/1): starting
10.1.1.2: kern: notice: [2022-01-25T10:19:49.923372566Z]: XFS (sda5): Unmounting Filesystem
10.1.1.2: user: warning: [2022-01-25T10:19:50.308113566Z]: [talos] hello failed {"component": "controller-runtime", "controller": "cluster.DiscoveryServiceController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup discovery.talos.dev on [::1]:53: read udp [::1]:44232->[::1]:53: read: connection refused\"", "endpoint": "discovery.talos.dev:443"}
10.1.1.2: user: warning: [2022-01-25T10:19:50.382329566Z]: [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on [::1]:53: read udp [::1]:58172->[::1]:53: read: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:19:50.555798566Z]: [talos] task unmountStatePartition (1/1): done, 632.975816ms
10.1.1.2: user: warning: [2022-01-25T10:19:50.556428566Z]: [talos] phase unmountSystem (7/7): done, 634.025626ms
10.1.1.2: user: warning: [2022-01-25T10:19:50.556886566Z]: [talos] initialize sequence: done: 2.213735565s
10.1.1.2: user: warning: [2022-01-25T10:19:50.557247566Z]: [talos] install sequence: 0 phase(s)
10.1.1.2: user: warning: [2022-01-25T10:19:50.557568566Z]: [talos] install sequence: done: 321.13µs
10.1.1.2: user: warning: [2022-01-25T10:19:50.557914566Z]: [talos] boot sequence: 19 phase(s)
10.1.1.2: user: warning: [2022-01-25T10:19:50.558211566Z]: [talos] phase saveStateEncryptionConfig (1/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:50.558607566Z]: [talos] service[machined](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:19:50.559013566Z]: [talos] service[machined](Preparing): Creating service runner
10.1.1.2: user: warning: [2022-01-25T10:19:50.559445566Z]: [talos] service[machined](Running): Service started as goroutine
10.1.1.2: user: warning: [2022-01-25T10:19:50.559875566Z]: [talos] task SaveStateEncryptionConfig (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:50.560274566Z]: [talos] task SaveStateEncryptionConfig (1/1): done, 1.66384ms
10.1.1.2: user: warning: [2022-01-25T10:19:50.560707566Z]: [talos] phase saveStateEncryptionConfig (1/19): done, 2.495381ms
10.1.1.2: user: warning: [2022-01-25T10:19:50.561118566Z]: [talos] phase mountState (2/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:50.561458566Z]: [talos] task mountStatePartition (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:51.337066566Z]: [talos] hello failed {"component": "controller-runtime", "controller": "cluster.DiscoveryServiceController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup discovery.talos.dev on [::1]:53: read udp [::1]:34925->[::1]:53: read: connection refused\"", "endpoint": "discovery.talos.dev:443"}
10.1.1.2: user: warning: [2022-01-25T10:19:51.383969566Z]: [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on [::1]:53: read udp [::1]:55329->[::1]:53: read: connection refused"}
10.1.1.2: kern: notice: [2022-01-25T10:19:52.216962566Z]: XFS (sda5): Mounting V5 Filesystem
10.1.1.2: user: warning: [2022-01-25T10:19:52.386075566Z]: [talos] failed looking up "pool.ntp.org", ignored {"component": "controller-runtime", "controller": "time.SyncController", "error": "lookup pool.ntp.org on [::1]:53: read udp [::1]:57904->[::1]:53: read: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:19:52.695956566Z]: [talos] hello failed {"component": "controller-runtime", "controller": "cluster.DiscoveryServiceController", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp: lookup discovery.talos.dev on [::1]:53: read udp [::1]:34925->[::1]:53: read: connection refused\"", "endpoint": "discovery.talos.dev:443"}
10.1.1.2: kern: info: [2022-01-25T10:19:53.545146566Z]: XFS (sda5): Ending clean mount
10.1.1.2: kern: warning: [2022-01-25T10:19:53.560640566Z]: xfs filesystem being mounted at /system/state supports timestamps until 2038 (0x7fffffff)
10.1.1.2: user: warning: [2022-01-25T10:19:53.561320566Z]: [talos] task mountStatePartition (1/1): done, 2.999861713s
10.1.1.2: user: warning: [2022-01-25T10:19:53.561812566Z]: [talos] phase mountState (2/19): done, 3.000693373s
10.1.1.2: user: warning: [2022-01-25T10:19:53.562176566Z]: [talos] phase validateConfig (3/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:53.562541566Z]: [talos] task validateConfig (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:53.562905566Z]: [talos] task validateConfig (1/1): done, 376.69µs
10.1.1.2: user: warning: [2022-01-25T10:19:53.563268566Z]: [talos] phase validateConfig (3/19): done, 1.093611ms
10.1.1.2: user: warning: [2022-01-25T10:19:53.563638566Z]: [talos] phase saveConfig (4/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:53.563974566Z]: [talos] task saveConfig (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:53.570887566Z]: [talos] task saveConfig (1/1): done, 6.909622ms
10.1.1.2: user: warning: [2022-01-25T10:19:53.571294566Z]: [talos] phase saveConfig (4/19): done, 7.654922ms
10.1.1.2: user: warning: [2022-01-25T10:19:53.571654566Z]: [talos] phase env (5/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:53.571975566Z]: [talos] task setUserEnvVars (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:53.572312566Z]: [talos] task setUserEnvVars (1/1): done, 346.45µs
10.1.1.2: user: warning: [2022-01-25T10:19:53.572680566Z]: [talos] phase env (5/19): done, 1.02634ms
10.1.1.2: user: warning: [2022-01-25T10:19:53.572996566Z]: [talos] phase containerd (6/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:53.573331566Z]: [talos] task startContainerd (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:53.573700566Z]: [talos] service[containerd](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:19:53.574096566Z]: [talos] service[containerd](Preparing): Creating service runner
10.1.1.2: user: warning: [2022-01-25T10:19:53.632127566Z]: [talos] adjusting time (slew) by 22.96771ms via 162.159.200.1, state TIME_OK, status STA_PLL | STA_NANO {"component": "controller-runtime", "controller": "time.SyncController"}
10.1.1.2: user: warning: [2022-01-25T10:19:54.505931566Z]: [talos] service[containerd](Running): Process Process(["/bin/containerd" "--address" "/system/run/containerd/containerd.sock" "--state" "/system/run/containerd" "--root" "/system/var/lib/containerd"]) started with PID 664
10.1.1.2: user: warning: [2022-01-25T10:19:54.574536566Z]: [talos] service[containerd](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:19:54.575027566Z]: [talos] task startContainerd (1/1): done, 1.003044355s
10.1.1.2: user: warning: [2022-01-25T10:19:54.575415566Z]: [talos] phase containerd (6/19): done, 1.003764852s
10.1.1.2: user: warning: [2022-01-25T10:19:54.575781566Z]: [talos] phase ephemeral (7/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:54.576124566Z]: [talos] task mountEphemeralPartition (1/1): starting
10.1.1.2: kern: notice: [2022-01-25T10:19:54.810030566Z]: XFS (sda6): Mounting V5 Filesystem
10.1.1.2: kern: notice: [2022-01-25T10:19:56.099828566Z]: XFS (sda6): Starting recovery (logdev: internal)
10.1.1.2: kern: notice: [2022-01-25T10:19:57.184233566Z]: XFS (sda6): Ending recovery (logdev: internal)
10.1.1.2: kern: warning: [2022-01-25T10:19:57.755585566Z]: xfs filesystem being mounted at /var supports timestamps until 2038 (0x7fffffff)
10.1.1.2: user: warning: [2022-01-25T10:19:57.769863566Z]: [talos] task mountEphemeralPartition (1/1): done, 3.197767572s
10.1.1.2: user: warning: [2022-01-25T10:19:57.770298566Z]: [talos] phase ephemeral (7/19): done, 3.19854255s
10.1.1.2: user: warning: [2022-01-25T10:19:57.770654566Z]: [talos] phase verifyInstall (8/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:57.771019566Z]: [talos] task verifyInstallation (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:57.771384566Z]: [talos] task verifyInstallation (1/1): done, 374.525µs
10.1.1.2: user: warning: [2022-01-25T10:19:57.771766566Z]: [talos] phase verifyInstall (8/19): done, 1.114205ms
10.1.1.2: user: warning: [2022-01-25T10:19:57.772176566Z]: [talos] phase var (9/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:57.772495566Z]: [talos] task setupVarDirectory (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:58.466419566Z]: [talos] task setupVarDirectory (1/1): done, 694.689434ms
10.1.1.2: user: warning: [2022-01-25T10:19:58.466835566Z]: [talos] phase var (9/19): done, 695.428202ms
10.1.1.2: user: warning: [2022-01-25T10:19:58.467170566Z]: [talos] phase overlay (10/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:58.467515566Z]: [talos] task mountOverlayFilesystems (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:58.916696566Z]: [talos] task mountOverlayFilesystems (1/1): done, 449.661764ms
10.1.1.2: user: warning: [2022-01-25T10:19:58.917142566Z]: [talos] phase overlay (10/19): done, 450.451337ms
10.1.1.2: user: warning: [2022-01-25T10:19:58.917500566Z]: [talos] phase udevSetup (11/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:58.917844566Z]: [talos] task writeUdevRules (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:58.918193566Z]: [talos] task writeUdevRules (1/1): done, 361.275µs
10.1.1.2: user: warning: [2022-01-25T10:19:58.918580566Z]: [talos] phase udevSetup (11/19): done, 1.082414ms
10.1.1.2: user: warning: [2022-01-25T10:19:58.918944566Z]: [talos] phase udevd (12/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:19:58.919262566Z]: [talos] task startUdevd (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:19:58.919592566Z]: [talos] service[udevd](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:19:59.121342566Z]: [talos] service[udevd](Preparing): Creating service runner
10.1.1.2: daemon: info: [2022-01-25T10:19:59.129229566Z]: udevd[684]: starting version 3.2.10
10.1.1.2: user: warning: [2022-01-25T10:19:59.129814566Z]: [talos] service[udevd](Running): Process Process(["/sbin/udevd" "--resolve-names=never"]) started with PID 684
10.1.1.2: daemon: info: [2022-01-25T10:19:59.140003566Z]: udevd[684]: starting eudev-3.2.10
10.1.1.2: user: warning: [2022-01-25T10:19:59.144172566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:19:59.151084566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:19:59.890780566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:00.059311566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:00.511734566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:01.192809566Z]: [talos] service[udevd](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:20:01.193274566Z]: [talos] task startUdevd (1/1): done, 2.276244216s
10.1.1.2: user: warning: [2022-01-25T10:20:01.193642566Z]: [talos] phase udevd (12/19): done, 2.27692855s
10.1.1.2: user: warning: [2022-01-25T10:20:01.193983566Z]: [talos] phase userDisks (13/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:20:01.194334566Z]: [talos] task mountUserDisks (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:20:01.212638566Z]: [talos] task mountUserDisks (1/1): skipping setup of "/dev/sdb", found existing partitions
10.1.1.2: kern: notice: [2022-01-25T10:20:01.242360566Z]: XFS (sdb1): Mounting V5 Filesystem
10.1.1.2: kern: info: [2022-01-25T10:20:01.509464566Z]: XFS (sdb1): Ending clean mount
10.1.1.2: kern: warning: [2022-01-25T10:20:01.514186566Z]: xfs filesystem being mounted at /var/mnt/sdb supports timestamps until 2038 (0x7fffffff)
10.1.1.2: user: warning: [2022-01-25T10:20:01.514812566Z]: [talos] task mountUserDisks (1/1): done, 320.771294ms
10.1.1.2: user: warning: [2022-01-25T10:20:01.515246566Z]: [talos] phase userDisks (13/19): done, 321.5554ms
10.1.1.2: user: warning: [2022-01-25T10:20:01.515640566Z]: [talos] phase userSetup (14/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:20:01.516049566Z]: [talos] task writeUserFiles (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:20:01.546595566Z]: [talos] task writeUserFiles (1/1): done, 30.57646ms
10.1.1.2: user: warning: [2022-01-25T10:20:01.547127566Z]: [talos] phase userSetup (14/19): done, 31.515748ms
10.1.1.2: user: warning: [2022-01-25T10:20:01.547634566Z]: [talos] phase lvm (15/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:20:01.548086566Z]: [talos] task activateLogicalVolumes (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:20:01.716876566Z]: [talos] task activateLogicalVolumes (1/1): done, 168.942358ms
10.1.1.2: user: warning: [2022-01-25T10:20:01.717336566Z]: [talos] phase lvm (15/19): done, 169.854095ms
10.1.1.2: user: warning: [2022-01-25T10:20:01.717673566Z]: [talos] phase startEverything (16/19): 1 tasks(s)
10.1.1.2: user: warning: [2022-01-25T10:20:01.718050566Z]: [talos] task startAllServices (1/1): starting
10.1.1.2: user: warning: [2022-01-25T10:20:01.718408566Z]: [talos] task startAllServices (1/1): waiting for 7 services
10.1.1.2: user: warning: [2022-01-25T10:20:01.718807566Z]: [talos] service[apid](Waiting): Waiting for service "containerd" to be "up", api certificates
10.1.1.2: user: warning: [2022-01-25T10:20:01.719407566Z]: [talos] service[etcd](Waiting): Waiting for service "cri" to be "up", time sync, network
10.1.1.2: user: warning: [2022-01-25T10:20:01.719989566Z]: [talos] service[cri](Waiting): Waiting for network
10.1.1.2: user: warning: [2022-01-25T10:20:01.720885566Z]: [talos] service[cri](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:20:01.721268566Z]: [talos] service[trustd](Waiting): Waiting for service "containerd" to be "up", time sync, network
10.1.1.2: user: warning: [2022-01-25T10:20:01.721855566Z]: [talos] service[apid](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:20:01.722219566Z]: [talos] service[cri](Preparing): Creating service runner
10.1.1.2: user: warning: [2022-01-25T10:20:01.722612566Z]: [talos] service[trustd](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:20:01.723029566Z]: [talos] service[trustd](Preparing): Creating service runner
10.1.1.2: user: warning: [2022-01-25T10:20:01.725309566Z]: [talos] service[apid](Preparing): Creating service runner
10.1.1.2: user: warning: [2022-01-25T10:20:01.725964566Z]: [talos] service[cri](Running): Process Process(["/bin/containerd" "--address" "/run/containerd/containerd.sock" "--config" "/etc/cri/containerd.toml"]) started with PID 1121
10.1.1.2: user: warning: [2022-01-25T10:20:01.785689566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:02.112296566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:02.719245566Z]: [talos] service[etcd](Waiting): Waiting for service "cri" to be "up"
10.1.1.2: user: warning: [2022-01-25T10:20:02.901476566Z]: [talos] service[apid](Running): Started task apid (PID 1192) for container apid
10.1.1.2: user: warning: [2022-01-25T10:20:02.903341566Z]: [talos] service[trustd](Running): Started task trustd (PID 1193) for container trustd
10.1.1.2: user: warning: [2022-01-25T10:20:03.223349566Z]: [talos] service[cri](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:20:03.381397566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:05.850058566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:06.580513566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:07.720379566Z]: [talos] service[cri](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:20:07.720873566Z]: [talos] service[etcd](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:20:07.721891566Z]: [talos] service[kubelet](Waiting): Waiting for service "cri" to be "up", time sync, network
10.1.1.2: user: warning: [2022-01-25T10:20:07.722514566Z]: [talos] service[kubelet](Preparing): Running pre state
10.1.1.2: user: warning: [2022-01-25T10:20:07.734080566Z]: [talos] service[apid](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:20:07.735672566Z]: [talos] service[trustd](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:20:10.607320566Z]: [talos] service[kubelet](Preparing): Creating service runner
10.1.1.2: user: warning: [2022-01-25T10:20:10.903706566Z]: [talos] service[etcd](Preparing): Creating service runner
10.1.1.2: user: warning: [2022-01-25T10:20:11.089496566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:14.685221566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:18.888629566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:18.984498566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:28.714982566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:29.332589566Z]: [talos] service[etcd](Running): Started task etcd (PID 1267) for container etcd
10.1.1.2: user: warning: [2022-01-25T10:20:33.348735566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:36.901840566Z]: [talos] service[etcd](Running): Health check failed: error building etcd client: context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:20:39.262402566Z]: [talos] service[kubelet](Running): Started task kubelet (PID 1309) for container kubelet
10.1.1.2: user: warning: [2022-01-25T10:20:48.338854566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: Get \"https://127.0.0.1:10250/pods/?timeout=30s\": dial tcp 127.0.0.1:10250: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:20:52.111003566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:21:04.011224566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: Get \"https://127.0.0.1:10250/pods/?timeout=30s\": dial tcp 127.0.0.1:10250: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:21:07.577131566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:21:19.810466566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: Get \"https://127.0.0.1:10250/pods/?timeout=30s\": dial tcp 127.0.0.1:10250: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:21:20.611041566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:21:35.483717566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: Get \"https://127.0.0.1:10250/pods/?timeout=30s\": dial tcp 127.0.0.1:10250: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:21:47.595484566Z]: [talos] service[kubelet](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:21:51.813290566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:21:53.879826566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:22:08.635309566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:22:11.475805566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:22:27.433074566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:22:28.848776566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:22:52.114245566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:23:05.518161566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:23:18.825525566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:23:19.775689566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:23:49.107280566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:23:50.667765566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:24:26.849208566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:24:28.882749566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:24:32.958532566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: kern: warning: [2022-01-25T10:25:13.781090566Z]: init invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
10.1.1.2: kern: warning: [2022-01-25T10:25:13.781665566Z]: CPU: 0 PID: 636 Comm: init Tainted: G T 5.15.6-talos #1
10.1.1.2: kern: warning: [2022-01-25T10:25:13.782113566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
10.1.1.2: kern: warning: [2022-01-25T10:25:13.782733566Z]: Call Trace:
10.1.1.2: kern: warning: [2022-01-25T10:25:13.782925566Z]: <TASK>
10.1.1.2: kern: warning: [2022-01-25T10:25:13.783096566Z]: dump_stack_lvl+0x46/0x5a
10.1.1.2: kern: warning: [2022-01-25T10:25:13.783335566Z]: dump_header+0x45/0x1ee
10.1.1.2: kern: warning: [2022-01-25T10:25:13.783562566Z]: oom_kill_process.cold+0xb/0x10
10.1.1.2: kern: warning: [2022-01-25T10:25:13.783820566Z]: out_of_memory+0x22f/0x4e0
10.1.1.2: kern: warning: [2022-01-25T10:25:13.784062566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90
10.1.1.2: kern: warning: [2022-01-25T10:25:13.784383566Z]: __alloc_pages+0x307/0x320
10.1.1.2: kern: warning: [2022-01-25T10:25:13.784652566Z]: pagecache_get_page+0x120/0x3c0
10.1.1.2: kern: warning: [2022-01-25T10:25:13.784910566Z]: filemap_fault+0x5d3/0x920
10.1.1.2: kern: warning: [2022-01-25T10:25:13.785155566Z]: ? filemap_map_pages+0x2b0/0x440
10.1.1.2: kern: warning: [2022-01-25T10:25:13.785417566Z]: __do_fault+0x2f/0x90
10.1.1.2: kern: warning: [2022-01-25T10:25:13.785637566Z]: __handle_mm_fault+0x664/0xc00
10.1.1.2: kern: warning: [2022-01-25T10:25:13.785891566Z]: handle_mm_fault+0xc7/0x290
10.1.1.2: kern: warning: [2022-01-25T10:25:13.786139566Z]: exc_page_fault+0x1de/0x780
10.1.1.2: kern: warning: [2022-01-25T10:25:13.786381566Z]: ? asm_exc_page_fault+0x8/0x30
10.1.1.2: kern: warning: [2022-01-25T10:25:13.786635566Z]: asm_exc_page_fault+0x1e/0x30
10.1.1.2: kern: warning: [2022-01-25T10:25:13.786885566Z]: RIP: 0033:0x466d5d
10.1.1.2: kern: warning: [2022-01-25T10:25:13.787108566Z]: Code: Unable to access opcode bytes at RIP 0x466d33.
10.1.1.2: kern: warning: [2022-01-25T10:25:13.787442566Z]: RSP: 002b:000000c000071f10 EFLAGS: 00010212
10.1.1.2: kern: warning: [2022-01-25T10:25:13.787744566Z]: RAX: 0000000000000000 RBX: 0000000000002710 RCX: 0000000000466d5d
10.1.1.2: kern: warning: [2022-01-25T10:25:13.788131566Z]: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000c000071f10
10.1.1.2: kern: warning: [2022-01-25T10:25:13.788517566Z]: RBP: 000000c000071f20 R08: 00000000000023a3 R09: 0000000000000000
10.1.1.2: kern: warning: [2022-01-25T10:25:13.788904566Z]: R10: 0000000000000000 R11: 0000000000000212 R12: 000000c000071950
10.1.1.2: kern: warning: [2022-01-25T10:25:13.789295566Z]: R13: 000000c00076e000 R14: 000000c0000004e0 R15: 00007f91dcfcd6dd
10.1.1.2: kern: warning: [2022-01-25T10:25:13.789682566Z]: </TASK>
10.1.1.2: kern: warning: [2022-01-25T10:25:13.789860566Z]: Mem-Info:
10.1.1.2: kern: warning: [2022-01-25T10:25:13.790034566Z]: active_anon:12625 inactive_anon:470303 isolated_anon:0 active_file:0 inactive_file:293 isolated_file:2 unevictable:0 dirty:0 writeback:0 slab_reclaimable:5057 slab_unreclaimable:4624 mapped:35 shmem:12651 pagetables:1405 bounce:0 kernel_misc_reclaimable:0 free:3220 free_pcp:891 free_cma:0
10.1.1.2: kern: warning: [2022-01-25T10:25:13.791966566Z]: Node 0 active_anon:50500kB inactive_anon:1881212kB active_file:0kB inactive_file:1172kB unevictable:0kB isolated(anon):0kB isolated(file):124kB mapped:140kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3568kB pagetables:5620kB all_unreclaimable? yes
10.1.1.2: kern: warning: [2022-01-25T10:25:13.793264566Z]: Node 0 DMA free:7592kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7744kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:25:13.794556566Z]: lowmem_reserve[]: 0 1890 1890 1890
10.1.1.2: kern: warning: [2022-01-25T10:25:13.794872566Z]: Node 0 DMA32 free:5288kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50500kB inactive_anon:1873468kB active_file:476kB inactive_file:1172kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:3564kB local_pcp:3496kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:25:13.796310566Z]: lowmem_reserve[]: 0 0 0 0
10.1.1.2: kern: warning: [2022-01-25T10:25:13.796543566Z]: Node 0 DMA: 0*4kB 1*8kB (U) 0*16kB 1*32kB (U) 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7592kB
10.1.1.2: kern: warning: [2022-01-25T10:25:13.797261566Z]: Node 0 DMA32: 71*4kB (UME) 94*8kB (UE) 85*16kB (UE) 25*32kB (UME) 16*64kB (UME) 3*128kB (M) 1*256kB (M) 1*512kB (M) 0*1024kB 0*2048kB 0*4096kB = 5372kB
10.1.1.2: kern: info: [2022-01-25T10:25:13.798052566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
10.1.1.2: kern: warning: [2022-01-25T10:25:13.798530566Z]: 12965 total pagecache pages
10.1.1.2: kern: warning: [2022-01-25T10:25:13.798769566Z]: 0 pages in swap cache
10.1.1.2: kern: warning: [2022-01-25T10:25:13.798993566Z]: Swap cache stats: add 0, delete 0, find 0/0
10.1.1.2: kern: warning: [2022-01-25T10:25:13.799295566Z]: Free swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:25:13.799494566Z]: Total swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:25:13.799707566Z]: 524156 pages RAM
10.1.1.2: kern: warning: [2022-01-25T10:25:13.799905566Z]: 0 pages HighMem/MovableOnly
10.1.1.2: kern: warning: [2022-01-25T10:25:13.800161566Z]: 21678 pages reserved
10.1.1.2: kern: info: [2022-01-25T10:25:13.800374566Z]: Tasks state (memory values in pages):
10.1.1.2: kern: info: [2022-01-25T10:25:13.800660566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
10.1.1.2: kern: info: [2022-01-25T10:25:13.801153566Z]: [ 664] 0 664 188197 3989 208896 0 -999 containerd
10.1.1.2: kern: info: [2022-01-25T10:25:13.801642566Z]: [ 684] 0 684 368 38 45056 0 0 udevd
10.1.1.2: kern: info: [2022-01-25T10:25:13.802112566Z]: [ 1121] 0 1121 189038 5105 217088 0 -500 containerd
10.1.1.2: kern: info: [2022-01-25T10:25:13.802595566Z]: [ 1148] 0 1148 177754 450 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:25:13.803105566Z]: [ 1150] 0 1150 177690 498 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:25:13.803621566Z]: [ 1192] 50 1192 193598 3155 253952 0 -998 apid
10.1.1.2: kern: info: [2022-01-25T10:25:13.804099566Z]: [ 1193] 51 1193 193598 2717 249856 0 -998 trustd
10.1.1.2: kern: info: [2022-01-25T10:25:13.804569566Z]: [ 1246] 0 1246 177754 483 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:25:13.805084566Z]: [ 1267] 60 1267 3228276 439573 3747840 0 -998 etcd
10.1.1.2: kern: info: [2022-01-25T10:25:13.805545566Z]: [ 1289] 0 1289 177754 478 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:25:13.806051566Z]: [ 1309] 0 1309 447126 6055 483328 0 -999 kubelet
10.1.1.2: kern: info: [2022-01-25T10:25:13.806524566Z]: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=init,mems_allowed=0,global_oom,task_memcg=/system/runtime,task=udevd,pid=684,uid=0
10.1.1.2: kern: err: [2022-01-25T10:25:13.807255566Z]: Out of memory: Killed process 684 (udevd) total-vm:1472kB, anon-rss:152kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:44kB oom_score_adj:0
10.1.1.2: kern: info: [2022-01-25T10:25:13.808013566Z]: oom_reaper: reaped process 684 (udevd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
10.1.1.2: kern: warning: [2022-01-25T10:25:23.119311566Z]: init invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
10.1.1.2: kern: warning: [2022-01-25T10:25:23.119917566Z]: CPU: 1 PID: 636 Comm: init Tainted: G T 5.15.6-talos #1
10.1.1.2: kern: warning: [2022-01-25T10:25:23.120396566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
10.1.1.2: kern: warning: [2022-01-25T10:25:23.121069566Z]: Call Trace:
10.1.1.2: kern: warning: [2022-01-25T10:25:23.121264566Z]: <TASK>
10.1.1.2: kern: warning: [2022-01-25T10:25:23.121440566Z]: dump_stack_lvl+0x46/0x5a
10.1.1.2: kern: warning: [2022-01-25T10:25:23.121696566Z]: dump_header+0x45/0x1ee
10.1.1.2: kern: warning: [2022-01-25T10:25:23.121941566Z]: oom_kill_process.cold+0xb/0x10
10.1.1.2: kern: warning: [2022-01-25T10:25:23.122215566Z]: out_of_memory+0x22f/0x4e0
10.1.1.2: kern: warning: [2022-01-25T10:25:23.122474566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90
10.1.1.2: kern: warning: [2022-01-25T10:25:23.122818566Z]: __alloc_pages+0x307/0x320
10.1.1.2: kern: warning: [2022-01-25T10:25:23.123082566Z]: pagecache_get_page+0x120/0x3c0
10.1.1.2: kern: warning: [2022-01-25T10:25:23.123358566Z]: filemap_fault+0x5d3/0x920
10.1.1.2: kern: warning: [2022-01-25T10:25:23.123637566Z]: ? filemap_map_pages+0x2b0/0x440
10.1.1.2: kern: warning: [2022-01-25T10:25:23.123923566Z]: __do_fault+0x2f/0x90
10.1.1.2: kern: warning: [2022-01-25T10:25:23.124160566Z]: __handle_mm_fault+0x664/0xc00
10.1.1.2: kern: warning: [2022-01-25T10:25:23.124430566Z]: handle_mm_fault+0xc7/0x290
10.1.1.2: kern: warning: [2022-01-25T10:25:23.124710566Z]: exc_page_fault+0x1de/0x780
10.1.1.2: kern: warning: [2022-01-25T10:25:23.124974566Z]: ? asm_exc_page_fault+0x8/0x30
10.1.1.2: kern: warning: [2022-01-25T10:25:23.126077566Z]: asm_exc_page_fault+0x1e/0x30
10.1.1.2: kern: warning: [2022-01-25T10:25:23.126348566Z]: RIP: 0033:0x466d5d
10.1.1.2: kern: warning: [2022-01-25T10:25:23.126575566Z]: Code: Unable to access opcode bytes at RIP 0x466d33.
10.1.1.2: kern: warning: [2022-01-25T10:25:23.126944566Z]: RSP: 002b:000000c000071f10 EFLAGS: 00010212
10.1.1.2: kern: warning: [2022-01-25T10:25:23.127268566Z]: RAX: 0000000000000000 RBX: 0000000000002710 RCX: 0000000000466d5d
10.1.1.2: kern: warning: [2022-01-25T10:25:23.127694566Z]: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000c000071f10
10.1.1.2: kern: warning: [2022-01-25T10:25:23.128111566Z]: RBP: 000000c000071f20 R08: 00000000000023a9 R09: 0000000004911458
10.1.1.2: kern: warning: [2022-01-25T10:25:23.128528566Z]: R10: 0000000000000000 R11: 0000000000000212 R12: 000000c000071950
10.1.1.2: kern: warning: [2022-01-25T10:25:23.128955566Z]: R13: 000000c00076e000 R14: 000000c0000004e0 R15: 00007f91dcfcd6dd
10.1.1.2: kern: warning: [2022-01-25T10:25:23.129373566Z]: </TASK>
10.1.1.2: kern: warning: [2022-01-25T10:25:23.129564566Z]: Mem-Info:
10.1.1.2: kern: warning: [2022-01-25T10:25:23.129748566Z]: active_anon:12624 inactive_anon:470266 isolated_anon:0 active_file:11 inactive_file:233 isolated_file:0 unevictable:0 dirty:0 writeback:0 slab_reclaimable:5056 slab_unreclaimable:4615 mapped:22 shmem:12651 pagetables:1397 bounce:0 kernel_misc_reclaimable:0 free:3252 free_pcp:924 free_cma:0
10.1.1.2: kern: warning: [2022-01-25T10:25:23.131765566Z]: Node 0 active_anon:50496kB inactive_anon:1881064kB active_file:44kB inactive_file:808kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:88kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3568kB pagetables:5588kB all_unreclaimable? no
10.1.1.2: kern: warning: [2022-01-25T10:25:23.133155566Z]: Node 0 DMA free:7592kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7744kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:25:23.134561566Z]: lowmem_reserve[]: 0 1890 1890 1890
10.1.1.2: kern: warning: [2022-01-25T10:25:23.134852566Z]: Node 0 DMA32 free:5416kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50496kB inactive_anon:1873320kB active_file:184kB inactive_file:1288kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:3848kB local_pcp:3576kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:25:23.136365566Z]: lowmem_reserve[]: 0 0 0 0
10.1.1.2: kern: warning: [2022-01-25T10:25:23.136624566Z]: Node 0 DMA: 0*4kB 1*8kB (U) 0*16kB 1*32kB (U) 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7592kB
10.1.1.2: kern: warning: [2022-01-25T10:25:23.137578566Z]: Node 0 DMA32: 67*4kB (UME) 108*8kB (UME) 75*16kB (UME) 29*32kB (UME) 20*64kB (UME) 3*128kB (M) 1*256kB (M) 1*512kB (M) 0*1024kB 0*2048kB 0*4096kB = 5692kB
10.1.1.2: kern: info: [2022-01-25T10:25:23.138740566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
10.1.1.2: kern: warning: [2022-01-25T10:25:23.139397566Z]: 12887 total pagecache pages
10.1.1.2: kern: warning: [2022-01-25T10:25:23.139726566Z]: 0 pages in swap cache
10.1.1.2: kern: warning: [2022-01-25T10:25:23.140024566Z]: Swap cache stats: add 0, delete 0, find 0/0
10.1.1.2: kern: warning: [2022-01-25T10:25:23.140446566Z]: Free swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:25:23.140736566Z]: Total swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:25:23.141014566Z]: 524156 pages RAM
10.1.1.2: kern: warning: [2022-01-25T10:25:23.141294566Z]: 0 pages HighMem/MovableOnly
10.1.1.2: kern: warning: [2022-01-25T10:25:23.141628566Z]: 21678 pages reserved
10.1.1.2: kern: info: [2022-01-25T10:25:23.141927566Z]: Tasks state (memory values in pages):
10.1.1.2: kern: info: [2022-01-25T10:25:23.142319566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
10.1.1.2: kern: info: [2022-01-25T10:25:23.143025566Z]: [ 664] 0 664 188197 3989 208896 0 -999 containerd
10.1.1.2: kern: info: [2022-01-25T10:25:23.143710566Z]: [ 1121] 0 1121 189038 5105 217088 0 -500 containerd
10.1.1.2: kern: info: [2022-01-25T10:25:23.144396566Z]: [ 1148] 0 1148 177754 450 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:25:23.145116566Z]: [ 1150] 0 1150 177690 498 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:25:23.145829566Z]: [ 1192] 50 1192 193598 3155 253952 0 -998 apid
10.1.1.2: kern: info: [2022-01-25T10:25:23.146472566Z]: [ 1193] 51 1193 193598 2718 249856 0 -998 trustd
10.1.1.2: kern: info: [2022-01-25T10:25:23.147138566Z]: [ 1246] 0 1246 177754 483 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:25:23.147840566Z]: [ 1267] 60 1267 3228276 439573 3747840 0 -998 etcd
10.1.1.2: kern: info: [2022-01-25T10:25:23.148495566Z]: [ 1289] 0 1289 177754 478 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:25:23.149203566Z]: [ 1309] 0 1309 447126 6055 483328 0 -999 kubelet
10.1.1.2: kern: info: [2022-01-25T10:25:23.149871566Z]: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=init,mems_allowed=0,global_oom,task_memcg=/system/etcd,task=etcd,pid=1267,uid=60
10.1.1.2: kern: err: [2022-01-25T10:25:23.150871566Z]: Out of memory: Killed process 1267 (etcd) total-vm:12913104kB, anon-rss:1758292kB, file-rss:0kB, shmem-rss:0kB, UID:60 pgtables:3660kB oom_score_adj:-998
10.1.1.2: kern: info: [2022-01-25T10:25:23.213190566Z]: oom_reaper: reaped process 1267 (etcd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
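The OOM reports above can be sanity-checked arithmetically: the kernel task table records memory in 4 KiB pages, and etcd's resident set alone nearly fills the node. A minimal sketch, using only values copied from the log lines above:

```python
PAGE_KIB = 4  # kernel task-table "rss" values are counted in 4 KiB pages

total_ram_pages = 524156  # "524156 pages RAM"
reserved_pages = 21678    # "21678 pages reserved"
etcd_rss_pages = 439573   # rss column for pid 1267 (etcd)

# Usable RAM after kernel reservations, and etcd's resident set, in MiB.
usable_mib = (total_ram_pages - reserved_pages) * PAGE_KIB / 1024
etcd_mib = etcd_rss_pages * PAGE_KIB / 1024

print(f"usable RAM: {usable_mib:.0f} MiB")  # ~1963 MiB, i.e. a 2 GiB guest
print(f"etcd RSS:   {etcd_mib:.0f} MiB")    # ~1717 MiB, matching "anon-rss:1758292kB"
```

So with no swap configured ("Total swap = 0kB"), etcd alone consumes roughly 85% of the guest's memory, which is consistent with the oom-killer firing.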
10.1.1.2: user: warning: [2022-01-25T10:25:23.842676566Z]: [talos] service[kubelet](Running): Health check failed: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:25:23.883680566Z]: [talos] service[apid](Running): Health check failed: dial tcp 127.0.0.1:50000: i/o timeout
10.1.1.2: user: warning: [2022-01-25T10:25:23.955885566Z]: [talos] service[apid](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:25:23.967203566Z]: [talos] service[udevd](Waiting): Error running Process(["/sbin/udevd" "--resolve-names=never"]), going to restart forever: signal: killed
10.1.1.2: user: warning: [2022-01-25T10:25:23.980306566Z]: [talos] service[trustd](Running): Health check failed: dial tcp 127.0.0.1:50001: i/o timeout
10.1.1.2: user: warning: [2022-01-25T10:25:23.992462566Z]: [talos] service[trustd](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:25:24.004670566Z]: [talos] service[cri](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:25:24.007288566Z]: [talos] service[containerd](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:25:24.038872566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \x5c"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:25:24.040517566Z]: [talos] service[containerd](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:25:24.057104566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \x5c"talos-server-01\x5c": Get \x5c"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\x5c": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:25:24.291558566Z]: [talos] service[cri](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:25:25.181827566Z]: [talos] service[etcd](Waiting): Error running Containerd(etcd), going to restart forever: task "etcd" failed: exit code 137
10.1.1.2: daemon: info: [2022-01-25T10:25:28.996296566Z]: udevd[1448]: starting version 3.2.10
10.1.1.2: user: warning: [2022-01-25T10:25:28.996759566Z]: [talos] service[udevd](Running): Process Process(["/sbin/udevd" "--resolve-names=never"]) started with PID 1448
10.1.1.2: daemon: info: [2022-01-25T10:25:29.293856566Z]: udevd[1448]: starting eudev-3.2.10
10.1.1.2: user: warning: [2022-01-25T10:25:31.949339566Z]: [talos] service[etcd](Running): Started task etcd (PID 1476) for container etcd
10.1.1.2: user: warning: [2022-01-25T10:25:32.604380566Z]: [talos] service[kubelet](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:25:32.681189566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:26:02.439205566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:26:27.647951566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:26:34.668349566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:26:56.415793566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:27:30.979187566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:27:34.732979566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:27:52.871322566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:28:25.311179566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:28:25.736054566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:28:51.958061566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:29:24.306740566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: kern: warning: [2022-01-25T10:30:46.435024566Z]: containerd-shim invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=-499
10.1.1.2: kern: warning: [2022-01-25T10:30:46.435696566Z]: CPU: 1 PID: 1292 Comm: containerd-shim Tainted: G T 5.15.6-talos #1
10.1.1.2: kern: warning: [2022-01-25T10:30:46.436231566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
10.1.1.2: kern: warning: [2022-01-25T10:30:46.436899566Z]: Call Trace:
10.1.1.2: kern: warning: [2022-01-25T10:30:46.437093566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:30:46.437274566Z]: dump_stack_lvl+0x46/0x5a
10.1.1.2: kern: warning: [2022-01-25T10:30:46.437535566Z]: dump_header+0x45/0x1ee
10.1.1.2: kern: warning: [2022-01-25T10:30:46.437792566Z]: oom_kill_process.cold+0xb/0x10
10.1.1.2: kern: warning: [2022-01-25T10:30:46.438073566Z]: out_of_memory+0x22f/0x4e0
10.1.1.2: kern: warning: [2022-01-25T10:30:46.438335566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90
10.1.1.2: kern: warning: [2022-01-25T10:30:46.438680566Z]: __alloc_pages+0x307/0x320
10.1.1.2: kern: warning: [2022-01-25T10:30:46.439783566Z]: pagecache_get_page+0x120/0x3c0
10.1.1.2: kern: warning: [2022-01-25T10:30:46.440063566Z]: filemap_fault+0x5d3/0x920
10.1.1.2: kern: warning: [2022-01-25T10:30:46.440319566Z]: ? filemap_map_pages+0x2b0/0x440
10.1.1.2: kern: warning: [2022-01-25T10:30:46.440606566Z]: __do_fault+0x2f/0x90
10.1.1.2: kern: warning: [2022-01-25T10:30:46.440845566Z]: __handle_mm_fault+0x664/0xc00
10.1.1.2: kern: warning: [2022-01-25T10:30:46.441120566Z]: handle_mm_fault+0xc7/0x290
10.1.1.2: kern: warning: [2022-01-25T10:30:46.441380566Z]: exc_page_fault+0x1de/0x780
10.1.1.2: kern: warning: [2022-01-25T10:30:46.441653566Z]: ? asm_exc_page_fault+0x8/0x30
10.1.1.2: kern: warning: [2022-01-25T10:30:46.441928566Z]: asm_exc_page_fault+0x1e/0x30
10.1.1.2: kern: warning: [2022-01-25T10:30:46.442197566Z]: RIP: 0033:0x45103c
10.1.1.2: kern: warning: [2022-01-25T10:30:46.442424566Z]: Code: Unable to access opcode bytes at RIP 0x451012.
10.1.1.2: kern: warning: [2022-01-25T10:30:46.442787566Z]: RSP: 002b:000000c00004f7a0 EFLAGS: 00010202
10.1.1.2: kern: warning: [2022-01-25T10:30:46.443116566Z]: RAX: 0000000000a20c9c RBX: 00000000000607c4 RCX: 00000000000607c4
10.1.1.2: kern: warning: [2022-01-25T10:30:46.443539566Z]: RDX: 00000000000607c4 RSI: 000000c00004f7f4 RDI: 000000c00004f800
10.1.1.2: kern: warning: [2022-01-25T10:30:46.443957566Z]: RBP: 000000c00004f7b0 R08: 0000000000000001 R09: 0000000000a20c9c
10.1.1.2: kern: warning: [2022-01-25T10:30:46.444375566Z]: R10: 00000000005c9780 R11: 0000000000073f5c R12: 00000000005c9780
10.1.1.2: kern: warning: [2022-01-25T10:30:46.444793566Z]: R13: 00000000000607c4 R14: 000000c0000009c0 R15: 0000000000000000
10.1.1.2: kern: warning: [2022-01-25T10:30:46.445209566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:30:46.445426566Z]: Mem-Info:
10.1.1.2: kern: warning: [2022-01-25T10:30:46.445619566Z]: active_anon:12626 inactive_anon:469829 isolated_anon:0 active_file:13 inactive_file:662 isolated_file:0 unevictable:0 dirty:0 writeback:0 slab_reclaimable:5005 slab_unreclaimable:4634 mapped:30 shmem:12651 pagetables:1407 bounce:0 kernel_misc_reclaimable:0 free:3771 free_pcp:449 free_cma:0
10.1.1.2: kern: warning: [2022-01-25T10:30:46.447629566Z]: Node 0 active_anon:50504kB inactive_anon:1879316kB active_file:52kB inactive_file:2420kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:204kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3616kB pagetables:5628kB all_unreclaimable? no
10.1.1.2: kern: warning: [2022-01-25T10:30:46.449027566Z]: Node 0 DMA free:7600kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7736kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:30:46.450427566Z]: lowmem_reserve[]: 0 1890 1890 1890
10.1.1.2: kern: warning: [2022-01-25T10:30:46.450717566Z]: Node 0 DMA32 free:6980kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50504kB inactive_anon:1871580kB active_file:52kB inactive_file:2652kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:2296kB local_pcp:1628kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:30:46.452255566Z]: lowmem_reserve[]: 0 0 0 0
10.1.1.2: kern: warning: [2022-01-25T10:30:46.452516566Z]: Node 0 DMA: 0*4kB 2*8kB (UM) 0*16kB 1*32kB (U) 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7600kB
10.1.1.2: kern: warning: [2022-01-25T10:30:46.453292566Z]: Node 0 DMA32: 149*4kB (UME) 131*8kB (UME) 40*16kB (UME) 4*32kB (ME) 8*64kB (UME) 4*128kB (UE) 2*256kB (UM) 4*512kB (UM) 1*1024kB (U) 0*2048kB 0*4096kB = 7020kB
10.1.1.2: kern: info: [2022-01-25T10:30:46.454184566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
10.1.1.2: kern: warning: [2022-01-25T10:30:46.454703566Z]: 13329 total pagecache pages
10.1.1.2: kern: warning: [2022-01-25T10:30:46.454969566Z]: 0 pages in swap cache
10.1.1.2: kern: warning: [2022-01-25T10:30:46.455207566Z]: Swap cache stats: add 0, delete 0, find 0/0
10.1.1.2: kern: warning: [2022-01-25T10:30:46.455533566Z]: Free swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:30:46.455747566Z]: Total swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:30:46.455964566Z]: 524156 pages RAM
10.1.1.2: kern: warning: [2022-01-25T10:30:46.456181566Z]: 0 pages HighMem/MovableOnly
10.1.1.2: kern: warning: [2022-01-25T10:30:46.456441566Z]: 21678 pages reserved
10.1.1.2: kern: info: [2022-01-25T10:30:46.456674566Z]: Tasks state (memory values in pages):
10.1.1.2: kern: info: [2022-01-25T10:30:46.456977566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
10.1.1.2: kern: info: [2022-01-25T10:30:46.457511566Z]: [ 664] 0 664 188197 4030 208896 0 -999 containerd
10.1.1.2: kern: info: [2022-01-25T10:30:46.458043566Z]: [ 1121] 0 1121 189038 4888 217088 0 -500 containerd
10.1.1.2: kern: info: [2022-01-25T10:30:46.458571566Z]: [ 1148] 0 1148 177754 573 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:30:46.459128566Z]: [ 1150] 0 1150 177690 593 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:30:46.459676566Z]: [ 1192] 50 1192 193598 3233 258048 0 -998 apid
10.1.1.2: kern: info: [2022-01-25T10:30:46.460183566Z]: [ 1193] 51 1193 193598 2727 249856 0 -998 trustd
10.1.1.2: kern: info: [2022-01-25T10:30:46.460693566Z]: [ 1289] 0 1289 177754 608 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:30:46.461243566Z]: [ 1309] 0 1309 465559 6442 499712 0 -999 kubelet
10.1.1.2: kern: info: [2022-01-25T10:30:46.461759566Z]: [ 1448] 0 1448 362 32 45056 0 0 udevd
10.1.1.2: kern: info: [2022-01-25T10:30:46.462282566Z]: [ 1456] 0 1456 177754 569 110592 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:30:46.462832566Z]: [ 1476] 60 1476 3228308 438392 3731456 0 -998 etcd
10.1.1.2: kern: info: [2022-01-25T10:30:46.463343566Z]: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=runtime,mems_allowed=0,global_oom,task_memcg=/system/runtime,task=udevd,pid=1448,uid=0
10.1.1.2: kern: err: [2022-01-25T10:30:46.464137566Z]: Out of memory: Killed process 1448 (udevd) total-vm:1448kB, anon-rss:124kB, file-rss:4kB, shmem-rss:0kB, UID:0 pgtables:44kB oom_score_adj:0
10.1.1.2: kern: info: [2022-01-25T10:30:46.464946566Z]: oom_reaper: reaped process 1448 (udevd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
10.1.1.2: kern: warning: [2022-01-25T10:31:35.859728566Z]: containerd-shim invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=-499
10.1.1.2: kern: warning: [2022-01-25T10:31:35.860425566Z]: CPU: 1 PID: 1292 Comm: containerd-shim Tainted: G T 5.15.6-talos #1
10.1.1.2: kern: warning: [2022-01-25T10:31:35.860951566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
10.1.1.2: kern: warning: [2022-01-25T10:31:35.861633566Z]: Call Trace:
10.1.1.2: kern: warning: [2022-01-25T10:31:35.861831566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:31:35.862014566Z]: dump_stack_lvl+0x46/0x5a
10.1.1.2: kern: warning: [2022-01-25T10:31:35.862272566Z]: dump_header+0x45/0x1ee
10.1.1.2: kern: warning: [2022-01-25T10:31:35.862519566Z]: oom_kill_process.cold+0xb/0x10
10.1.1.2: kern: warning: [2022-01-25T10:31:35.862798566Z]: out_of_memory+0x22f/0x4e0
10.1.1.2: kern: warning: [2022-01-25T10:31:35.863067566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90
10.1.1.2: kern: warning: [2022-01-25T10:31:35.863415566Z]: __alloc_pages+0x307/0x320
10.1.1.2: kern: warning: [2022-01-25T10:31:35.863683566Z]: pagecache_get_page+0x120/0x3c0
10.1.1.2: kern: warning: [2022-01-25T10:31:35.863987566Z]: filemap_fault+0x5d3/0x920
10.1.1.2: kern: warning: [2022-01-25T10:31:35.864253566Z]: ? filemap_map_pages+0x2b0/0x440
10.1.1.2: kern: warning: [2022-01-25T10:31:35.864553566Z]: __do_fault+0x2f/0x90
10.1.1.2: kern: warning: [2022-01-25T10:31:35.864886566Z]: __handle_mm_fault+0x664/0xc00
10.1.1.2: kern: warning: [2022-01-25T10:31:35.865197566Z]: handle_mm_fault+0xc7/0x290
10.1.1.2: kern: warning: [2022-01-25T10:31:35.865460566Z]: exc_page_fault+0x1de/0x780
10.1.1.2: kern: warning: [2022-01-25T10:31:35.865725566Z]: ? asm_exc_page_fault+0x8/0x30
10.1.1.2: kern: warning: [2022-01-25T10:31:35.865999566Z]: asm_exc_page_fault+0x1e/0x30
10.1.1.2: kern: warning: [2022-01-25T10:31:35.866276566Z]: RIP: 0033:0x42f72f
10.1.1.2: kern: warning: [2022-01-25T10:31:35.866506566Z]: Code: Unable to access opcode bytes at RIP 0x42f705.
10.1.1.2: kern: warning: [2022-01-25T10:31:35.866868566Z]: RSP: 002b:000000c00004fe80 EFLAGS: 00010206
10.1.1.2: kern: warning: [2022-01-25T10:31:35.867201566Z]: RAX: ffffffffffffff92 RBX: 0000000000000000 RCX: 00000000004655a3
10.1.1.2: kern: warning: [2022-01-25T10:31:35.867620566Z]: RDX: 0000000000000000 RSI: 0000000000000080 RDI: 0000000000bce4f8
10.1.1.2: kern: warning: [2022-01-25T10:31:35.868039566Z]: RBP: 000000c00004fec0 R08: 0000000000000000 R09: 0000000000000000
10.1.1.2: kern: warning: [2022-01-25T10:31:35.868466566Z]: R10: 000000c00004feb0 R11: 0000000000000206 R12: 000000c00004feb0
10.1.1.2: kern: warning: [2022-01-25T10:31:35.868904566Z]: R13: 0000000000000077 R14: 000000c0000009c0 R15: 0000000000000000
10.1.1.2: kern: warning: [2022-01-25T10:31:35.869348566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:31:35.869540566Z]: Mem-Info:
10.1.1.2: kern: warning: [2022-01-25T10:31:35.869725566Z]: active_anon:12625 inactive_anon:469799 isolated_anon:0 active_file:23 inactive_file:497 isolated_file:0 unevictable:0 dirty:0 writeback:0 slab_reclaimable:5003 slab_unreclaimable:4630 mapped:29 shmem:12651 pagetables:1399 bounce:0 kernel_misc_reclaimable:0 free:3848 free_pcp:494 free_cma:0
10.1.1.2: kern: warning: [2022-01-25T10:31:35.871733566Z]: Node 0 active_anon:50500kB inactive_anon:1879196kB active_file:92kB inactive_file:1988kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:116kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3600kB pagetables:5596kB all_unreclaimable? yes
10.1.1.2: kern: warning: [2022-01-25T10:31:35.873143566Z]: Node 0 DMA free:7600kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7736kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:31:35.874568566Z]: lowmem_reserve[]: 0 1890 1890 1890
10.1.1.2: kern: warning: [2022-01-25T10:31:35.874869566Z]: Node 0 DMA32 free:7792kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50500kB inactive_anon:1871460kB active_file:352kB inactive_file:2068kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:1948kB local_pcp:1736kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:31:35.876417566Z]: lowmem_reserve[]: 0 0 0 0
10.1.1.2: kern: warning: [2022-01-25T10:31:35.876669566Z]: Node 0 DMA: 0*4kB 2*8kB (UM) 0*16kB 1*32kB (U) 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7600kB
10.1.1.2: kern: warning: [2022-01-25T10:31:35.877466566Z]: Node 0 DMA32: 173*4kB (UME) 170*8kB (UME) 49*16kB (UME) 15*32kB (UME) 10*64kB (UME) 4*128kB (UE) 3*256kB (UM) 1*512kB (U) 2*1024kB (UM) 0*2048kB 0*4096kB = 7796kB
10.1.1.2: kern: info: [2022-01-25T10:31:35.878376566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
10.1.1.2: kern: warning: [2022-01-25T10:31:35.878897566Z]: 13220 total pagecache pages
10.1.1.2: kern: warning: [2022-01-25T10:31:35.879158566Z]: 0 pages in swap cache
10.1.1.2: kern: warning: [2022-01-25T10:31:35.879397566Z]: Swap cache stats: add 0, delete 0, find 0/0
10.1.1.2: kern: warning: [2022-01-25T10:31:35.879723566Z]: Free swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:31:35.879938566Z]: Total swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:31:35.880154566Z]: 524156 pages RAM
10.1.1.2: kern: warning: [2022-01-25T10:31:35.880371566Z]: 0 pages HighMem/MovableOnly
10.1.1.2: kern: warning: [2022-01-25T10:31:35.880629566Z]: 21678 pages reserved
10.1.1.2: kern: info: [2022-01-25T10:31:35.880861566Z]: Tasks state (memory values in pages):
10.1.1.2: kern: info: [2022-01-25T10:31:35.881166566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
10.1.1.2: kern: info: [2022-01-25T10:31:35.881715566Z]: [ 664] 0 664 188197 4030 208896 0 -999 containerd
10.1.1.2: kern: info: [2022-01-25T10:31:35.882249566Z]: [ 1121] 0 1121 189038 4888 217088 0 -500 containerd
10.1.1.2: kern: info: [2022-01-25T10:31:35.882781566Z]: [ 1148] 0 1148 177754 573 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:31:35.883334566Z]: [ 1150] 0 1150 177690 593 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:31:35.883882566Z]: [ 1192] 50 1192 193598 3233 258048 0 -998 apid
10.1.1.2: kern: info: [2022-01-25T10:31:35.884395566Z]: [ 1193] 51 1193 193598 2727 249856 0 -998 trustd
10.1.1.2: kern: info: [2022-01-25T10:31:35.884905566Z]: [ 1289] 0 1289 177754 632 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:31:35.885468566Z]: [ 1309] 0 1309 465559 6442 499712 0 -999 kubelet
10.1.1.2: kern: info: [2022-01-25T10:31:35.885978566Z]: [ 1456] 0 1456 177754 569 110592 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:31:35.886541566Z]: [ 1476] 60 1476 3228308 438392 3731456 0 -998 etcd
10.1.1.2: kern: info: [2022-01-25T10:31:35.887048566Z]: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=runtime,mems_allowed=0,global_oom,task_memcg=/system/etcd,task=etcd,pid=1476,uid=60
10.1.1.2: kern: err: [2022-01-25T10:31:35.888685566Z]: Out of memory: Killed process 1476 (etcd) total-vm:12913232kB, anon-rss:1753568kB, file-rss:0kB, shmem-rss:0kB, UID:60 pgtables:3644kB oom_score_adj:-998
10.1.1.2: kern: info: [2022-01-25T10:31:35.954476566Z]: oom_reaper: reaped process 1476 (etcd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
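The OOM kills above are consistent with the numbers in the kernel's own task dump: a quick sanity check (a sketch; all values are copied straight from the log, with the usual 4 KiB page size assumed) shows that etcd's resident set alone takes up most of this node's ~2 GiB of RAM, and with "Total swap = 0kB" the kernel has nothing left to reclaim, so it kills processes:

```python
# Sanity-check the OOM numbers from the task dump above.
# All page counts are copied from the log; x86-64 uses 4 KiB pages.
PAGE_KIB = 4

etcd_rss_pages = 438392   # etcd "rss" column in the task state table
ram_pages = 524156        # "524156 pages RAM"
reserved_pages = 21678    # "21678 pages reserved"

etcd_rss_mib = etcd_rss_pages * PAGE_KIB / 1024
usable_mib = (ram_pages - reserved_pages) * PAGE_KIB / 1024

print(f"etcd RSS: {etcd_rss_mib:.0f} MiB of ~{usable_mib:.0f} MiB usable RAM, swap: 0 kB")
# -> etcd RSS: 1712 MiB of ~1963 MiB usable RAM, swap: 0 kB
```

This matches the kill message itself ("Killed process 1476 (etcd) ... anon-rss:1753568kB" = 1712 MiB), which explains the repeating cycle in the log: etcd is OOM-killed with exit code 137, talos restarts it, and the kube-apiserver on :6443 never comes up, so every health check and controller keeps failing with "connection refused".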
10.1.1.2: user: warning: [2022-01-25T10:31:36.676600566Z]: [talos] service[trustd](Running): Health check failed: dial tcp 127.0.0.1:50001: i/o timeout
10.1.1.2: user: warning: [2022-01-25T10:31:36.677247566Z]: [talos] service[trustd](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:31:36.688420566Z]: [talos] service[kubelet](Running): Health check failed: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:31:36.706932566Z]: [talos] service[cri](Running): Health check failed: failed to dial "/run/containerd/containerd.sock": context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:31:36.725433566Z]: [talos] service[udevd](Running): Health check failed: context deadline exceeded:
10.1.1.2: user: warning: [2022-01-25T10:31:36.739829566Z]: [talos] service[apid](Running): Health check failed: dial tcp 127.0.0.1:50000: i/o timeout
10.1.1.2: user: warning: [2022-01-25T10:31:36.740458566Z]: [talos] service[apid](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:31:36.749218566Z]: [talos] service[udevd](Waiting): Error running Process(["/sbin/udevd" "--resolve-names=never"]), going to restart forever: signal: killed
10.1.1.2: user: warning: [2022-01-25T10:31:36.750036566Z]: [talos] service[containerd](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:31:36.773236566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:31:36.800354566Z]: [talos] service[containerd](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:31:36.806028566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:31:36.830555566Z]: [talos] service[cri](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:31:37.694471566Z]: [talos] service[etcd](Waiting): Error running Containerd(etcd), going to restart forever: task "etcd" failed: exit code 137
10.1.1.2: user: warning: [2022-01-25T10:31:38.246726566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:31:40.412445566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: daemon: info: [2022-01-25T10:31:41.763675566Z]: udevd[1533]: starting version 3.2.10
10.1.1.2: user: warning: [2022-01-25T10:31:41.774721566Z]: [talos] service[udevd](Running): Process Process(["/sbin/udevd" "--resolve-names=never"]) started with PID 1533
10.1.1.2: daemon: info: [2022-01-25T10:31:42.076580566Z]: udevd[1533]: starting eudev-3.2.10
10.1.1.2: user: warning: [2022-01-25T10:31:42.683083566Z]: [talos] service[kubelet](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:31:43.164015566Z]: [talos] service[etcd](Running): Started task etcd (PID 1564) for container etcd
10.1.1.2: user: warning: [2022-01-25T10:31:43.654230566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:31:52.990479566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:31:53.447922566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:32:08.415264566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:32:18.106948566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:32:48.211414566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:33:23.254507566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:33:29.194944566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:33:43.428729566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:34:14.259306566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:34:39.603876566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:34:47.060118566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:35:07.014053566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:35:32.493553566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:35:51.333793566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: kern: warning: [2022-01-25T10:35:59.092887566Z]: containerd invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=-999
10.1.1.2: kern: warning: [2022-01-25T10:35:59.093562566Z]: CPU: 1 PID: 665 Comm: containerd Tainted: G T 5.15.6-talos #1
10.1.1.2: kern: warning: [2022-01-25T10:35:59.094093566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
10.1.1.2: kern: warning: [2022-01-25T10:35:59.094798566Z]: Call Trace:
10.1.1.2: kern: warning: [2022-01-25T10:35:59.095008566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:35:59.095186566Z]: dump_stack_lvl+0x46/0x5a
10.1.1.2: kern: warning: [2022-01-25T10:35:59.095470566Z]: dump_header+0x45/0x1ee
10.1.1.2: kern: warning: [2022-01-25T10:35:59.095720566Z]: oom_kill_process.cold+0xb/0x10
10.1.1.2: kern: warning: [2022-01-25T10:35:59.096004566Z]: out_of_memory+0x22f/0x4e0
10.1.1.2: kern: warning: [2022-01-25T10:35:59.096267566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90
10.1.1.2: kern: warning: [2022-01-25T10:35:59.096624566Z]: __alloc_pages+0x307/0x320
10.1.1.2: kern: warning: [2022-01-25T10:35:59.096884566Z]: pagecache_get_page+0x120/0x3c0
10.1.1.2: kern: warning: [2022-01-25T10:35:59.097162566Z]: filemap_fault+0x5d3/0x920
10.1.1.2: kern: warning: [2022-01-25T10:35:59.097435566Z]: ? filemap_map_pages+0x2b0/0x440
10.1.1.2: kern: warning: [2022-01-25T10:35:59.097741566Z]: __do_fault+0x2f/0x90
10.1.1.2: kern: warning: [2022-01-25T10:35:59.097982566Z]: __handle_mm_fault+0x664/0xc00
10.1.1.2: kern: warning: [2022-01-25T10:35:59.098255566Z]: handle_mm_fault+0xc7/0x290
10.1.1.2: kern: warning: [2022-01-25T10:35:59.098518566Z]: exc_page_fault+0x1de/0x780
10.1.1.2: kern: warning: [2022-01-25T10:35:59.098783566Z]: ? asm_exc_page_fault+0x8/0x30
10.1.1.2: kern: warning: [2022-01-25T10:35:59.099057566Z]: asm_exc_page_fault+0x1e/0x30
10.1.1.2: kern: warning: [2022-01-25T10:35:59.099342566Z]: RIP: 0033:0x56552ce31c9d
10.1.1.2: kern: warning: [2022-01-25T10:35:59.099609566Z]: Code: Unable to access opcode bytes at RIP 0x56552ce31c73.
10.1.1.2: kern: warning: [2022-01-25T10:35:59.100037566Z]: RSP: 002b:00007f9df0174988 EFLAGS: 00010212
10.1.1.2: kern: warning: [2022-01-25T10:35:59.100377566Z]: RAX: 0000000000000000 RBX: 0000000000002710 RCX: 000056552ce31c9d
10.1.1.2: kern: warning: [2022-01-25T10:35:59.100813566Z]: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007f9df0174988
10.1.1.2: kern: warning: [2022-01-25T10:35:59.101233566Z]: RBP: 00007f9df0174998 R08: 00000000000038fc R09: 0000000000000000
10.1.1.2: kern: warning: [2022-01-25T10:35:59.101651566Z]: R10: 0000000000000000 R11: 0000000000000212 R12: 00007f9df01743c8
10.1.1.2: kern: warning: [2022-01-25T10:35:59.102070566Z]: R13: 0000000000023000 R14: 000000c0000009c0 R15: 00007f9df0174b10
10.1.1.2: kern: warning: [2022-01-25T10:35:59.102491566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:35:59.102710566Z]: Mem-Info:
10.1.1.2: kern: warning: [2022-01-25T10:35:59.102900566Z]: active_anon:12626 inactive_anon:470459 isolated_anon:0
 active_file:0 inactive_file:313 isolated_file:0
 unevictable:0 dirty:0 writeback:0
 slab_reclaimable:4769 slab_unreclaimable:4654
 mapped:48 shmem:12651 pagetables:1412 bounce:0
 kernel_misc_reclaimable:0
 free:4082 free_pcp:42 free_cma:0
10.1.1.2: kern: warning: [2022-01-25T10:35:59.104904566Z]: Node 0 active_anon:50504kB inactive_anon:1881836kB active_file:0kB inactive_file:1252kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:192kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3648kB pagetables:5648kB all_unreclaimable? no
10.1.1.2: kern: warning: [2022-01-25T10:35:59.106403566Z]: Node 0 DMA free:7588kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7744kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:35:59.107828566Z]: lowmem_reserve[]: 0 1890 1890 1890
10.1.1.2: kern: warning: [2022-01-25T10:35:59.108125566Z]: Node 0 DMA32 free:8108kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50504kB inactive_anon:1874092kB active_file:800kB inactive_file:1232kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:292kB local_pcp:0kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:35:59.109647566Z]: lowmem_reserve[]: 0 0 0 0
10.1.1.2: kern: warning: [2022-01-25T10:35:59.109899566Z]: Node 0 DMA: 1*4kB (U) 0*8kB 0*16kB 1*32kB (U) 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7588kB
10.1.1.2: kern: warning: [2022-01-25T10:35:59.110671566Z]: Node 0 DMA32: 315*4kB (UME) 202*8kB (UME) 64*16kB (UME) 17*32kB (ME) 8*64kB (UME) 5*128kB (UME) 1*256kB (U) 2*512kB (UM) 1*1024kB (M) 0*2048kB 0*4096kB = 7900kB
10.1.1.2: kern: info: [2022-01-25T10:35:59.111934566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
10.1.1.2: kern: warning: [2022-01-25T10:35:59.112634566Z]: 13177 total pagecache pages
10.1.1.2: kern: warning: [2022-01-25T10:35:59.112995566Z]: 0 pages in swap cache
10.1.1.2: kern: warning: [2022-01-25T10:35:59.114473566Z]: Swap cache stats: add 0, delete 0, find 0/0
10.1.1.2: kern: warning: [2022-01-25T10:35:59.114808566Z]: Free swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:35:59.115027566Z]: Total swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:35:59.115243566Z]: 524156 pages RAM
10.1.1.2: kern: warning: [2022-01-25T10:35:59.115459566Z]: 0 pages HighMem/MovableOnly
10.1.1.2: kern: warning: [2022-01-25T10:35:59.115722566Z]: 21678 pages reserved
10.1.1.2: kern: info: [2022-01-25T10:35:59.115963566Z]: Tasks state (memory values in pages):
10.1.1.2: kern: info: [2022-01-25T10:35:59.116294566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
10.1.1.2: kern: info: [2022-01-25T10:35:59.116831566Z]: [ 664] 0 664 188197 4038 208896 0 -999 containerd
10.1.1.2: kern: info: [2022-01-25T10:35:59.117421566Z]: [ 1121] 0 1121 189038 5077 217088 0 -500 containerd
10.1.1.2: kern: info: [2022-01-25T10:35:59.117951566Z]: [ 1148] 0 1148 177754 517 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:35:59.118498566Z]: [ 1150] 0 1150 177690 537 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:35:59.119050566Z]: [ 1192] 50 1192 193598 2920 258048 0 -998 apid
10.1.1.2: kern: info: [2022-01-25T10:35:59.119551566Z]: [ 1193] 51 1193 193598 2697 249856 0 -998 trustd
10.1.1.2: kern: info: [2022-01-25T10:35:59.120071566Z]: [ 1289] 0 1289 177754 576 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:35:59.120631566Z]: [ 1309] 0 1309 483992 6601 516096 0 -999 kubelet
10.1.1.2: kern: info: [2022-01-25T10:35:59.121147566Z]: [ 1533] 0 1533 362 32 40960 0 0 udevd
10.1.1.2: kern: info: [2022-01-25T10:35:59.121660566Z]: [ 1542] 0 1542 177754 476 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:35:59.122209566Z]: [ 1564] 60 1564 3228260 439013 3743744 0 -998 etcd
10.1.1.2: kern: info: [2022-01-25T10:35:59.122709566Z]: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=runtime,mems_allowed=0,global_oom,task_memcg=/system/runtime,task=udevd,pid=1533,uid=0
10.1.1.2: kern: err: [2022-01-25T10:35:59.123507566Z]: Out of memory: Killed process 1533 (udevd) total-vm:1448kB, anon-rss:124kB, file-rss:4kB, shmem-rss:0kB, UID:0 pgtables:40kB oom_score_adj:0
10.1.1.2: kern: info: [2022-01-25T10:35:59.126226566Z]: oom_reaper: reaped process 1533 (udevd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
10.1.1.2: kern: warning: [2022-01-25T10:36:07.519924566Z]: init invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
10.1.1.2: kern: warning: [2022-01-25T10:36:07.520527566Z]: CPU: 0 PID: 636 Comm: init Tainted: G T 5.15.6-talos #1
10.1.1.2: kern: warning: [2022-01-25T10:36:07.521010566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
10.1.1.2: kern: warning: [2022-01-25T10:36:07.521668566Z]: Call Trace:
10.1.1.2: kern: warning: [2022-01-25T10:36:07.521871566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:36:07.522048566Z]: dump_stack_lvl+0x46/0x5a
10.1.1.2: kern: warning: [2022-01-25T10:36:07.522304566Z]: dump_header+0x45/0x1ee
10.1.1.2: kern: warning: [2022-01-25T10:36:07.522550566Z]: oom_kill_process.cold+0xb/0x10
10.1.1.2: kern: warning: [2022-01-25T10:36:07.522830566Z]: out_of_memory+0x22f/0x4e0
10.1.1.2: kern: warning: [2022-01-25T10:36:07.523086566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90
10.1.1.2: kern: warning: [2022-01-25T10:36:07.523429566Z]: __alloc_pages+0x307/0x320
10.1.1.2: kern: warning: [2022-01-25T10:36:07.523685566Z]: pagecache_get_page+0x120/0x3c0
10.1.1.2: kern: warning: [2022-01-25T10:36:07.523975566Z]: filemap_fault+0x5d3/0x920
10.1.1.2: kern: warning: [2022-01-25T10:36:07.524287566Z]: ? filemap_map_pages+0x2b0/0x440
10.1.1.2: kern: warning: [2022-01-25T10:36:07.524572566Z]: __do_fault+0x2f/0x90
10.1.1.2: kern: warning: [2022-01-25T10:36:07.524815566Z]: __handle_mm_fault+0x664/0xc00
10.1.1.2: kern: warning: [2022-01-25T10:36:07.525086566Z]: handle_mm_fault+0xc7/0x290
10.1.1.2: kern: warning: [2022-01-25T10:36:07.525344566Z]: exc_page_fault+0x1de/0x780
10.1.1.2: kern: warning: [2022-01-25T10:36:07.525605566Z]: ? asm_exc_page_fault+0x8/0x30
10.1.1.2: kern: warning: [2022-01-25T10:36:07.525878566Z]: asm_exc_page_fault+0x1e/0x30
10.1.1.2: kern: warning: [2022-01-25T10:36:07.526144566Z]: RIP: 0033:0x466d5d
10.1.1.2: kern: warning: [2022-01-25T10:36:07.526371566Z]: Code: Unable to access opcode bytes at RIP 0x466d33.
10.1.1.2: kern: warning: [2022-01-25T10:36:07.526731566Z]: RSP: 002b:000000c000071f10 EFLAGS: 00010212
10.1.1.2: kern: warning: [2022-01-25T10:36:07.527059566Z]: RAX: 0000000000000000 RBX: 0000000000002710 RCX: 0000000000466d5d
10.1.1.2: kern: warning: [2022-01-25T10:36:07.527471566Z]: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000c000071f10
10.1.1.2: kern: warning: [2022-01-25T10:36:07.527881566Z]: RBP: 000000c000071f20 R08: 00000000000048eb R09: 0000000004911458
10.1.1.2: kern: warning: [2022-01-25T10:36:07.528305566Z]: R10: 0000000000000000 R11: 0000000000000212 R12: 000000c000071950
10.1.1.2: kern: warning: [2022-01-25T10:36:07.528717566Z]: R13: 000000c00076e000 R14: 000000c0000004e0 R15: 00007f91dcfcd6dd
10.1.1.2: kern: warning: [2022-01-25T10:36:07.529129566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:36:07.529328566Z]: Mem-Info:
10.1.1.2: kern: warning: [2022-01-25T10:36:07.529515566Z]: active_anon:12625 inactive_anon:470431 isolated_anon:0
 active_file:11 inactive_file:275 isolated_file:32
 unevictable:0 dirty:0 writeback:0
 slab_reclaimable:4769 slab_unreclaimable:4646
 mapped:25 shmem:12651 pagetables:1405 bounce:0
 kernel_misc_reclaimable:0
 free:3267 free_pcp:776 free_cma:0
10.1.1.2: kern: warning: [2022-01-25T10:36:07.531560566Z]: Node 0 active_anon:50500kB inactive_anon:1881724kB active_file:44kB inactive_file:1100kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:100kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3648kB pagetables:5620kB all_unreclaimable? yes
10.1.1.2: kern: warning: [2022-01-25T10:36:07.533025566Z]: Node 0 DMA free:7588kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7744kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:36:07.534526566Z]: lowmem_reserve[]: 0 1890 1890 1890
10.1.1.2: kern: warning: [2022-01-25T10:36:07.534840566Z]: Node 0 DMA32 free:5480kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50500kB inactive_anon:1873980kB active_file:264kB inactive_file:1268kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:3364kB local_pcp:3096kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:36:07.536351566Z]: lowmem_reserve[]: 0 0 0 0
10.1.1.2: kern: warning: [2022-01-25T10:36:07.536600566Z]: Node 0 DMA: 1*4kB (U) 0*8kB 0*16kB 1*32kB (U) 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7588kB
10.1.1.2: kern: warning: [2022-01-25T10:36:07.537367566Z]: Node 0 DMA32: 176*4kB (UME) 99*8kB (UME) 43*16kB (UME) 23*32kB (UME) 12*64kB (UME) 5*128kB (UME) 1*256kB (U) 1*512kB (U) 1*1024kB (M) 0*2048kB 0*4096kB = 6120kB
10.1.1.2: kern: info: [2022-01-25T10:36:07.538275566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
10.1.1.2: kern: warning: [2022-01-25T10:36:07.538785566Z]: 12908 total pagecache pages
10.1.1.2: kern: warning: [2022-01-25T10:36:07.539043566Z]: 0 pages in swap cache
10.1.1.2: kern: warning: [2022-01-25T10:36:07.539277566Z]: Swap cache stats: add 0, delete 0, find 0/0
10.1.1.2: kern: warning: [2022-01-25T10:36:07.539598566Z]: Free swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:36:07.539809566Z]: Total swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:36:07.540026566Z]: 524156 pages RAM
10.1.1.2: kern: warning: [2022-01-25T10:36:07.540237566Z]: 0 pages HighMem/MovableOnly
10.1.1.2: kern: warning: [2022-01-25T10:36:07.540493566Z]: 21678 pages reserved
10.1.1.2: kern: info: [2022-01-25T10:36:07.540721566Z]: Tasks state (memory values in pages):
10.1.1.2: kern: info: [2022-01-25T10:36:07.541018566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
10.1.1.2: kern: info: [2022-01-25T10:36:07.541547566Z]: [ 664] 0 664 188197 4038 208896 0 -999 containerd
10.1.1.2: kern: info: [2022-01-25T10:36:07.542073566Z]: [ 1121] 0 1121 189038 5077 217088 0 -500 containerd
10.1.1.2: kern: info: [2022-01-25T10:36:07.542592566Z]: [ 1148] 0 1148 177754 525 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:36:07.543135566Z]: [ 1150] 0 1150 177690 574 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:36:07.543673566Z]: [ 1192] 50 1192 193598 2920 258048 0 -998 apid
10.1.1.2: kern: info: [2022-01-25T10:36:07.544174566Z]: [ 1193] 51 1193 193598 2697 249856 0 -998 trustd
10.1.1.2: kern: info: [2022-01-25T10:36:07.544677566Z]: [ 1289] 0 1289 177754 608 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:36:07.545227566Z]: [ 1309] 0 1309 483992 6601 516096 0 -999 kubelet
10.1.1.2: kern: info: [2022-01-25T10:36:07.545735566Z]: [ 1542] 0 1542 177754 476 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:36:07.546283566Z]: [ 1564] 60 1564 3228260 439015 3743744 0 -998 etcd
10.1.1.2: kern: info: [2022-01-25T10:36:07.546779566Z]: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=init,mems_allowed=0,global_oom,task_memcg=/system/etcd,task=etcd,pid=1564,uid=60
10.1.1.2: kern: err: [2022-01-25T10:36:07.547557566Z]: Out of memory: Killed process 1564 (etcd) total-vm:12913040kB, anon-rss:1756060kB, file-rss:0kB, shmem-rss:0kB, UID:60 pgtables:3656kB oom_score_adj:-998
10.1.1.2: kern: info: [2022-01-25T10:36:07.623977566Z]: oom_reaper: reaped process 1564 (etcd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
10.1.1.2: user: warning: [2022-01-25T10:36:08.237973566Z]: [talos] service[kubelet](Running): Health check failed: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:36:08.242319566Z]: [talos] service[udevd](Waiting): Error running Process(["/sbin/udevd" "--resolve-names=never"]), going to restart forever: signal: killed
10.1.1.2: user: warning: [2022-01-25T10:36:08.270413566Z]: [talos] service[containerd](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:36:08.288671566Z]: [talos] service[containerd](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:36:08.374729566Z]: [talos] service[cri](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:36:08.755950566Z]: [talos] service[cri](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:36:09.370668566Z]: [talos] service[etcd](Waiting): Error running Containerd(etcd), going to restart forever: task "etcd" failed: exit code 137
10.1.1.2: daemon: info: [2022-01-25T10:36:13.259395566Z]: udevd[1620]: starting version 3.2.10
10.1.1.2: user: warning: [2022-01-25T10:36:13.273182566Z]: [talos] service[udevd](Running): Process Process(["/sbin/udevd" "--resolve-names=never"]) started with PID 1620
10.1.1.2: daemon: info: [2022-01-25T10:36:13.324745566Z]: udevd[1620]: starting eudev-3.2.10
10.1.1.2: user: warning: [2022-01-25T10:36:15.638078566Z]: [talos] service[etcd](Running): Started task etcd (PID 1650) for container etcd
10.1.1.2: user: warning: [2022-01-25T10:36:17.438818566Z]: [talos] service[kubelet](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:36:23.940888566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:36:29.648124566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:37:12.443080566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:37:15.238735566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:37:48.443314566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:37:58.790974566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:38:21.382821566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:38:45.415435566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:39:24.192809566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
10.1.1.2: user: warning: [2022-01-25T10:39:36.239241566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:39:38.168047566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:39:57.951717566Z]: [talos] service[kubelet](Running): Health check failed: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
10.1.1.2: kern: warning: [2022-01-25T10:40:16.865601566Z]: kubelet invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=-999
10.1.1.2: kern: warning: [2022-01-25T10:40:16.866232566Z]: CPU: 1 PID: 1337 Comm: kubelet Tainted: G T 5.15.6-talos #1
10.1.1.2: kern: warning: [2022-01-25T10:40:16.866727566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
10.1.1.2: kern: warning: [2022-01-25T10:40:16.867399566Z]: Call Trace:
10.1.1.2: kern: warning: [2022-01-25T10:40:16.867594566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:40:16.867773566Z]: dump_stack_lvl+0x46/0x5a
10.1.1.2: kern: warning: [2022-01-25T10:40:16.868030566Z]: dump_header+0x45/0x1ee
10.1.1.2: kern: warning: [2022-01-25T10:40:16.868276566Z]: oom_kill_process.cold+0xb/0x10
10.1.1.2: kern: warning: [2022-01-25T10:40:16.868562566Z]: out_of_memory+0x22f/0x4e0
10.1.1.2: kern: warning: [2022-01-25T10:40:16.868820566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90
10.1.1.2: kern: warning: [2022-01-25T10:40:16.869167566Z]: __alloc_pages+0x307/0x320
10.1.1.2: kern: warning: [2022-01-25T10:40:16.869426566Z]: pagecache_get_page+0x120/0x3c0
10.1.1.2: kern: warning: [2022-01-25T10:40:16.869704566Z]: filemap_fault+0x5d3/0x920
10.1.1.2: kern: warning: [2022-01-25T10:40:16.869960566Z]: __do_fault+0x2f/0x90
10.1.1.2: kern: warning: [2022-01-25T10:40:16.870200566Z]: __handle_mm_fault+0x664/0xc00
10.1.1.2: kern: warning: [2022-01-25T10:40:16.870476566Z]: handle_mm_fault+0xc7/0x290
10.1.1.2: kern: warning: [2022-01-25T10:40:16.870735566Z]: exc_page_fault+0x1de/0x780
10.1.1.2: kern: warning: [2022-01-25T10:40:16.871003566Z]: ? asm_exc_page_fault+0x8/0x30
10.1.1.2: kern: warning: [2022-01-25T10:40:16.871278566Z]: asm_exc_page_fault+0x1e/0x30
10.1.1.2: kern: warning: [2022-01-25T10:40:16.871562566Z]: RIP: 0033:0x9b45d5
10.1.1.2: kern: warning: [2022-01-25T10:40:16.871789566Z]: Code: Unable to access opcode bytes at RIP 0x9b45ab.
10.1.1.2: kern: warning: [2022-01-25T10:40:16.872152566Z]: RSP: 002b:000000c000b436d8 EFLAGS: 00010246
10.1.1.2: kern: warning: [2022-01-25T10:40:16.872481566Z]: RAX: 000000c00009bcc0 RBX: 000000c000b4370e RCX: 000000c000385380
10.1.1.2: kern: warning: [2022-01-25T10:40:16.872897566Z]: RDX: 0000000005088bb0 RSI: 000000c0001034a0 RDI: 00000000009b4679
10.1.1.2: kern: warning: [2022-01-25T10:40:16.873315566Z]: RBP: 000000c000b437d0 R08: 000000c0001034a0 R09: 0000000000000018
10.1.1.2: kern: warning: [2022-01-25T10:40:16.873737566Z]: R10: 0000000000000020 R11: 0000000000000001 R12: 000000c000385380
10.1.1.2: kern: warning: [2022-01-25T10:40:16.874154566Z]: R13: 0000000000000000 R14: 000000c0007d7380 R15: 0000000000000000
10.1.1.2: kern: warning: [2022-01-25T10:40:16.874571566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:40:16.874765566Z]: Mem-Info:
10.1.1.2: kern: warning: [2022-01-25T10:40:16.874955566Z]: active_anon:12626 inactive_anon:470125 isolated_anon:0
 active_file:0 inactive_file:275 isolated_file:32
 unevictable:0 dirty:0 writeback:0
 slab_reclaimable:4966 slab_unreclaimable:4670
 mapped:30 shmem:12651 pagetables:1411 bounce:0
 kernel_misc_reclaimable:0
 free:3265 free_pcp:916 free_cma:0
10.1.1.2: kern: warning: [2022-01-25T10:40:16.876965566Z]: Node 0 active_anon:50504kB inactive_anon:1880500kB active_file:0kB inactive_file:1100kB unevictable:0kB isolated(anon):0kB isolated(file):128kB mapped:120kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3632kB pagetables:5644kB all_unreclaimable? no
10.1.1.2: kern: warning: [2022-01-25T10:40:16.878366566Z]: Node 0 DMA free:7600kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7716kB active_file:0kB inactive_file:8kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:40:16.879787566Z]: lowmem_reserve[]: 0 1890 1890 1890
10.1.1.2: kern: warning: [2022-01-25T10:40:16.880077566Z]: Node 0 DMA32 free:5460kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50504kB inactive_anon:1872784kB active_file:152kB inactive_file:1040kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:3664kB local_pcp:4kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:40:16.881629566Z]: lowmem_reserve[]: 0 0 0 0
10.1.1.2: kern: warning: [2022-01-25T10:40:16.881901566Z]: Node 0 DMA: 2*4kB (UM) 1*8kB (U) 2*16kB (UM) 0*32kB 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7600kB
10.1.1.2: kern: warning: [2022-01-25T10:40:16.882699566Z]: Node 0 DMA32: 157*4kB (UME) 49*8kB (UME) 34*16kB (UME) 11*32kB (UME) 10*64kB (UME) 5*128kB (UME) 0*256kB 1*512kB (U) 0*1024kB 1*2048kB (M) 0*4096kB = 5756kB
10.1.1.2: kern: info: [2022-01-25T10:40:16.883589566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
10.1.1.2: kern: warning: [2022-01-25T10:40:16.884116566Z]: 12987 total pagecache pages
10.1.1.2: kern: warning: [2022-01-25T10:40:16.884376566Z]: 0 pages in swap cache
10.1.1.2: kern: warning: [2022-01-25T10:40:16.884612566Z]: Swap cache stats: add 0, delete 0, find 0/0
10.1.1.2: kern: warning: [2022-01-25T10:40:16.884936566Z]: Free swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:40:16.885150566Z]: Total swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:40:16.885366566Z]: 524156 pages RAM
10.1.1.2: kern: warning: [2022-01-25T10:40:16.885581566Z]: 0 pages HighMem/MovableOnly
10.1.1.2: kern: warning: [2022-01-25T10:40:16.885840566Z]: 21678 pages reserved
10.1.1.2: kern: info: [2022-01-25T10:40:16.886071566Z]: Tasks state (memory values in pages):
10.1.1.2: kern: info: [2022-01-25T10:40:16.886372566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
10.1.1.2: kern: info: [2022-01-25T10:40:16.886940566Z]: [ 664] 0 664 188197 4105 208896 0 -999 containerd
10.1.1.2: kern: info: [2022-01-25T10:40:16.887481566Z]: [ 1121] 0 1121 189038 4902 217088 0 -500 containerd
10.1.1.2: kern: info: [2022-01-25T10:40:16.888013566Z]: [ 1148] 0 1148 177754 558 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:40:16.888564566Z]: [ 1150] 0 1150 177690 614 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:40:16.889110566Z]: [ 1192] 50 1192 193598 2894 258048 0 -998 apid
10.1.1.2: kern: info: [2022-01-25T10:40:16.889615566Z]: [ 1193] 51 1193 193598 2697 249856 0 -998 trustd
10.1.1.2: kern: info: [2022-01-25T10:40:16.890122566Z]: [ 1289] 0 1289 177754 624 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:40:16.890673566Z]: [ 1309] 0 1309 483992 6661 516096 0 -999 kubelet
10.1.1.2: kern: info: [2022-01-25T10:40:16.891200566Z]: [ 1620] 0 1620 362 31 36864 0 0 udevd
10.1.1.2: kern: info: [2022-01-25T10:40:16.891742566Z]: [ 1629] 0 1629 177690 496 98304 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:40:16.892293566Z]: [ 1650] 60 1650 3228292 438734 3739648 0 -998 etcd
10.1.1.2: kern: info: [2022-01-25T10:40:16.892804566Z]: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=kubelet,mems_allowed=0,global_oom,task_memcg=/system/runtime,task=udevd,pid=1620,uid=0
10.1.1.2: kern: err: [2022-01-25T10:40:16.893620566Z]: Out of memory: Killed process 1620 (udevd) total-vm:1448kB, anon-rss:124kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:36kB oom_score_adj:0
10.1.1.2: kern: info: [2022-01-25T10:40:16.894449566Z]: oom_reaper: reaped process 1620 (udevd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
10.1.1.2: kern: warning: [2022-01-25T10:40:41.309111566Z]: init invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
10.1.1.2: kern: warning: [2022-01-25T10:40:41.309735566Z]: CPU: 0 PID: 636 Comm: init Tainted: G T 5.15.6-talos #1
10.1.1.2: kern: warning: [2022-01-25T10:40:41.310230566Z]: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
10.1.1.2: kern: warning: [2022-01-25T10:40:41.310933566Z]: Call Trace:
10.1.1.2: kern: warning: [2022-01-25T10:40:41.311131566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:40:41.311321566Z]: dump_stack_lvl+0x46/0x5a
10.1.1.2: kern: warning: [2022-01-25T10:40:41.311595566Z]: dump_header+0x45/0x1ee
10.1.1.2: kern: warning: [2022-01-25T10:40:41.311849566Z]: oom_kill_process.cold+0xb/0x10
10.1.1.2: kern: warning: [2022-01-25T10:40:41.312138566Z]: out_of_memory+0x22f/0x4e0
10.1.1.2: kern: warning: [2022-01-25T10:40:41.312414566Z]: __alloc_pages_slowpath.constprop.0+0xba3/0xc90
10.1.1.2: kern: warning: [2022-01-25T10:40:41.312772566Z]: __alloc_pages+0x307/0x320
10.1.1.2: kern: warning: [2022-01-25T10:40:41.313083566Z]: pagecache_get_page+0x120/0x3c0
10.1.1.2: kern: warning: [2022-01-25T10:40:41.313374566Z]: filemap_fault+0x5d3/0x920
10.1.1.2: kern: warning: [2022-01-25T10:40:41.313697566Z]: ? filemap_map_pages+0x2b0/0x440
10.1.1.2: kern: warning: [2022-01-25T10:40:41.313989566Z]: __do_fault+0x2f/0x90
10.1.1.2: kern: warning: [2022-01-25T10:40:41.314239566Z]: __handle_mm_fault+0x664/0xc00
10.1.1.2: kern: warning: [2022-01-25T10:40:41.314527566Z]: handle_mm_fault+0xc7/0x290
10.1.1.2: kern: warning: [2022-01-25T10:40:41.314809566Z]: exc_page_fault+0x1de/0x780
10.1.1.2: kern: warning: [2022-01-25T10:40:41.315082566Z]: ? asm_exc_page_fault+0x8/0x30
10.1.1.2: kern: warning: [2022-01-25T10:40:41.315368566Z]: asm_exc_page_fault+0x1e/0x30
10.1.1.2: kern: warning: [2022-01-25T10:40:41.315650566Z]: RIP: 0033:0x466d5d
10.1.1.2: kern: warning: [2022-01-25T10:40:41.315878566Z]: Code: Unable to access opcode bytes at RIP 0x466d33.
10.1.1.2: kern: warning: [2022-01-25T10:40:41.316262566Z]: RSP: 002b:000000c000071f10 EFLAGS: 00010212
10.1.1.2: kern: warning: [2022-01-25T10:40:41.316602566Z]: RAX: 0000000000000000 RBX: 0000000000002710 RCX: 0000000000466d5d
10.1.1.2: kern: warning: [2022-01-25T10:40:41.317031566Z]: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000c000071f10
10.1.1.2: kern: warning: [2022-01-25T10:40:41.317509566Z]: RBP: 000000c000071f20 R08: 0000000000005b38 R09: 0000000004911458
10.1.1.2: kern: warning: [2022-01-25T10:40:41.317950566Z]: R10: 0000000000000000 R11: 0000000000000212 R12: 000000c000071950
10.1.1.2: kern: warning: [2022-01-25T10:40:41.318382566Z]: R13: 000000c00076e000 R14: 000000c0000004e0 R15: 00007f91dcfcd6dd
10.1.1.2: kern: warning: [2022-01-25T10:40:41.318839566Z]:
10.1.1.2: kern: warning: [2022-01-25T10:40:41.319032566Z]: Mem-Info:
10.1.1.2: kern: warning: [2022-01-25T10:40:41.319227566Z]: active_anon:12625 inactive_anon:470100 isolated_anon:0
 active_file:18 inactive_file:484 isolated_file:16
 unevictable:0 dirty:0 writeback:0
 slab_reclaimable:4964 slab_unreclaimable:4662
 mapped:51 shmem:12651 pagetables:1404 bounce:0
 kernel_misc_reclaimable:0
 free:3994 free_pcp:166 free_cma:0
10.1.1.2: kern: warning: [2022-01-25T10:40:41.321363566Z]: Node 0 active_anon:50500kB inactive_anon:1880400kB active_file:72kB inactive_file:2020kB unevictable:0kB isolated(anon):0kB isolated(file):64kB mapped:204kB dirty:0kB writeback:0kB shmem:50604kB writeback_tmp:0kB kernel_stack:3616kB pagetables:5616kB all_unreclaimable? yes
10.1.1.2: kern: warning: [2022-01-25T10:40:41.322793566Z]: Node 0 DMA free:7596kB min:40kB low:52kB high:64kB reserved_highatomic:0KB active_anon:0kB inactive_anon:7716kB active_file:0kB inactive_file:12kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:40:41.324215566Z]: lowmem_reserve[]: 0 1890 1890 1890
10.1.1.2: kern: warning: [2022-01-25T10:40:41.324507566Z]: Node 0 DMA32 free:7876kB min:5540kB low:7476kB high:9412kB reserved_highatomic:0KB active_anon:50500kB inactive_anon:1872684kB active_file:388kB inactive_file:1980kB unevictable:0kB writepending:0kB present:2080632kB managed:1994552kB mlocked:0kB bounce:0kB free_pcp:684kB local_pcp:252kB free_cma:0kB
10.1.1.2: kern: warning: [2022-01-25T10:40:41.326035566Z]: lowmem_reserve[]: 0 0 0 0
10.1.1.2: kern: warning: [2022-01-25T10:40:41.327177566Z]: Node 0 DMA: 1*4kB (U) 3*8kB (UM) 1*16kB (U) 0*32kB 2*64kB (UM) 2*128kB (UM) 2*256kB (UM) 1*512kB (U) 0*1024kB 1*2048kB (M) 1*4096kB (M) = 7596kB
10.1.1.2: kern: warning: [2022-01-25T10:40:41.327978566Z]: Node 0 DMA32: 105*4kB (UE) 120*8kB (UME) 79*16kB (UME) 30*32kB (UME) 17*64kB (UME) 7*128kB (UME) 1*256kB (M) 2*512kB (UM) 1*1024kB (M) 0*2048kB 0*4096kB = 7892kB
10.1.1.2: kern: info: [2022-01-25T10:40:41.328878566Z]: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
10.1.1.2: kern: warning: [2022-01-25T10:40:41.329400566Z]: 13313 total pagecache pages
10.1.1.2: kern: warning: [2022-01-25T10:40:41.329659566Z]: 0 pages in swap cache
10.1.1.2: kern: warning: [2022-01-25T10:40:41.329895566Z]: Swap cache stats: add 0, delete 0, find 0/0
10.1.1.2: kern: warning: [2022-01-25T10:40:41.330223566Z]: Free swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:40:41.330436566Z]: Total swap = 0kB
10.1.1.2: kern: warning: [2022-01-25T10:40:41.330655566Z]: 524156 pages RAM
10.1.1.2: kern: warning: [2022-01-25T10:40:41.330871566Z]: 0 pages HighMem/MovableOnly
10.1.1.2: kern: warning: [2022-01-25T10:40:41.331130566Z]: 21678 pages reserved
10.1.1.2: kern: info: [2022-01-25T10:40:41.331368566Z]: Tasks state (memory values in pages):
10.1.1.2: kern: info: [2022-01-25T10:40:41.331691566Z]: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
10.1.1.2: kern: info: [2022-01-25T10:40:41.332238566Z]: [ 664] 0 664 188197 4105 208896 0 -999 containerd
10.1.1.2: kern: info: [2022-01-25T10:40:41.332779566Z]: [ 1121] 0 1121 189038 4902 217088 0 -500 containerd
10.1.1.2: kern: info: [2022-01-25T10:40:41.333324566Z]: [ 1148] 0 1148 177754 558 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:40:41.333895566Z]: [ 1150] 0 1150 177690 614 106496 0 -998 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:40:41.334466566Z]: [ 1192] 50 1192 193598 2894 258048 0 -998 apid
10.1.1.2: kern: info: [2022-01-25T10:40:41.334992566Z]: [ 1193] 51 1193 193598 2697 249856 0 -998 trustd
10.1.1.2: kern: info: [2022-01-25T10:40:41.335526566Z]: [ 1289] 0 1289 177754 624 106496 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:40:41.336097566Z]: [ 1309] 0 1309 483992 6661 516096 0 -999 kubelet
10.1.1.2: kern: info: [2022-01-25T10:40:41.336628566Z]: [ 1629] 0 1629 177690 499 98304 0 -499 containerd-shim
10.1.1.2: kern: info: [2022-01-25T10:40:41.337192566Z]: [ 1650] 60 1650 3228292 438737 3739648 0 -998 etcd
10.1.1.2: kern: info: [2022-01-25T10:40:41.337721566Z]: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=init,mems_allowed=0,global_oom,task_memcg=/system/etcd,task=etcd,pid=1650,uid=60
10.1.1.2: kern: err: [2022-01-25T10:40:41.338530566Z]: Out of memory: Killed process 1650 (etcd) total-vm:12913168kB, anon-rss:1754948kB, file-rss:0kB, shmem-rss:0kB, UID:60 pgtables:3652kB oom_score_adj:-998
10.1.1.2: kern: info: [2022-01-25T10:40:41.427771566Z]: oom_reaper: reaped process 1650 (etcd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
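The kill record above shows why the cluster never comes back: etcd alone held nearly all of the node's RAM when the OOM killer fired. A quick back-of-the-envelope check, using only figures copied from the log above (4 KiB pages, "524156 pages RAM", etcd's rss of 438737 pages, and the "anon-rss:1754948kB" from the kill record):

```python
# Figures taken verbatim from the OOM report above (units: 4 KiB pages / kB).
PAGE_KB = 4
total_ram_pages = 524156      # "524156 pages RAM"
etcd_rss_pages = 438737       # rss column of the etcd task entry
etcd_anon_rss_kb = 1754948    # "anon-rss:1754948kB" in the kill record

total_mib = total_ram_pages * PAGE_KB / 1024
etcd_mib = etcd_rss_pages * PAGE_KB / 1024

# The rss page count and the anon-rss kB figure describe the same memory.
assert etcd_rss_pages * PAGE_KB == etcd_anon_rss_kb

print(f"node RAM: {total_mib:.0f} MiB")
print(f"etcd RSS: {etcd_mib:.0f} MiB ({etcd_mib / total_mib:.0%} of RAM)")
```

That works out to etcd using roughly 1.7 GiB of a ~2 GiB node, which matches the "Out of memory: Killed process 1650 (etcd)" line: after the reboot, etcd's recovery workload does not fit in the memory this VM has.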
10.1.1.2: user: warning: [2022-01-25T10:40:41.876458566Z]: [talos] service[apid](Running): Health check failed: dial tcp 127.0.0.1:50000: i/o timeout
10.1.1.2: user: warning: [2022-01-25T10:40:42.044935566Z]: [talos] service[apid](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:40:42.070712566Z]: [talos] service[cri](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:40:42.081034566Z]: [talos] service[containerd](Running): Health check failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
10.1.1.2: user: warning: [2022-01-25T10:40:42.094682566Z]: [talos] service[udevd](Waiting): Error running Process(["/sbin/udevd" "--resolve-names=never"]), going to restart forever: signal: killed
10.1.1.2: user: warning: [2022-01-25T10:40:42.095537566Z]: [talos] kubernetes registry node watch error {"component": "controller-runtime", "controller": "cluster.KubernetesPullController", "error": "failed to list *v1.Node: Get \"https://10.1.1.2:6443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:40:42.124883566Z]: [talos] service[containerd](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:40:42.202658566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "cluster.KubernetesPushController", "error": "error pushing to Kubernetes registry: failed to get node \"talos-server-01\": Get \"https://10.1.1.2:6443/api/v1/nodes/talos-server-01?timeout=30s\": dial tcp 10.1.1.2:6443: connect: connection refused"}
10.1.1.2: user: warning: [2022-01-25T10:40:42.500951566Z]: [talos] service[cri](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:40:43.264446566Z]: [talos] service[etcd](Waiting): Error running Containerd(etcd), going to restart forever: task "etcd" failed: exit code 137
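The "exit code 137" in the etcd restart line is consistent with the OOM kill earlier in the log: container runtimes report a fatal signal as 128 plus the signal number, and 137 − 128 = 9 is SIGKILL, the signal the kernel OOM killer sends. A minimal check:

```python
import signal

# Container exit codes above 128 encode "killed by signal (code - 128)".
exit_code = 137
sig = signal.Signals(exit_code - 128)
assert sig == signal.SIGKILL  # 9: what the OOM killer delivers

print(f"etcd exit code {exit_code} -> killed by {sig.name}")
```

So etcd is not crashing on its own; it is being SIGKILLed by the kernel, restarted by Talos, and then killed again each time it runs out of memory, which is why `talosctl health` never succeeds.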
10.1.1.2: daemon: info: [2022-01-25T10:40:47.122249566Z]: udevd[1704]: starting version 3.2.10
10.1.1.2: user: warning: [2022-01-25T10:40:47.123621566Z]: [talos] service[udevd](Running): Process Process(["/sbin/udevd" "--resolve-names=never"]) started with PID 1704
10.1.1.2: daemon: info: [2022-01-25T10:40:47.310820566Z]: udevd[1704]: starting eudev-3.2.10
10.1.1.2: user: warning: [2022-01-25T10:40:49.300690566Z]: [talos] service[etcd](Running): Started task etcd (PID 1731) for container etcd
10.1.1.2: user: warning: [2022-01-25T10:40:52.436744566Z]: [talos] service[kubelet](Running): Health check successful
10.1.1.2: user: warning: [2022-01-25T10:40:58.101161566Z]: [talos] controller failed {"component": "controller-runtime", "controller": "k8s.KubeletStaticPodController", "error": "error refreshing pod status: error fetching pod status: an error on the server (\"Authorization error (user=apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)\") has prevented the request from succeeding"}
yonggan@Yonggan-Aspire-A5 ~/N/P/talos> talosctl health
healthcheck error: rpc error: code = Unknown desc = error discovering nodes: Get "https://10.1.1.2:6443/api/v1/nodes": dial tcp 10.1.1.2:6443: connect: connection refused
yonggan@Yonggan-Aspire-A5 ~/N/P/talos [1]>
Environment