k3s-io / k3s

Lightweight Kubernetes
https://k3s.io
Apache License 2.0

System-Upgrade Controller not able to schedule pods due to pod affinity/selector, preemption not helpful for scheduling #9350

Closed: Bonn93 closed this issue 1 month ago

Bonn93 commented 7 months ago

Environmental Info: K3s Version:

k3s version v1.28.2+k3s1 (6330a5b4)
go version go1.20.8

Node(s) CPU architecture, OS, and Version:

Linux k3s-server.internal.self-hosted.io 4.18.0-513.11.1.el8_9.x86_64 #1 SMP Wed Jan 10 22:58:54 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Cluster Configuration: 3 nodes (1 server, 2 agents)

Describe the bug: upgrade-controller pods are stuck Pending and never scheduled

max@valhalla:~$ kubectl -n system-upgrade describe plan/server-plan 
Name:         server-plan
Namespace:    system-upgrade
Labels:       <none>
Annotations:  <none>
API Version:  upgrade.cattle.io/v1
Kind:         Plan
Metadata:
  Creation Timestamp:  2023-09-29T03:48:15Z
  Generation:          1
  Resource Version:    721789
  UID:                 84d1a181-8f5f-4d27-835f-6913f8c92778
Spec:
  Channel:      https://update.k3s.io/v1-release/channels/latest
  Concurrency:  1
  Cordon:       true
  Node Selector:
    Match Expressions:
      Key:       node-role.kubernetes.io/control-plane
      Operator:  In
      Values:
        true
  Service Account Name:  system-upgrade
  Upgrade:
    Image:  rancher/k3s-upgrade
Status:
  Conditions:
    Last Update Time:  2023-10-13T20:50:54Z
    Reason:            PlanIsValid
    Status:            True
    Type:              Validated
    Last Update Time:  2023-10-13T20:50:54Z
    Reason:            Channel
    Status:            True
    Type:              LatestResolved
  Latest Hash:         bd43117482ce9baecbde43be147e9c6a54453a66f43f619c260ccf10
  Latest Version:      v1.28.2-k3s1
Events:                <none>
max@valhalla:~$ kubectl -n system-upgrade describe po/system-upgrade-controller-6f7685c6b6-lsqlg
Name:             system-upgrade-controller-6f7685c6b6-lsqlg
Namespace:        system-upgrade
Priority:         0
Service Account:  system-upgrade
Node:             <none>
Labels:           pod-template-hash=6f7685c6b6
                  upgrade.cattle.io/controller=system-upgrade-controller
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/system-upgrade-controller-6f7685c6b6
Containers:
  system-upgrade-controller:
    Image:           rancher/system-upgrade-controller:v0.13.1
    Port:            <none>
    Host Port:       <none>
    SeccompProfile:  RuntimeDefault
    Environment Variables from:
      default-controller-env  ConfigMap  Optional: false
    Environment:
      SYSTEM_UPGRADE_CONTROLLER_NAME:        (v1:metadata.labels['upgrade.cattle.io/controller'])
      SYSTEM_UPGRADE_CONTROLLER_NAMESPACE:  system-upgrade (v1:metadata.namespace)
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/pki from etc-pki (ro)
      /etc/ssl from etc-ssl (ro)
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x5ptj (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  etc-ssl:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl
    HostPathType:  DirectoryOrCreate
  etc-pki:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/pki
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-x5ptj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                             node-role.kubernetes.io/controlplane:NoSchedule op=Exists
                             node-role.kubernetes.io/etcd:NoExecute op=Exists
                             node-role.kubernetes.io/master:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  5m31s (x300 over 25h)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
max@valhalla:~$ kubectl -n system-upgrade describe po/system-upgrade-controller-68bfff5fb5-q2sj8
Name:             system-upgrade-controller-68bfff5fb5-q2sj8
Namespace:        system-upgrade
Priority:         0
Service Account:  system-upgrade
Node:             <none>
Labels:           pod-template-hash=68bfff5fb5
                  upgrade.cattle.io/controller=system-upgrade-controller
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/system-upgrade-controller-68bfff5fb5
Containers:
  system-upgrade-controller:
    Image:           rancher/system-upgrade-controller:v0.13.2
    Port:            <none>
    Host Port:       <none>
    SeccompProfile:  RuntimeDefault
    Environment Variables from:
      default-controller-env  ConfigMap  Optional: false
    Environment:
      SYSTEM_UPGRADE_CONTROLLER_NAME:        (v1:metadata.labels['upgrade.cattle.io/controller'])
      SYSTEM_UPGRADE_CONTROLLER_NAMESPACE:  system-upgrade (v1:metadata.namespace)
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/pki from etc-pki (ro)
      /etc/ssl from etc-ssl (ro)
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2ds9t (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  etc-ssl:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl
    HostPathType:  DirectoryOrCreate
  etc-pki:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/pki
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-2ds9t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                             node-role.kubernetes.io/controlplane:NoSchedule op=Exists
                             node-role.kubernetes.io/etcd:NoExecute op=Exists
                             node-role.kubernetes.io/master:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  12m                  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
  Warning  FailedScheduling  2m4s (x2 over 7m4s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..

Steps To Reproduce: Apply the system-upgrade-controller and a plan targeting the channel, per https://docs.k3s.io/upgrades/automated
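For anyone reproducing, a minimal sketch (install manifest URL per the linked docs; the Plan is reconstructed from the kubectl describe plan/server-plan output above, so treat it as an approximation):

kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml

# server-plan.yaml (reconstructed from the describe output, not the original manifest)
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  channel: https://update.k3s.io/v1-release/channels/latest
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values: ["true"]
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade

kubectl apply -f server-plan.yaml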

Expected behavior: System upgrades are applied

Actual behavior: System upgrades never run because the controller pods cannot be scheduled, and it's unclear what parameters would allow them to schedule, or why they are failing to schedule.
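To see why the scheduler rejects each node, something like the following should help (deployment name taken from the ReplicaSet owner shown above; the affinity jsonpath prints nothing if no affinity is set):

kubectl get nodes --show-labels
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
kubectl -n system-upgrade get deploy system-upgrade-controller -o jsonpath='{.spec.template.spec.affinity}'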

Additional context / logs:

Bonn93 commented 7 months ago

I noticed my master seems to be tainted and marked unschedulable?

max@valhalla:~$ kubectl describe nodes | grep Taints
Taints:             node.kubernetes.io/unschedulable:NoSchedule
Taints:             <none>
Taints:             <none>
brandond commented 7 months ago

Describe that node to see why it is tainted unschedulable.
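e.g., with the node name from the grep output above:

kubectl describe node k3s-server.internal.self-hosted.io
kubectl get node k3s-server.internal.self-hosted.io -o yaml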

Bonn93 commented 7 months ago
max@valhalla:~$ kubectl describe nodes
Name:               k3s-server.internal.self-hosted.io
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=k3s
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k3s-server.internal.self-hosted.io
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=true
                    node-role.kubernetes.io/master=true
                    node.kubernetes.io/instance-type=k3s
                    plan.upgrade.cattle.io/server-plan=bd43117482ce9baecbde43be147e9c6a54453a66f43f619c260ccf10
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"b2:b9:15:1d:7a:bc"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.5.100.10
                    k3s.io/hostname: k3s-server.internal.self-hosted.io
                    k3s.io/internal-ip: 10.5.100.10
                    k3s.io/node-args: ["server"]
                    k3s.io/node-config-hash: 73A7SMJ4D7PWMEBT7QVDPVFUYYRUM2UJBWJC4LF77IDAAFKPEKXA====
                    k3s.io/node-env: {"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/ab2055bc72380bad965b219e8688ac02b2e1b665cad6bdde1f8f087637aa81df"}
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 29 Sep 2023 10:34:22 +1000
Taints:             node.kubernetes.io/unschedulable:NoSchedule
Unschedulable:      true

Most docs I find state the server usually runs workloads too. I'm not sure why the taint and NoSchedule are present; in most of the other issues like this, the suggested workarounds don't work.

Bonn93 commented 7 months ago

Seems uncordoning the node did the trick, but I'm not sure how or why the cordon was in place?
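(For reference: kubectl uncordon k3s-server.internal.self-hosted.io clears this state. The node.kubernetes.io/unschedulable:NoSchedule taint simply mirrors the Unschedulable: true / spec.unschedulable flag shown in the describe output above, which is what cordoning sets.)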

max@valhalla:~$ kubectl -n system-upgrade get po
NAME                                                              READY   STATUS     RESTARTS   AGE
system-upgrade-controller-68bfff5fb5-q2sj8                        1/1     Running    0          22h
apply-agent-plan-on-k3s-agent-01-with-abb6025698c795b8e21-x7lzg   0/1     Init:0/2   0          16s
apply-server-plan-on-k3s-server-with-abb6025698c795b8e219-qf2qf   1/1     Running    0          16s
brandond commented 7 months ago

That taint is usually added by the controller-manager when there is some problem reported by the kubelet. It should be removed when the problem goes away. You didn't include the full describe output though so I can't see what that might have been.
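If it recurs, a capture along these lines at the time of failure would be useful (<node-name> is a placeholder; substitute the tainted node's name):

kubectl get node <node-name> -o yaml > node.yaml   # full spec + status, including taints and conditions
kubectl get events -A --field-selector involvedObject.kind=Node,involvedObject.name=<node-name>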

tim-oe commented 6 months ago

Ran into the same issue; applying the same workaround fixed it. If I hit this again when I next need to upgrade, what information would be beneficial to post? A describe of the node with the taint?

brandond commented 6 months ago

kubectl describe node or kubectl get node -o yaml would be great.

github-actions[bot] commented 4 months ago

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 45 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

tim-oe commented 4 months ago

Still looking into capturing the data for the above; got distracted by my actual job.

github-actions[bot] commented 3 months ago

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 45 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

tim-oe commented 3 months ago

Still trying to get cycles to try again; maybe this weekend...

tim-oe commented 2 months ago

I just deployed the controller and it upgraded without error.

SYSTEM_UPGRADE_CONTROLLER_VERSION=v0.13.4, K3S_VERSION=v1.30.2+k3s1. Here's the node info anyway:

apiVersion: v1
items:
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      alpha.kubernetes.io/provided-node-ip: 192.168.1.81
      flannel.alpha.coreos.com/backend-data: '{"VNI":1,"VtepMAC":"46:ee:d9:4d:6c:f2"}'
      flannel.alpha.coreos.com/backend-type: vxlan
      flannel.alpha.coreos.com/kube-subnet-manager: "true"
      flannel.alpha.coreos.com/public-ip: 192.168.1.81
      k3s.io/hostname: tec-kube-ctlr
      k3s.io/internal-ip: 192.168.1.81
      k3s.io/node-args: '["server"]'
      k3s.io/node-config-hash: YWWKL5DENFWV72F5II3YCDXCNJD4CSEJ3ALS3FOVHHEHTAWT2PIA====
      k3s.io/node-env: '{"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/04062516f863dba6fbbbb251ae40e2cc82756b587e30fc88e9659564ec85a68f"}'
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2023-10-02T05:12:54Z"
    finalizers:
    - wrangler.cattle.io/node
    labels:
      beta.kubernetes.io/arch: arm64
      beta.kubernetes.io/instance-type: k3s
      beta.kubernetes.io/os: linux
      k3s-upgrade: "true"
      kubernetes.io/arch: arm64
      kubernetes.io/hostname: tec-kube-ctlr
      kubernetes.io/os: linux
      node-role.kubernetes.io/control-plane: "true"
      node.kubernetes.io/instance-type: k3s
      plan.upgrade.cattle.io/k3s-server: 5123f63f9cb4caadecdca8849a38540d48ffbcdc5e26e00fe6680c01
    name: tec-kube-ctlr
    resourceVersion: "8984451"
    uid: 2ed2a664-20b3-47a5-9076-90233aed110a
  spec:
    podCIDR: 10.42.0.0/24
    podCIDRs:
    - 10.42.0.0/24
    providerID: k3s://tec-kube-ctlr
  status:
    addresses:
    - address: 192.168.1.81
      type: InternalIP
    - address: tec-kube-ctlr
      type: Hostname
    allocatable:
      cpu: "4"
      ephemeral-storage: "119237286614"
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      hugepages-32Mi: "0"
      hugepages-64Ki: "0"
      memory: 3880980Ki
      pods: "110"
    capacity:
      cpu: "4"
      ephemeral-storage: 122571224Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      hugepages-32Mi: "0"
      hugepages-64Ki: "0"
      memory: 3880980Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: "2024-06-30T01:18:53Z"
      lastTransitionTime: "2023-10-02T05:12:54Z"
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: "2024-06-30T01:18:53Z"
      lastTransitionTime: "2023-10-02T05:12:54Z"
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: "2024-06-30T01:18:53Z"
      lastTransitionTime: "2023-10-02T05:12:54Z"
      message: kubelet has sufficient PID available
      reason: KubeletHasSufficientPID
      status: "False"
      type: PIDPressure
    - lastHeartbeatTime: "2024-06-30T01:18:53Z"
      lastTransitionTime: "2024-06-30T01:12:16Z"
      message: kubelet is posting ready status
      reason: KubeletReady
      status: "True"
      type: Ready
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - docker.io/rancher/klipper-helm@sha256:c2fd922a9a361ac5ec7ef225a46aaaad1e79ec3acc3cf176f60cd09a11683dd5
      - docker.io/rancher/klipper-helm:v0.8.4-build20240523
      sizeBytes: 87267777
    - names:
      - docker.io/rancher/klipper-helm@sha256:b0b0c4f73f2391697edb52adffe4fc490de1c8590606024515bb906b2813554a
      - docker.io/rancher/klipper-helm:v0.8.2-build20230815
      sizeBytes: 83499502
    - names:
      - docker.io/rancher/k3s-upgrade@sha256:cb1ba64ab333aee0c68eeaeb2be524a6b3db1da3d266f2415966e98158c19e63
      - docker.io/rancher/k3s-upgrade:v1.30.2-k3s1
      sizeBytes: 59900194
    - names:
      - docker.io/rancher/k3s-upgrade@sha256:dce0af2b0b3d38efa6636e72359163c47c3e4c00ce801552c648acd4f28f548f
      - docker.io/rancher/k3s-upgrade:v1.29.2-k3s1
      sizeBytes: 59159643
    - names:
      - docker.io/rancher/mirrored-library-traefik@sha256:aaec134463b277ca7aa4f88807c8b67f2ec05d92a8f0432c0540b7ecc8fe724a
      - docker.io/rancher/mirrored-library-traefik:2.9.10
      sizeBytes: 36510615
    - names:
      - docker.io/rancher/mirrored-metrics-server@sha256:c2dfd72bafd6406ed306d9fbd07f55c496b004293d13d3de88a4567eacc36558
      - docker.io/rancher/mirrored-metrics-server:v0.6.3
      sizeBytes: 27955994
    - names:
      - docker.io/rancher/mirrored-metrics-server@sha256:20b8b36f8cac9e25aa2a0ff35147b13643bfec603e7e7480886632330a3bbc59
      - docker.io/rancher/mirrored-metrics-server:v0.7.0
      sizeBytes: 17809919
    - names:
      - docker.io/rancher/local-path-provisioner@sha256:3617a2fb32bf59d06861dd4c3cfb7ba7e66ae2a34bb5443e625fe490df463c71
      - docker.io/rancher/local-path-provisioner:v0.0.27
      sizeBytes: 17130914
    - names:
      - docker.io/rancher/mirrored-coredns-coredns@sha256:a11fafae1f8037cbbd66c5afa40ba2423936b72b4fd50a7034a7e8b955163594
      - docker.io/rancher/mirrored-coredns-coredns:1.10.1
      sizeBytes: 14556850
    - names:
      - docker.io/rancher/local-path-provisioner@sha256:5bb33992a4ec3034c28b5e0b3c4c2ac35d3613b25b79455eb4b1a95adc82cdc0
      - docker.io/rancher/local-path-provisioner:v0.0.24
      sizeBytes: 13884168
    - names:
      - docker.io/rancher/kubectl@sha256:9be095ca0bbc74e8947a1d4a0258875304b590057d858eb9738de000f88a473e
      - docker.io/rancher/kubectl:v1.25.4
      sizeBytes: 13045642
    - names:
      - docker.io/rancher/system-upgrade-controller@sha256:3df6d01b9eb583a78c309ce0b2cfeed98a9af97983e4ea96bf53410dd56c6f45
      - docker.io/rancher/system-upgrade-controller:v0.13.4
      sizeBytes: 9794197
    - names:
      - docker.io/rancher/klipper-lb@sha256:fa2257de248f46c303d0f39a8ebe8644ba5ac63d332c7d02bf6ee26a981243bc
      - docker.io/rancher/klipper-lb:v0.4.5
      sizeBytes: 7877529
    - names:
      - docker.io/rancher/klipper-lb@sha256:d6780e97ac25454b56f88410b236d52572518040f11d0db5c6baaac0d2fcf860
      - docker.io/rancher/klipper-lb:v0.4.4
      sizeBytes: 5068868
    - names:
      - docker.io/rancher/klipper-lb@sha256:558dcf96bf0800d9977ef46dca18411752618cd9dd06daeb99460c0a301d0a60
      - docker.io/rancher/klipper-lb:v0.4.7
      sizeBytes: 4939041
    - names:
      - docker.io/rancher/mirrored-pause@sha256:74c4244427b7312c5b901fe0f67cbc53683d06f4f24c6faee65d4182bf0fa893
      - docker.io/rancher/mirrored-pause:3.6
      sizeBytes: 253243
    nodeInfo:
      architecture: arm64
      bootID: 6c97037e-a4cd-4fa6-a9bb-8a9c4030b209
      containerRuntimeVersion: containerd://1.7.17-k3s1
      kernelVersion: 5.15.0-1055-raspi
      kubeProxyVersion: v1.30.2+k3s1
      kubeletVersion: v1.30.2+k3s1
      machineID: ef24a964b9bc4d42b6caafc4a6110bfa
      operatingSystem: linux
      osImage: Ubuntu 22.04.4 LTS
      systemUUID: ef24a964b9bc4d42b6caafc4a6110bfa
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      alpha.kubernetes.io/provided-node-ip: 192.168.1.82
      flannel.alpha.coreos.com/backend-data: '{"VNI":1,"VtepMAC":"1e:46:fb:27:f2:9b"}'
      flannel.alpha.coreos.com/backend-type: vxlan
      flannel.alpha.coreos.com/kube-subnet-manager: "true"
      flannel.alpha.coreos.com/public-ip: 192.168.1.82
      k3s.io/hostname: tec-kube-n1
      k3s.io/internal-ip: 192.168.1.82
      k3s.io/node-args: '["agent"]'
      k3s.io/node-config-hash: NUPA2EKVSM5EQD6IRHCFXUMEPHK235KEXMWBGGUK32JJHHJ35JCQ====
      k3s.io/node-env: '{"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/04062516f863dba6fbbbb251ae40e2cc82756b587e30fc88e9659564ec85a68f","K3S_TOKEN":"********","K3S_URL":"https://tec-kube-ctlr:6443"}'
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2023-10-03T05:22:18Z"
    finalizers:
    - wrangler.cattle.io/node
    labels:
      beta.kubernetes.io/arch: arm64
      beta.kubernetes.io/instance-type: k3s
      beta.kubernetes.io/os: linux
      k3s-upgrade: "true"
      kubernetes.io/arch: arm64
      kubernetes.io/hostname: tec-kube-n1
      kubernetes.io/os: linux
      node.kubernetes.io/instance-type: k3s
      plan.upgrade.cattle.io/k3s-agent: 5123f63f9cb4caadecdca8849a38540d48ffbcdc5e26e00fe6680c01
    name: tec-kube-n1
    resourceVersion: "8984330"
    uid: e737ecec-b204-4cbc-979c-372200b1c900
  spec:
    podCIDR: 10.42.1.0/24
    podCIDRs:
    - 10.42.1.0/24
    providerID: k3s://tec-kube-n1
  status:
    addresses:
    - address: 192.168.1.82
      type: InternalIP
    - address: tec-kube-n1
      type: Hostname
    allocatable:
      cpu: "4"
      ephemeral-storage: "119237286614"
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      hugepages-32Mi: "0"
      hugepages-64Ki: "0"
      memory: 3880980Ki
      pods: "110"
    capacity:
      cpu: "4"
      ephemeral-storage: 122571224Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      hugepages-32Mi: "0"
      hugepages-64Ki: "0"
      memory: 3880980Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: "2024-06-30T01:15:41Z"
      lastTransitionTime: "2024-03-13T04:30:18Z"
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: "2024-06-30T01:15:41Z"
      lastTransitionTime: "2024-03-13T04:30:18Z"
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: "2024-06-30T01:15:41Z"
      lastTransitionTime: "2024-03-13T04:30:18Z"
      message: kubelet has sufficient PID available
      reason: KubeletHasSufficientPID
      status: "False"
      type: PIDPressure
    - lastHeartbeatTime: "2024-06-30T01:15:41Z"
      lastTransitionTime: "2024-06-30T01:15:41Z"
      message: kubelet is posting ready status
      reason: KubeletReady
      status: "True"
      type: Ready
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - docker.io/library/sonarqube@sha256:c6c8096375002d4cb2ef64b89a2736ad572812a87a2917d92e7e59384b9f6f65
      - docker.io/library/sonarqube:10.2-community
      sizeBytes: 475897165
    - names:
      - docker.io/jenkins/jenkins@sha256:662adb3b4f0e77a5f107b7d99af8c868707a4abc3808c381a15b170dfb417bea
      - docker.io/jenkins/jenkins:lts-jdk17
      sizeBytes: 287704341
    - names:
      - quay.io/argoproj/argocd@sha256:5f200a0efcf08abfd61d28165893edc9dce48261970d3280b7faef93617a43aa
      - quay.io/argoproj/argocd:v2.10.2
      sizeBytes: 160425986
    - names:
      - registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
      sizeBytes: 95119480
    - names:
      - docker.io/rancher/klipper-helm@sha256:c2fd922a9a361ac5ec7ef225a46aaaad1e79ec3acc3cf176f60cd09a11683dd5
      - docker.io/rancher/klipper-helm:v0.8.4-build20240523
      sizeBytes: 87267777
    - names:
      - docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
      - docker.io/kubernetesui/dashboard:v2.7.0
      sizeBytes: 74084559
    - names:
      - docker.io/kubernetesui/dashboard@sha256:148991563e374c83b75e8c51bca75f512d4f006ddc791e96a91f1c7420b60bd9
      - docker.io/kubernetesui/dashboard:v2.2.0
      sizeBytes: 66329887
    - names:
      - docker.io/rancher/k3s-upgrade@sha256:cb1ba64ab333aee0c68eeaeb2be524a6b3db1da3d266f2415966e98158c19e63
      - docker.io/rancher/k3s-upgrade:v1.30.2-k3s1
      sizeBytes: 59900194
    - names:
      - docker.io/rancher/k3s-upgrade@sha256:dce0af2b0b3d38efa6636e72359163c47c3e4c00ce801552c648acd4f28f548f
      - docker.io/rancher/k3s-upgrade:v1.29.2-k3s1
      sizeBytes: 59159643
    - names:
      - docker.io/rancher/mirrored-library-traefik@sha256:606c4c924d9edd6d028a010c8f173dceb34046ed64fabdbce9ff29b2cf2b3042
      - docker.io/rancher/mirrored-library-traefik:2.10.7
      sizeBytes: 40222856
    - names:
      - docker.io/rancher/mirrored-library-traefik@sha256:ca9c8fbe001070c546a75184e3fd7f08c3e47dfc1e89bff6fe2edd302accfaec
      - docker.io/rancher/mirrored-library-traefik:2.10.5
      sizeBytes: 40129288
    - names:
      - gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:b31bcf7ef4420ce7108e7fc10b6c00343b21257c945eec94c21598e72a8f2de0
      - gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
      sizeBytes: 29286715
    - names:
      - docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
      - docker.io/kubernetesui/metrics-scraper:v1.0.8
      sizeBytes: 18306114
    - names:
      - docker.io/rancher/local-path-provisioner@sha256:aee53cadc62bd023911e7f077877d047c5b3c269f9bba25724d558654f43cea0
      - docker.io/rancher/local-path-provisioner:v0.0.26
      sizeBytes: 15933947
    - names:
      - docker.io/rancher/kubectl@sha256:9be095ca0bbc74e8947a1d4a0258875304b590057d858eb9738de000f88a473e
      - docker.io/rancher/kubectl:v1.25.4
      sizeBytes: 13045642
    - names:
      - docker.io/rancher/klipper-lb@sha256:fa2257de248f46c303d0f39a8ebe8644ba5ac63d332c7d02bf6ee26a981243bc
      - docker.io/rancher/klipper-lb:v0.4.5
      sizeBytes: 7877529
    - names:
      - docker.io/rancher/klipper-lb@sha256:d6780e97ac25454b56f88410b236d52572518040f11d0db5c6baaac0d2fcf860
      - docker.io/rancher/klipper-lb:v0.4.4
      sizeBytes: 5068868
    - names:
      - docker.io/rancher/klipper-lb@sha256:558dcf96bf0800d9977ef46dca18411752618cd9dd06daeb99460c0a301d0a60
      - docker.io/rancher/klipper-lb:v0.4.7
      sizeBytes: 4939041
    - names:
      - docker.io/rancher/mirrored-library-busybox@sha256:125dfcbe72a0158c16781d3ad254c0d226a6534b59cc7c2bf549cdd50c6e8989
      - docker.io/rancher/mirrored-library-busybox:1.34.1
      sizeBytes: 2000508
    - names:
      - docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79
      - docker.io/library/busybox:latest
      sizeBytes: 1920927
    - names:
      - docker.io/rancher/mirrored-pause@sha256:74c4244427b7312c5b901fe0f67cbc53683d06f4f24c6faee65d4182bf0fa893
      - docker.io/rancher/mirrored-pause:3.6
      sizeBytes: 253243
    nodeInfo:
      architecture: arm64
      bootID: 2df884ca-9789-4e01-863f-fe1063154796
      containerRuntimeVersion: containerd://1.7.17-k3s1
      kernelVersion: 5.15.0-1055-raspi
      kubeProxyVersion: v1.30.2+k3s1
      kubeletVersion: v1.30.2+k3s1
      machineID: ef24a964b9bc4d42b6caafc4a6110bfa
      operatingSystem: linux
      osImage: Ubuntu 22.04.4 LTS
      systemUUID: ef24a964b9bc4d42b6caafc4a6110bfa
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      alpha.kubernetes.io/provided-node-ip: 192.168.1.79
      flannel.alpha.coreos.com/backend-data: '{"VNI":1,"VtepMAC":"26:dc:d5:9c:9e:86"}'
      flannel.alpha.coreos.com/backend-type: vxlan
      flannel.alpha.coreos.com/kube-subnet-manager: "true"
      flannel.alpha.coreos.com/public-ip: 192.168.1.79
      k3s.io/hostname: tec-kube-n2
      k3s.io/internal-ip: 192.168.1.79
      k3s.io/node-args: '["agent"]'
      k3s.io/node-config-hash: NUPA2EKVSM5EQD6IRHCFXUMEPHK235KEXMWBGGUK32JJHHJ35JCQ====
      k3s.io/node-env: '{"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/04062516f863dba6fbbbb251ae40e2cc82756b587e30fc88e9659564ec85a68f","K3S_TOKEN":"********","K3S_URL":"https://tec-kube-ctlr:6443"}'
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2023-10-03T05:27:30Z"
    finalizers:
    - wrangler.cattle.io/node
    labels:
      beta.kubernetes.io/arch: arm64
      beta.kubernetes.io/instance-type: k3s
      beta.kubernetes.io/os: linux
      k3s-upgrade: "true"
      kubernetes.io/arch: arm64
      kubernetes.io/hostname: tec-kube-n2
      kubernetes.io/os: linux
      node.kubernetes.io/instance-type: k3s
      plan.upgrade.cattle.io/k3s-agent: 5123f63f9cb4caadecdca8849a38540d48ffbcdc5e26e00fe6680c01
    name: tec-kube-n2
    resourceVersion: "8984180"
    uid: 2d8bd85b-f722-4676-8d54-8341ad408f26
  spec:
    podCIDR: 10.42.3.0/24
    podCIDRs:
    - 10.42.3.0/24
    providerID: k3s://tec-kube-n2
  status:
    addresses:
    - address: 192.168.1.79
      type: InternalIP
    - address: tec-kube-n2
      type: Hostname
    allocatable:
      cpu: "4"
      ephemeral-storage: "119237286614"
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      hugepages-32Mi: "0"
      hugepages-64Ki: "0"
      memory: 3880980Ki
      pods: "110"
    capacity:
      cpu: "4"
      ephemeral-storage: 122571224Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      hugepages-32Mi: "0"
      hugepages-64Ki: "0"
      memory: 3880980Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: "2024-06-30T01:15:12Z"
      lastTransitionTime: "2024-03-12T05:42:14Z"
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: "2024-06-30T01:15:12Z"
      lastTransitionTime: "2024-03-12T05:42:14Z"
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: "2024-06-30T01:15:12Z"
      lastTransitionTime: "2024-03-12T05:42:14Z"
      message: kubelet has sufficient PID available
      reason: KubeletHasSufficientPID
      status: "False"
      type: PIDPressure
    - lastHeartbeatTime: "2024-06-30T01:15:12Z"
      lastTransitionTime: "2024-06-30T01:13:51Z"
      message: kubelet is posting ready status
      reason: KubeletReady
      status: "True"
      type: Ready
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - docker.io/library/sonarqube@sha256:c6c8096375002d4cb2ef64b89a2736ad572812a87a2917d92e7e59384b9f6f65
      - docker.io/library/sonarqube:10.2-community
      sizeBytes: 475897165
    - names:
      - docker.io/jenkins/jenkins@sha256:662adb3b4f0e77a5f107b7d99af8c868707a4abc3808c381a15b170dfb417bea
      - docker.io/jenkins/jenkins:lts-jdk17
      sizeBytes: 287704341
    - names:
      - quay.io/argoproj/argocd@sha256:5f200a0efcf08abfd61d28165893edc9dce48261970d3280b7faef93617a43aa
      - quay.io/argoproj/argocd:v2.10.2
      sizeBytes: 160425986
    - names:
      - registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
      sizeBytes: 95119480
    - names:
      - docker.io/rancher/klipper-helm@sha256:c2fd922a9a361ac5ec7ef225a46aaaad1e79ec3acc3cf176f60cd09a11683dd5
      - docker.io/rancher/klipper-helm:v0.8.4-build20240523
      sizeBytes: 87267777
    - names:
      - docker.io/rancher/klipper-helm@sha256:b0b0c4f73f2391697edb52adffe4fc490de1c8590606024515bb906b2813554a
      - docker.io/rancher/klipper-helm:v0.8.2-build20230815
      sizeBytes: 83499502
    - names:
      - docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
      - docker.io/kubernetesui/dashboard:v2.7.0
      sizeBytes: 74084559
    - names:
      - docker.io/kubernetesui/dashboard@sha256:148991563e374c83b75e8c51bca75f512d4f006ddc791e96a91f1c7420b60bd9
      - docker.io/kubernetesui/dashboard:v2.2.0
      sizeBytes: 66329887
    - names:
      - docker.io/rancher/k3s-upgrade@sha256:cb1ba64ab333aee0c68eeaeb2be524a6b3db1da3d266f2415966e98158c19e63
      - docker.io/rancher/k3s-upgrade:v1.30.2-k3s1
      sizeBytes: 59900194
    - names:
      - docker.io/rancher/k3s-upgrade@sha256:dce0af2b0b3d38efa6636e72359163c47c3e4c00ce801552c648acd4f28f548f
      - docker.io/rancher/k3s-upgrade:v1.29.2-k3s1
      sizeBytes: 59159643
    - names:
      - docker.io/rancher/mirrored-library-traefik@sha256:606c4c924d9edd6d028a010c8f173dceb34046ed64fabdbce9ff29b2cf2b3042
      - docker.io/rancher/mirrored-library-traefik:2.10.7
      sizeBytes: 40222856
    - names:
      - docker.io/rancher/mirrored-library-traefik@sha256:ca9c8fbe001070c546a75184e3fd7f08c3e47dfc1e89bff6fe2edd302accfaec
      - docker.io/rancher/mirrored-library-traefik:2.10.5
      sizeBytes: 40129288
    - names:
      - registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334
      sizeBytes: 21986963
    - names:
      - docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
      - docker.io/kubernetesui/metrics-scraper:v1.0.8
      sizeBytes: 18306114
    - names:
      - docker.io/rancher/mirrored-metrics-server@sha256:20b8b36f8cac9e25aa2a0ff35147b13643bfec603e7e7480886632330a3bbc59
      - docker.io/rancher/mirrored-metrics-server:v0.7.0
      sizeBytes: 17809919
    - names:
      - docker.io/rancher/local-path-provisioner@sha256:aee53cadc62bd023911e7f077877d047c5b3c269f9bba25724d558654f43cea0
      - docker.io/rancher/local-path-provisioner:v0.0.26
      sizeBytes: 15933947
    - names:
      - docker.io/library/redis@sha256:45de526e9fbc1a4b183957ab93a448294181fae10ced9184fc6efe9956ca0ccc
      - docker.io/library/redis:7.0.14-alpine
      sizeBytes: 13519061
    - names:
      - docker.io/rancher/kubectl@sha256:9be095ca0bbc74e8947a1d4a0258875304b590057d858eb9738de000f88a473e
      - docker.io/rancher/kubectl:v1.25.4
      sizeBytes: 13045642
    - names:
      - docker.io/rancher/klipper-lb@sha256:fa2257de248f46c303d0f39a8ebe8644ba5ac63d332c7d02bf6ee26a981243bc
      - docker.io/rancher/klipper-lb:v0.4.5
      sizeBytes: 7877529
    - names:
      - docker.io/rancher/klipper-lb@sha256:d6780e97ac25454b56f88410b236d52572518040f11d0db5c6baaac0d2fcf860
      - docker.io/rancher/klipper-lb:v0.4.4
      sizeBytes: 5068868
    - names:
      - docker.io/rancher/klipper-lb@sha256:558dcf96bf0800d9977ef46dca18411752618cd9dd06daeb99460c0a301d0a60
      - docker.io/rancher/klipper-lb:v0.4.7
      sizeBytes: 4939041
    - names:
      - docker.io/rancher/mirrored-library-busybox@sha256:125dfcbe72a0158c16781d3ad254c0d226a6534b59cc7c2bf549cdd50c6e8989
      - docker.io/rancher/mirrored-library-busybox:1.34.1
      sizeBytes: 2000508
    - names:
      - docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79
      - docker.io/library/busybox:latest
      sizeBytes: 1920927
    - names:
      - docker.io/rancher/mirrored-pause@sha256:74c4244427b7312c5b901fe0f67cbc53683d06f4f24c6faee65d4182bf0fa893
      - docker.io/rancher/mirrored-pause:3.6
      sizeBytes: 253243
    nodeInfo:
      architecture: arm64
      bootID: b2edfc9f-332a-4a8f-b181-a89593bd732b
      containerRuntimeVersion: containerd://1.7.17-k3s1
      kernelVersion: 5.15.0-1055-raspi
      kubeProxyVersion: v1.30.2+k3s1
      kubeletVersion: v1.30.2+k3s1
      machineID: ef24a964b9bc4d42b6caafc4a6110bfa
      operatingSystem: linux
      osImage: Ubuntu 22.04.4 LTS
      systemUUID: ef24a964b9bc4d42b6caafc4a6110bfa
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      alpha.kubernetes.io/provided-node-ip: 192.168.1.80
      flannel.alpha.coreos.com/backend-data: '{"VNI":1,"VtepMAC":"fe:3d:fa:87:25:ab"}'
      flannel.alpha.coreos.com/backend-type: vxlan
      flannel.alpha.coreos.com/kube-subnet-manager: "true"
      flannel.alpha.coreos.com/public-ip: 192.168.1.80
      k3s.io/hostname: tec-kube-n3
      k3s.io/internal-ip: 192.168.1.80
      k3s.io/node-args: '["agent"]'
      k3s.io/node-config-hash: NUPA2EKVSM5EQD6IRHCFXUMEPHK235KEXMWBGGUK32JJHHJ35JCQ====
      k3s.io/node-env: '{"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/04062516f863dba6fbbbb251ae40e2cc82756b587e30fc88e9659564ec85a68f","K3S_TOKEN":"********","K3S_URL":"https://tec-kube-ctlr:6443"}'
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2023-10-03T05:27:29Z"
    finalizers:
    - wrangler.cattle.io/node
    labels:
      beta.kubernetes.io/arch: arm64
      beta.kubernetes.io/instance-type: k3s
      beta.kubernetes.io/os: linux
      k3s-upgrade: "true"
      kubernetes.io/arch: arm64
      kubernetes.io/hostname: tec-kube-n3
      kubernetes.io/os: linux
      node.kubernetes.io/instance-type: k3s
      plan.upgrade.cattle.io/k3s-agent: 5123f63f9cb4caadecdca8849a38540d48ffbcdc5e26e00fe6680c01
    name: tec-kube-n3
    resourceVersion: "8984086"
    uid: 07ddd93f-21ba-40cd-9e08-ab67ed2fc799
  spec:
    podCIDR: 10.42.2.0/24
    podCIDRs:
    - 10.42.2.0/24
    providerID: k3s://tec-kube-n3
  status:
    addresses:
    - address: 192.168.1.80
      type: InternalIP
    - address: tec-kube-n3
      type: Hostname
    allocatable:
      cpu: "4"
      ephemeral-storage: "119237286614"
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      hugepages-32Mi: "0"
      hugepages-64Ki: "0"
      memory: 3880980Ki
      pods: "110"
    capacity:
      cpu: "4"
      ephemeral-storage: 122571224Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      hugepages-32Mi: "0"
      hugepages-64Ki: "0"
      memory: 3880980Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: "2024-06-30T01:14:17Z"
      lastTransitionTime: "2024-03-12T05:43:46Z"
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: "2024-06-30T01:14:17Z"
      lastTransitionTime: "2024-03-12T05:43:46Z"
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: "2024-06-30T01:14:17Z"
      lastTransitionTime: "2024-03-12T05:43:46Z"
      message: kubelet has sufficient PID available
      reason: KubeletHasSufficientPID
      status: "False"
      type: PIDPressure
    - lastHeartbeatTime: "2024-06-30T01:14:17Z"
      lastTransitionTime: "2024-06-30T01:14:17Z"
      message: kubelet is posting ready status
      reason: KubeletReady
      status: "True"
      type: Ready
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - docker.io/jenkins/jenkins@sha256:80587de2dac2bb701cd0b14d35988e591d62589fd337a4b584f4c52101fd4e3c
      - docker.io/jenkins/jenkins:lts-jdk17
      sizeBytes: 287766691
    - names:
      - quay.io/argoproj/argocd@sha256:5f200a0efcf08abfd61d28165893edc9dce48261970d3280b7faef93617a43aa
      - quay.io/argoproj/argocd:v2.10.2
      sizeBytes: 160425986
    - names:
      - docker.io/library/postgres@sha256:3faff326de0fa3713424d44f3b85993459ac1917e0a4bfd35bab9e0a58e41900
      - docker.io/library/postgres:15.4
      sizeBytes: 146536322
    - names:
      - registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c
      sizeBytes: 95119480
    - names:
      - docker.io/rancher/klipper-helm@sha256:c2fd922a9a361ac5ec7ef225a46aaaad1e79ec3acc3cf176f60cd09a11683dd5
      - docker.io/rancher/klipper-helm:v0.8.4-build20240523
      sizeBytes: 87267777
    - names:
      - docker.io/rancher/klipper-helm@sha256:b0b0c4f73f2391697edb52adffe4fc490de1c8590606024515bb906b2813554a
      - docker.io/rancher/klipper-helm:v0.8.2-build20230815
      sizeBytes: 83499502
    - names:
      - docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
      - docker.io/kubernetesui/dashboard:v2.7.0
      sizeBytes: 74084559
    - names:
      - docker.io/rancher/k3s-upgrade@sha256:cb1ba64ab333aee0c68eeaeb2be524a6b3db1da3d266f2415966e98158c19e63
      - docker.io/rancher/k3s-upgrade:v1.30.2-k3s1
      sizeBytes: 59900194
    - names:
      - docker.io/rancher/k3s-upgrade@sha256:dce0af2b0b3d38efa6636e72359163c47c3e4c00ce801552c648acd4f28f548f
      - docker.io/rancher/k3s-upgrade:v1.29.2-k3s1
      sizeBytes: 59159643
    - names:
      - registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334
      sizeBytes: 21986963
    - names:
      - docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
      - docker.io/kubernetesui/metrics-scraper:v1.0.8
      sizeBytes: 18306114
    - names:
      - docker.io/rancher/local-path-provisioner@sha256:3617a2fb32bf59d06861dd4c3cfb7ba7e66ae2a34bb5443e625fe490df463c71
      - docker.io/rancher/local-path-provisioner:v0.0.27
      sizeBytes: 17130914
    - names:
      - docker.io/library/nginx@sha256:cebb1f5bea2b13bd668d5e45790e46a07412d2622cd5a61fbba93f8b3e14832d
      - docker.io/library/nginx:stable-alpine
      sizeBytes: 16228250
    - names:
      - docker.io/rancher/local-path-provisioner@sha256:aee53cadc62bd023911e7f077877d047c5b3c269f9bba25724d558654f43cea0
      - docker.io/rancher/local-path-provisioner:v0.0.26
      sizeBytes: 15933947
    - names:
      - docker.io/kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7
      - docker.io/kubernetesui/metrics-scraper:v1.0.6
      sizeBytes: 14011739
    - names:
      - docker.io/library/redis@sha256:45de526e9fbc1a4b183957ab93a448294181fae10ced9184fc6efe9956ca0ccc
      - docker.io/library/redis:7.0.14-alpine
      sizeBytes: 13519061
    - names:
      - docker.io/rancher/kubectl@sha256:9be095ca0bbc74e8947a1d4a0258875304b590057d858eb9738de000f88a473e
      - docker.io/rancher/kubectl:v1.25.4
      sizeBytes: 13045642
    - names:
      - docker.io/rancher/klipper-lb@sha256:fa2257de248f46c303d0f39a8ebe8644ba5ac63d332c7d02bf6ee26a981243bc
      - docker.io/rancher/klipper-lb:v0.4.5
      sizeBytes: 7877529
    - names:
      - docker.io/rancher/klipper-lb@sha256:d6780e97ac25454b56f88410b236d52572518040f11d0db5c6baaac0d2fcf860
      - docker.io/rancher/klipper-lb:v0.4.4
      sizeBytes: 5068868
    - names:
      - docker.io/rancher/klipper-lb@sha256:558dcf96bf0800d9977ef46dca18411752618cd9dd06daeb99460c0a301d0a60
      - docker.io/rancher/klipper-lb:v0.4.7
      sizeBytes: 4939041
    - names:
      - docker.io/rancher/mirrored-library-busybox@sha256:125dfcbe72a0158c16781d3ad254c0d226a6534b59cc7c2bf549cdd50c6e8989
      - docker.io/rancher/mirrored-library-busybox:1.34.1
      sizeBytes: 2000508
    - names:
      - docker.io/rancher/mirrored-pause@sha256:74c4244427b7312c5b901fe0f67cbc53683d06f4f24c6faee65d4182bf0fa893
      - docker.io/rancher/mirrored-pause:3.6
      sizeBytes: 253243
    nodeInfo:
      architecture: arm64
      bootID: f1a5cecc-94de-472f-a479-737f8b9afe28
      containerRuntimeVersion: containerd://1.7.17-k3s1
      kernelVersion: 5.15.0-1055-raspi
      kubeProxyVersion: v1.30.2+k3s1
      kubeletVersion: v1.30.2+k3s1
      machineID: ef24a964b9bc4d42b6caafc4a6110bfa
      operatingSystem: linux
      osImage: Ubuntu 22.04.4 LTS
      systemUUID: ef24a964b9bc4d42b6caafc4a6110bfa
kind: List
metadata:
  resourceVersion: ""
brandond commented 2 months ago

None of these nodes are tainted, so whatever was going on previously seems to be resolved?

tim-oe commented 2 months ago

Yes, for me it seems to be. I had done an OS update and reboot prior to performing the deployment. Also, I had no other services deployed and running at the time.

github-actions[bot] commented 1 month ago

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 45 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

Bonn93 commented 1 month ago

Still a valid issue, Mr bot.

brandond commented 1 month ago

You said it was resolved and can't provide any additional info on what had occurred. Why are we keeping it open?

brandond commented 1 month ago

@Bonn93 what are you talking about? What does security policy have to do with unschedulable nodes? 1.8 of what?

tim-oe commented 1 month ago

It ran fine for me the last time I commented; not sure whether anyone else is still having issues.

brandond commented 1 month ago

I'm going to close this out; if someone can provide additional information as requested above, we can reopen.