Mellanox / network-operator

helm network-operator can't be upgraded #833

Closed: Saigut closed this issue 8 months ago

Saigut commented 8 months ago

What happened:

```
# helm upgrade network-operator nvidia/network-operator -n nvidia-network-operator -f ./network-operator-values.yaml
Error: UPGRADE FAILED: resource mapping not found for name: "nvidia-nics-rules" namespace: "" from "": no matches for kind "NodeFeatureRule" in version "nfd.k8s-sigs.io/v1alpha1"
ensure CRDs are installed first
```

network-operator can't be upgraded.

What you expected to happen: the network-operator upgrade to succeed.

How to reproduce it (as minimally and precisely as possible): install network-operator v23.4.0 from https://helm.ngc.nvidia.com/nvidia (following [the docs](https://docs.nvidia.com/networking/display/cokan10/network+operator)) and then upgrade it.
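For reference, the steps above correspond roughly to the following commands (a sketch; the chart version flag and values file name are assumptions based on this report):

```sh
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia && helm repo update
helm install network-operator nvidia/network-operator \
  --version 23.4.0 -n nvidia-network-operator --create-namespace \
  -f ./network-operator-values.yaml
# later, attempt the upgrade that fails:
helm upgrade network-operator nvidia/network-operator \
  -n nvidia-network-operator -f ./network-operator-values.yaml
```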

Anything else we need to know?: network-operator and values.yaml are both version v23.4.0.
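A quick way to check whether the CRD named in the upgrade error is actually registered in the cluster (a hypothetical check; the CRD and API group names are taken from the error message above):

```sh
kubectl get crd nodefeaturerules.nfd.k8s-sigs.io
kubectl api-resources --api-group=nfd.k8s-sigs.io
```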

Logs:

```
error: Required resource not specified.
Use "kubectl explain <resource>" for a detailed description of that resource (e.g. kubectl explain pods).
See 'kubectl get -h' for help and examples
```

```
kubectl -n nvidia-network-operator get pods

NAME                                                              READY   STATUS    RESTARTS   AGE
network-operator-7d46676c9c-c92vq                                 1/1     Running   0          35m
network-operator-node-feature-discovery-master-858d89495b-8prxg   1/1     Running   0          35m
network-operator-node-feature-discovery-worker-c8r4w              1/1     Running   0          35m
network-operator-node-feature-discovery-worker-p7b69              1/1     Running   0          35m
rdma-shared-dp-ds-4pzbl                                           1/1     Running   0          34m
rdma-shared-dp-ds-k5jm2                                           1/1     Running   0          34m
```


- Network Operator version: v23.4.0
- Logs of Network Operator controller: 
<details>
  <summary>part log</summary>
  <p>
    2024-02-26T09:56:29Z    INFO    state   Handling manifest object        {"Kind:": "ConfigMap", "Name": "rdma-devices"}
2024-02-26T09:56:29Z    INFO    state   Creating Object {"Namespace:": "nvidia-network-operator", "Name:": "rdma-devices"}
2024-02-26T09:56:29Z    INFO    state   Object Already Exists
2024-02-26T09:56:29Z    INFO    state   Get Object      {"Namespace:": "nvidia-network-operator", "Name:": "rdma-devices"}
2024-02-26T09:56:29Z    INFO    state   Updating Object {"Namespace:": "nvidia-network-operator", "Name:": "rdma-devices"}
2024-02-26T09:56:29Z    INFO    state   Object updated successfully
2024-02-26T09:56:29Z    INFO    state   Handling manifest object        {"Kind:": "DaemonSet", "Name": "rdma-shared-dp-ds"}
2024-02-26T09:56:29Z    INFO    state   Creating Object {"Namespace:": "nvidia-network-operator", "Name:": "rdma-shared-dp-ds"}
2024-02-26T09:56:29Z    INFO    state   Object Already Exists
2024-02-26T09:56:29Z    INFO    state   Get Object      {"Namespace:": "nvidia-network-operator", "Name:": "rdma-shared-dp-ds"}
2024-02-26T09:56:29Z    INFO    state   Updating Object {"Namespace:": "nvidia-network-operator", "Name:": "rdma-shared-dp-ds"}
2024-02-26T09:56:29Z    INFO    state   Object updated successfully
2024-02-26T09:56:29Z    INFO    state   Checking related object states
2024-02-26T09:56:29Z    INFO    state   Checking object {"Kind:": "ConfigMap", "Name": "rdma-devices"}
2024-02-26T09:56:29Z    INFO    state   Get Object      {"Namespace:": "nvidia-network-operator", "Name:": "rdma-devices"}
2024-02-26T09:56:29Z    INFO    state   Object is ready {"Kind:": "ConfigMap", "Name": "rdma-devices"}
2024-02-26T09:56:29Z    INFO    state   Checking object {"Kind:": "DaemonSet", "Name": "rdma-shared-dp-ds"}
2024-02-26T09:56:29Z    INFO    state   Get Object      {"Namespace:": "nvidia-network-operator", "Name:": "rdma-shared-dp-ds"}
2024-02-26T09:56:29Z    DEBUG   state   Check daemonset state   {"DesiredNodes:": 2, "CurrentNodes:": 2, "PodsAvailable:": 2, "PodsUnavailable:": 0, "UpdatedPodsScheduled": 2, "PodsReady:": 2, "Conditions:": null}
2024-02-26T09:56:29Z    INFO    state   Object is ready {"Kind:": "DaemonSet", "Name": "rdma-shared-dp-ds"}
2024-02-26T09:56:29Z    INFO    state   Sync State      {"Name:": "state-NV-Peer", "Description:": "Nvidia Peer Memory driver deployed in the cluster"}
2024-02-26T09:56:29Z    INFO    state   Sync Custom resource    {"State:": "state-NV-Peer", "Name:": "nic-cluster-policy", "Namespace:": ""}
2024-02-26T09:56:29Z    INFO    state   State spec in CR is nil, deleting existing objects if needed    {"State:": "state-NV-Peer"}
2024-02-26T09:56:29Z    DEBUG   state   syncGroup       {"results:": [{"StateName":"state-RDMA-device-plugin","Status":"ready","ErrInfo":null},{"StateName":"state-NV-Peer","Status":"ignore","ErrInfo":null}]}
2024-02-26T09:56:29Z    INFO    state   Sync Completed successfully for State group     {"index": 4}
2024-02-26T09:56:29Z    INFO    state   Sync State group        {"index": 5}
2024-02-26T09:56:29Z    INFO    state   Sync State      {"Name:": "state-ib-kubernetes", "Description:": "ib-kubernetes deployed in the cluster"}
2024-02-26T09:56:29Z    INFO    state   Sync Custom resource    {"State:": "state-ib-kubernetes", "Name:": "nic-cluster-policy", "Namespace:": ""}
2024-02-26T09:56:29Z    INFO    state   State spec in CR is nil, deleting existing objects if needed    {"State:": "state-ib-kubernetes"}
2024-02-26T09:56:30Z    DEBUG   state   syncGroup       {"results:": [{"StateName":"state-ib-kubernetes","Status":"ignore","ErrInfo":null}]}
2024-02-26T09:56:30Z    INFO    state   Sync Completed successfully for State group     {"index": 5}
2024-02-26T09:56:30Z    INFO    controllers.NicClusterPolicy    Updating status {"Custom resource name": "nic-cluster-policy", "namespace": "", "Result:": {"state":"notReady","appliedStates":[{"name":"state-pod-security-policy","state":"ignore"},{"name":"state-ipoib-cni","state":"ignore"},{"name":"state-whereabouts-cni","state":"notReady"},{"name":"state-multus-cni","state":"notReady"},{"name":"state-container-networking-plugins","state":"notReady"},{"name":"state-OFED","state":"ignore"},{"name":"state-SRIOV-device-plugin","state":"ignore"},{"name":"state-RDMA-device-plugin","state":"ready"},{"name":"state-NV-Peer","state":"ignore"},{"name":"state-ib-kubernetes","state":"ignore"}]}}
2024-02-26T09:56:35Z    INFO    controllers.NicClusterPolicy    Reconciling NicClusterPolicy    {"nicclusterpolicy": "/nic-cluster-policy"}
2024-02-26T09:56:35Z    INFO    controllers.NicClusterPolicy    Creating Node info provider     {"nicclusterpolicy": "/nic-cluster-policy"}
2024-02-26T09:56:35Z    DEBUG   controllers.NicClusterPolicy    Node info provider with {"nicclusterpolicy": "/nic-cluster-policy", "Nodes:": ["gzr750-131","gzr750-132"]}
2024-02-26T09:56:35Z    INFO    state   Syncing system state
2024-02-26T09:56:35Z    INFO    state   Sync State group        {"index": 0}
2024-02-26T09:56:35Z    INFO    state   Sync State      {"Name:": "state-pod-security-policy", "Description:": "Privileged pod security policy deployed in the cluster"}
2024-02-26T09:56:35Z    INFO    state   Sync Custom resource    {"State:": "state-pod-security-policy", "Name:": "nic-cluster-policy", "Namespace:": ""}
2024-02-26T09:56:35Z    INFO    state   State spec in CR is nil, deleting existing objects if needed    {"State:": "state-pod-security-policy"}
  </p>
</details>
- Logs of the various Pods in `nvidia-network-operator` namespace:
<details>
  <summary>part log</summary>
  <p>
    I0226 09:55:02.444197       1 nfd-worker.go:484] feature discovery completed
I0226 09:55:02.444223       1 nfd-worker.go:565] sending labeling request to nfd-master
E0226 09:56:02.533172       1 network.go:145] failed to read net iface attribute speed: read /host-sys/class/net/idrac/speed: invalid argument
I0226 09:56:02.602767       1 nfd-worker.go:472] starting feature discovery...
I0226 09:56:02.603531       1 nfd-worker.go:484] feature discovery completed
I0226 09:56:02.603551       1 nfd-worker.go:565] sending labeling request to nfd-master
E0226 09:57:02.692488       1 network.go:145] failed to read net iface attribute speed: read /host-sys/class/net/idrac/speed: invalid argument
I0226 09:57:02.765952       1 nfd-worker.go:472] starting feature discovery...
I0226 09:57:02.766731       1 nfd-worker.go:484] feature discovery completed
I0226 09:57:02.766750       1 nfd-worker.go:565] sending labeling request to nfd-master
E0226 09:58:02.857326       1 network.go:145] failed to read net iface attribute speed: read /host-sys/class/net/idrac/speed: invalid argument
I0226 09:58:02.928861       1 nfd-worker.go:472] starting feature discovery...
I0226 09:58:02.929645       1 nfd-worker.go:484] feature discovery completed
I0226 09:58:02.929663       1 nfd-worker.go:565] sending labeling request to nfd-master 
  </p>
</details>
- Helm Configuration (if applicable; see the note on `nodeFeatureRule.createCRD` after this list):
<details>
  <summary>conf</summary>
  <p>
    # Copyright 2020 NVIDIA
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Default values for network-operator.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

nfd:
  enabled: true

psp:
  enabled: false

sriovNetworkOperator:
  enabled: false
  # inject additional values to nodeSelector for config daemon
  configDaemonNodeSelectorExtra:
    node-role.kubernetes.io/worker: ""

# Node Feature discovery chart related values
node-feature-discovery:
  image:
    pullPolicy: IfNotPresent
  nodeFeatureRule:
    createCRD: false
  master:
    instance: "nvidia.networking"
  worker:
    tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Equal"
        value: ""
        effect: "NoSchedule"
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Equal"
        value: ""
        effect: "NoSchedule"
      - key: "nvidia.com/gpu"
        operator: "Equal"
        value: "present"
        effect: "NoSchedule"
    config:
      sources:
        pci:
          deviceClassWhitelist:
            - "02"
            - "0200"
            - "0207"
          deviceLabelFields:
            - vendor

# SR-IOV Network Operator chart related values
sriov-network-operator:
  operator:
    tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Exists"
        effect: "NoSchedule"
    nodeSelector: {}
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: "node-role.kubernetes.io/master"
                  operator: In
                  values: [ "" ]
            - matchExpressions:
                - key: "node-role.kubernetes.io/control-plane"
                  operator: In
                  values: [ "" ]
    nameOverride: ""
    fullnameOverride: ""
    resourcePrefix: "nvidia.com"
    enableAdmissionController: false
    cniBinPath: "/opt/cni/bin"
    clusterType: "kubernetes"

  # Image URIs for sriov-network-operator components
  images:
    operator: m.daocloud.io/nvcr.io/nvidia/mellanox/sriov-network-operator:network-operator-23.4.0
    sriovConfigDaemon: m.daocloud.io/nvcr.io/nvidia/mellanox/sriov-network-operator-config-daemon:network-operator-23.4.0
    sriovCni: m.daocloud.io/ghcr.io/k8snetworkplumbingwg/sriov-cni:v2.7.0
    ibSriovCni:  m.daocloud.io/ghcr.io/k8snetworkplumbingwg/ib-sriov-cni:v1.0.3
    sriovDevicePlugin: m.daocloud.io/ghcr.io/k8snetworkplumbingwg/sriov-network-device-plugin:v3.5.1
    resourcesInjector: m.daocloud.io/ghcr.io/k8snetworkplumbingwg/network-resources-injector:v1.4
    webhook: m.daocloud.io/ghcr.io/k8snetworkplumbingwg/sriov-network-operator-webhook:v1.1.0

# General Operator related values
# The operator element allows to deploy network operator from an alternate location
operator:
  tolerations:
    - key: "node-role.kubernetes.io/master"
      operator: "Equal"
      value: ""
      effect: "NoSchedule"
    - key: "node-role.kubernetes.io/control-plane"
      operator: "Equal"
      value: ""
      effect: "NoSchedule"
  nodeSelector: {}
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: "node-role.kubernetes.io/master"
                operator: In
                values: [""]
        - weight: 1
          preference:
            matchExpressions:
              - key: "node-role.kubernetes.io/control-plane"
                operator: In
                values: [ "" ]
  repository: m.daocloud.io/nvcr.io/nvidia/cloud-native
  image: network-operator
  # imagePullSecrets: []
  nameOverride: ""
  fullnameOverride: ""
  # tag, if defined will use the given image tag, else Chart.AppVersion will be used
  # tag

imagePullSecrets: []

# NicClusterPolicy CR values:
deployCR: true
ofedDriver:
  deploy: false
  image: mofed
  repository: m.daocloud.io/nvcr.io/nvidia/mellanox
  version: 23.04-0.5.3.3.1
  # imagePullSecrets: []
  # env, if defined will pass environment variables to the OFED container
  # env:
  #   - name: EXAMPLE_ENV_VAR
  #     value: example_env_var_value
  terminationGracePeriodSeconds: 300
  # Private mirror repository configuration
  repoConfig:
    name: ""
  # Custom ssl key/certificate configuration
  certConfig:
    name: ""

  startupProbe:
    initialDelaySeconds: 10
    periodSeconds: 20
  livenessProbe:
    initialDelaySeconds: 30
    periodSeconds: 30
  readinessProbe:
    initialDelaySeconds: 10
    periodSeconds: 30
  upgradePolicy:
    # global switch for automatic upgrade feature
    # if set to false all other options are ignored
    autoUpgrade: false
    # how many nodes can be upgraded in parallel (default: 1)
    # 0 means no limit, all nodes will be upgraded in parallel
    maxParallelUpgrades: 1
    # options for node drain (`kubectl drain`) before the driver reload
    # if auto upgrade is enabled but drain.enable is false,
    # then driver POD will be reloaded immediately without
    # removing PODs from the node
    drain:
      enable: true
      force: false
      podSelector: ""
      # It's recommended to set a timeout to avoid infinite drain in case non-fatal error keeps happening on retries
      timeoutSeconds: 300
      deleteEmptyDir: false

nvPeerDriver:
  deploy: false
  image: nv-peer-mem-driver
  repository: mellanox
  version: 1.1-0
  # imagePullSecrets: []
  gpuDriverSourcePath: /run/nvidia/driver

rdmaSharedDevicePlugin:
  deploy: true
  image: k8s-rdma-shared-dev-plugin
  repository: m.daocloud.io/nvcr.io/nvidia/cloud-native
  version: v1.3.2
  # imagePullSecrets: []
  # The following defines the RDMA resources in the cluster
  # it must be provided by the user when deploying the chart
  # each entry in the resources element will create a resource with the provided <name> and list of devices
  # example:
  resources:
    - name: rdma_shared_device_a
      vendors: [15b3]

sriovDevicePlugin:
  deploy: false
  image: sriov-network-device-plugin
  repository: m.daocloud.io/ghcr.io/k8snetworkplumbingwg
  version: v3.5.1
  # imagePullSecrets: []
  resources:
    - name: hostdev
      vendors: [15b3]

ibKubernetes:
  deploy: false
  image: ib-kubernetes
  repository: m.daocloud.io/ghcr.io/mellanox
  version: v1.0.2
  # imagePullSecrets: []
  periodicUpdateSeconds: 5
  pKeyGUIDPoolRangeStart: "02:00:00:00:00:00:00:00"
  pKeyGUIDPoolRangeEnd: "02:FF:FF:FF:FF:FF:FF:FF"
  ufmSecret: # specify the secret name here

secondaryNetwork:
  deploy: true
  cniPlugins:
    deploy: true
    image: plugins
    repository: m.daocloud.io/ghcr.io/k8snetworkplumbingwg
    version: v1.2.0-amd64
    # imagePullSecrets: []
  multus:
    deploy: true
    image: multus-cni
    repository: m.daocloud.io/ghcr.io/k8snetworkplumbingwg
    version: v3.9.3
    # imagePullSecrets: []
    config: ''
  ipoib:
    deploy: false
    image: ipoib-cni
    repository: m.daocloud.io/nvcr.io/nvidia/cloud-native
    version: v1.1.0
    # imagePullSecrets: []
  ipamPlugin:
    deploy: true
    image: whereabouts
    repository: m.daocloud.io/ghcr.io/k8snetworkplumbingwg
    version: v0.6.1-amd64
    # imagePullSecrets: []

# Can be set to nicclusterpolicy and override other ds node affinity,
# e.g. https://github.com/Mellanox/network-operator/blob/master/manifests/stage-multus-cni/0050-multus-ds.yml#L26-L36
#nodeAffinity:

test:
  pf: ens2f0
  </p>
</details>
- Kubernetes' nodes information (labels, annotations and status): `kubectl get node -o yaml`:
<details>
  <summary>info</summary>
  <p>
    # kubectl get node -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      alpha.kubernetes.io/provided-node-ip: 172.40.20.131
      cluster.x-k8s.io/cluster-name: k8s
      cluster.x-k8s.io/cluster-namespace: fleet-default
      cluster.x-k8s.io/labels-from-machine: ""
      cluster.x-k8s.io/machine: custom-6d27da35be9b
      csi.volume.kubernetes.io/nodeid: '{"rook-ceph.cephfs.csi.ceph.com":"gzr750-131","rook-ceph.rbd.csi.ceph.com":"gzr750-131"}'
      etcd.rke2.cattle.io/local-snapshots-timestamp: "2024-02-26T15:00:04+08:00"
      etcd.rke2.cattle.io/node-address: 172.40.20.131
      etcd.rke2.cattle.io/node-name: gzr750-131-1d2a35e4
      management.cattle.io/pod-limits: '{"cpu":"9750m","memory":"13180Mi"}'
      management.cattle.io/pod-requests: '{"cpu":"5575m","memory":"8096Mi","pods":"32"}'
      node.alpha.kubernetes.io/ttl: "0"
      nvidia.networking.nfd.node.kubernetes.io/extended-resources: ""
      nvidia.networking.nfd.node.kubernetes.io/feature-labels: cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BITALG,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512IFMA,cpu-cpuid.AVX512VBMI,cpu-cpuid.AVX512VBMI2,cpu-cpuid.AVX512VL,cpu-cpuid.AVX512VNNI,cpu-cpuid.AVX512VPOPCNTDQ,cpu-cpuid.FMA3,cpu-cpuid.GFNI,cpu-cpuid.IBPB,cpu-cpuid.SHA,cpu-cpuid.STIBP,cpu-cpuid.VAES,cpu-cpuid.VMX,cpu-cpuid.VPCLMULQDQ,cpu-cpuid.WBNOINVD,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,custom-rdma.available,custom-rdma.capable,kernel-config.NO_HZ,kernel-config.NO_HZ_IDLE,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,pci-14e4.present,pci-15b3.present,pci-8086.present,pci-8086.sriov.capable,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major,system-os_release.VERSION_ID.minor
      nvidia.networking.nfd.node.kubernetes.io/master.version: v0.10.1
      nvidia.networking.nfd.node.kubernetes.io/worker.version: v0.10.1
      projectcalico.org/IPv4Address: 172.40.20.131/24
      projectcalico.org/IPv4VXLANTunnelAddr: 10.42.22.64
      rke2.io/encryption-config-hash: start-2e5624c94b57a5493386e7be1be19db400e9a44867e93b63461bb6600e0c9e3d
      rke2.io/hostname: gzr750-131
      rke2.io/internal-ip: 172.40.20.131
      rke2.io/node-args: '["server","--agent-token","********","--cni","calico","--disable-kube-proxy","false","--etcd-expose-metrics","false","--etcd-snapshot-retention","5","--etcd-snapshot-schedule-cron","0
        */5 * * *","--kube-controller-manager-arg","cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager","--kube-controller-manager-arg","secure-port=10257","--kube-controller-manager-extra-mount","/var/lib/rancher/rke2/server/tls/kube-controller-manager:/var/lib/rancher/rke2/server/tls/kube-controller-manager","--kube-scheduler-arg","cert-dir=/var/lib/rancher/rke2/server/tls/kube-scheduler","--kube-scheduler-arg","secure-port=10259","--kube-scheduler-extra-mount","/var/lib/rancher/rke2/server/tls/kube-scheduler:/var/lib/rancher/rke2/server/tls/kube-scheduler","--node-label","cattle.io/os=linux","--node-label","rke.cattle.io/machine=45a1eda7-088d-4f76-afde-cd38aee88bfe","--private-registry","/etc/rancher/rke2/registries.yaml","--protect-kernel-defaults","false","--system-default-registry","registry.cn-hangzhou.aliyuncs.com","--token","********"]'
      rke2.io/node-config-hash: 73NT3WNLYZ7SCXDBGL7ZVUPANHWZJBPIARJ2KBH5S42VEME5RJQA====
      rke2.io/node-env: '{}'
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2023-12-21T06:23:57Z"
    finalizers:
    - wrangler.cattle.io/node
    - wrangler.cattle.io/managed-etcd-controller
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/instance-type: rke2
      beta.kubernetes.io/os: linux
      cattle.io/os: linux
      feature.node.kubernetes.io/cpu-cpuid.ADX: "true"
      feature.node.kubernetes.io/cpu-cpuid.AESNI: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX2: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512BITALG: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512BW: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512CD: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512DQ: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512F: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512IFMA: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512VBMI: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512VBMI2: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512VL: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512VNNI: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512VPOPCNTDQ: "true"
      feature.node.kubernetes.io/cpu-cpuid.FMA3: "true"
      feature.node.kubernetes.io/cpu-cpuid.GFNI: "true"
      feature.node.kubernetes.io/cpu-cpuid.IBPB: "true"
      feature.node.kubernetes.io/cpu-cpuid.SHA: "true"
      feature.node.kubernetes.io/cpu-cpuid.STIBP: "true"
      feature.node.kubernetes.io/cpu-cpuid.VAES: "true"
      feature.node.kubernetes.io/cpu-cpuid.VMX: "true"
      feature.node.kubernetes.io/cpu-cpuid.VPCLMULQDQ: "true"
      feature.node.kubernetes.io/cpu-cpuid.WBNOINVD: "true"
      feature.node.kubernetes.io/cpu-cstate.enabled: "true"
      feature.node.kubernetes.io/cpu-hardware_multithreading: "true"
      feature.node.kubernetes.io/cpu-pstate.status: passive
      feature.node.kubernetes.io/cpu-pstate.turbo: "true"
      feature.node.kubernetes.io/cpu-rdt.RDTCMT: "true"
      feature.node.kubernetes.io/cpu-rdt.RDTL3CA: "true"
      feature.node.kubernetes.io/cpu-rdt.RDTMBA: "true"
      feature.node.kubernetes.io/cpu-rdt.RDTMBM: "true"
      feature.node.kubernetes.io/cpu-rdt.RDTMON: "true"
      feature.node.kubernetes.io/custom-rdma.available: "true"
      feature.node.kubernetes.io/custom-rdma.capable: "true"
      feature.node.kubernetes.io/kernel-config.NO_HZ: "true"
      feature.node.kubernetes.io/kernel-config.NO_HZ_IDLE: "true"
      feature.node.kubernetes.io/kernel-version.full: 6.2.0-39-generic
      feature.node.kubernetes.io/kernel-version.major: "6"
      feature.node.kubernetes.io/kernel-version.minor: "2"
      feature.node.kubernetes.io/kernel-version.revision: "0"
      feature.node.kubernetes.io/memory-numa: "true"
      feature.node.kubernetes.io/pci-14e4.present: "true"
      feature.node.kubernetes.io/pci-15b3.present: "true"
      feature.node.kubernetes.io/pci-8086.present: "true"
      feature.node.kubernetes.io/pci-8086.sriov.capable: "true"
      feature.node.kubernetes.io/storage-nonrotationaldisk: "true"
      feature.node.kubernetes.io/system-os_release.ID: ubuntu
      feature.node.kubernetes.io/system-os_release.VERSION_ID: "22.04"
      feature.node.kubernetes.io/system-os_release.VERSION_ID.major: "22"
      feature.node.kubernetes.io/system-os_release.VERSION_ID.minor: "04"
      kubernetes.io/arch: amd64
      kubernetes.io/hostname: gzr750-131
      kubernetes.io/os: linux
      network.nvidia.com/operator.mofed.wait: "false"
      node-role.kubernetes.io/control-plane: "true"
      node-role.kubernetes.io/etcd: "true"
      node-role.kubernetes.io/master: "true"
      node-role.kubernetes.io/worker: "true"
      node.kubernetes.io/instance-type: rke2
      plan.upgrade.cattle.io/system-agent-upgrader: d3afd4eb884edc7a77db901446479abc45b155929a9d0ef1cb138405
      rke.cattle.io/machine: 45a1eda7-088d-4f76-afde-cd38aee88bfe
    name: gzr750-131
    resourceVersion: "42232404"
    uid: f829091a-e7b6-49a1-8bf9-a7e6bb120c11
  spec:
    podCIDR: 10.42.0.0/24
    podCIDRs:
    - 10.42.0.0/24
    providerID: rke2://gzr750-131
  status:
    addresses:
    - address: 172.40.20.131
      type: InternalIP
    - address: gzr750-131
      type: Hostname
    allocatable:
      cpu: "64"
      ephemeral-storage: "444704258519"
      hugepages-1Gi: "0"
      hugepages-2Mi: 204000Mi
      memory: 318888900Ki
      pods: "110"
      rdma/rdma_shared_device_a: 1k
    capacity:
      cpu: "64"
      ephemeral-storage: 457138424Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: 204000Mi
      memory: 527784900Ki
      pods: "110"
      rdma/rdma_shared_device_a: 1k
    conditions:
    - lastHeartbeatTime: "2024-01-06T16:03:32Z"
      lastTransitionTime: "2024-01-06T16:03:32Z"
      message: Calico is running on this node
      reason: CalicoIsUp
      status: "False"
      type: NetworkUnavailable
    - lastHeartbeatTime: "2024-02-26T09:54:48Z"
      lastTransitionTime: "2023-12-29T07:17:01Z"
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: "2024-02-26T09:54:48Z"
      lastTransitionTime: "2023-12-29T07:17:01Z"
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: "2024-02-26T09:54:48Z"
      lastTransitionTime: "2023-12-29T07:17:01Z"
      message: kubelet has sufficient PID available
      reason: KubeletHasSufficientPID
      status: "False"
      type: PIDPressure
    - lastHeartbeatTime: "2024-02-26T09:54:48Z"
      lastTransitionTime: "2024-01-24T01:58:19Z"
      message: kubelet is posting ready status. AppArmor enabled
      reason: KubeletReady
      status: "True"
      type: Ready
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - m.daocloud.io/quay.io/cephcsi/cephcsi@sha256:5dd50ad6f3f9a1e8c8186fde0048ea241f056ca755acbeab42f5ebf723313e9c
      - quay.io/cephcsi/cephcsi@sha256:5dd50ad6f3f9a1e8c8186fde0048ea241f056ca755acbeab42f5ebf723313e9c
      - m.daocloud.io/quay.io/cephcsi/cephcsi:v3.10.1
      - quay.io/cephcsi/cephcsi:v3.10.1
      sizeBytes: 746084802
    - names:
      - docker.io/rancher/rancher-agent@sha256:8265848ee065fac0e20774aec497ce3ee3c421774e20b312894c0390bd5759ec
      - registry.cn-hangzhou.aliyuncs.com/rancher/rancher-agent@sha256:926154282389fbf70a21ccdcf690561655136f7b287357d860eb637752f9c304
      - docker.io/rancher/rancher-agent:v2.8.0
      - registry.cn-hangzhou.aliyuncs.com/rancher/rancher-agent:v2.8.0
      sizeBytes: 592215370
    - names:
      - docker.io/rook/ceph@sha256:bf7833f0b3a65a71be36c7a87b83fb22b5df78dba058e4401169cdabe0b09e05
      - m.daocloud.io/docker.io/rook/ceph@sha256:bf7833f0b3a65a71be36c7a87b83fb22b5df78dba058e4401169cdabe0b09e05
      - docker.io/rook/ceph:v1.13.1
      - m.daocloud.io/docker.io/rook/ceph:v1.13.1
      sizeBytes: 467728574
    - names:
      - m.daocloud.io/docker.io/rook/ceph@sha256:637c29fe303bb32403838712a44bb54d84d2542f72dc1fd5ec17e31eec31f830
      - m.daocloud.io/docker.io/rook/ceph:master
      sizeBytes: 467669462
    - names:
      - quay.io/ceph/ceph@sha256:e40c19cd70e047d14d70f5ec3cf501da081395a670cd59ca881ff56119660c8f
      - quay.io/ceph/ceph:v17.2.6
      sizeBytes: 447961121
    - names:
      - m.daocloud.io/quay.io/ceph/ceph@sha256:aca35483144ab3548a7f670db9b79772e6fc51167246421c66c0bd56a6585468
      - m.daocloud.io/quay.io/ceph/ceph:v18.2.1
      sizeBytes: 446773193
    - names:
      - m.daocloud.io/docker.io/rook/ceph@sha256:3fd9ea4b7da18d36a87674b6a3420689ccacfabe2d80aa17443b09d9ad34ac98
      - m.daocloud.io/docker.io/rook/ceph:v1.12.10
      sizeBytes: 437800570
    - names:
      - docker.io/rancher/nginx-ingress-controller@sha256:40b389fcbfc019e1adf2e6aa9b1a75235455a2e78fcec3261f867064afd801cb
      - registry.cn-hangzhou.aliyuncs.com/rancher/nginx-ingress-controller@sha256:572f459ba4a8b1f842887af30c0955a0fd7bd446a3ae914047eb903afdbb8d52
      - docker.io/rancher/nginx-ingress-controller:nginx-1.9.3-hardened1
      - registry.cn-hangzhou.aliyuncs.com/rancher/nginx-ingress-controller:nginx-1.9.3-hardened1
      sizeBytes: 334038552
    - names:
      - docker.io/curve2operator/curve-operator@sha256:d8da167dfc74d91b5ec20a3b19400a4565fd6d3e64e50491868d926f00e8f9ec
      - docker.io/curve2operator/curve-operator:v1.0.6
      sizeBytes: 285545670
    - names:
      - docker.io/rancher/hardened-kubernetes@sha256:16b40fc0970abb145eee8185aaae280d9ca6a2f01e412c4df675c8017ccd4357
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-kubernetes@sha256:154a46c8fc1fb6de02247c56b37a76fb8f3f3ddbf206d5c1084cc409c214f233
      - docker.io/rancher/hardened-kubernetes:v1.27.7-rke2r2-build20231102
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-kubernetes:v1.27.7-rke2r2-build20231102
      sizeBytes: 217546532
    - names:
      - docker.io/rancherlabs/swiss-army-knife@sha256:af25a3ace6269adb9e494b693644bc2f897ec872076d78f78bc5ded69f2ee222
      - docker.io/rancherlabs/swiss-army-knife:latest
      sizeBytes: 182366922
    - names:
      - docker.io/rancher/shell@sha256:098c29e11ae9bd5ef8e58401a2892aae7491f71bc2e02ce211fe67d8544b35f9
      - registry.cn-hangzhou.aliyuncs.com/rancher/shell@sha256:a1aa614bfb5288627a58fc85226402bd38dd574ed1eef7012aa29d0bf5ae19d8
      - docker.io/rancher/shell:v0.1.22
      - registry.cn-hangzhou.aliyuncs.com/rancher/shell:v0.1.22
      sizeBytes: 121278059
    - names:
      - docker.io/rancher/fleet-agent@sha256:2f989b745c8dab134149c76ae38d03cbee16184a7c094edfbff8e75dfec88e60
      - registry.cn-hangzhou.aliyuncs.com/rancher/fleet-agent@sha256:d8b7e23414587244eb9db46af84493a1c1019648818fd4bc6b59e16d9ff9b4f4
      - docker.io/rancher/fleet-agent:v0.9.0
      - registry.cn-hangzhou.aliyuncs.com/rancher/fleet-agent:v0.9.0
      sizeBytes: 116079347
    - names:
      - docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-cni@sha256:86779fab56f3c0c51abcae6d5c5d712f54ed86b50eebf83e54b8c80fdcb4a76e
      - docker.io/rancher/mirrored-calico-cni:v3.26.1
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-cni:v3.26.1
      sizeBytes: 93375345
    - names:
      - docker.io/rancher/klipper-helm@sha256:b0b0c4f73f2391697edb52adffe4fc490de1c8590606024515bb906b2813554a
      - registry.cn-hangzhou.aliyuncs.com/rancher/klipper-helm@sha256:47123689197706833e651d0743687fa99abb61d7bef1d47a4fdd1e7b3a99729e
      - docker.io/rancher/klipper-helm:v0.8.2-build20230815
      - registry.cn-hangzhou.aliyuncs.com/rancher/klipper-helm:v0.8.2-build20230815
      sizeBytes: 90876370
    - names:
      - docker.io/rancher/mirrored-calico-node@sha256:65dfcd7a75c52ae5303af4b03d24d522133d7ec1135e1855ea9ee35ebec33ce2
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-node@sha256:9459d1b2831955120fdf0037e6816b21e5d88dd11110d6d89398e5ef53cdf54c
      - docker.io/rancher/mirrored-calico-node:v3.26.1
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-node:v3.26.1
      sizeBytes: 86591907
    - names:
      - docker.io/rancher/rke2-cloud-provider@sha256:e1383c853e75a46ab2eeeec4a0808140289d789bfe52ff283abf572d1b8c73fa
      - registry.cn-hangzhou.aliyuncs.com/rancher/rke2-cloud-provider@sha256:a125362d1311d2c14df3d98aafbcff0ea07dcce14684821e8e39436f891f690a
      - docker.io/rancher/rke2-cloud-provider:v1.28.2-build20231016
      - registry.cn-hangzhou.aliyuncs.com/rancher/rke2-cloud-provider:v1.28.2-build20231016
      sizeBytes: 68010954
    - names:
      - docker.io/rancher/hardened-etcd@sha256:c4d25c075d5d61b1860ae5496d1acc8f88dd3a8be6024b37207901da744efa08
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-etcd@sha256:61e610a7e0489b2a590e7f1c6dc7d1c992ce96d149517bb3f8e99eb3aeb1e42a
      - docker.io/rancher/hardened-etcd:v3.5.9-k3s1-build20230802
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-etcd:v3.5.9-k3s1-build20230802
      sizeBytes: 64400998
    - names:
      - docker.io/rancher/hardened-coredns@sha256:3bbaf490bb8cd2d5582f6873e223bb2acec83cbcef88b398871f27a88ee1f820
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-coredns@sha256:b111e041ebb8d1cb165fd89ae418cc92f903928164626236cb66d8ff1b273308
      - docker.io/rancher/hardened-coredns:v1.10.1-build20230607
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-coredns:v1.10.1-build20230607
      sizeBytes: 64396462
    - names:
      - docker.io/rancher/hardened-k8s-metrics-server@sha256:98ce451bbe5ce332a93003aeeaf9da151404ba8a02283dacb6e464de40f22afd
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-k8s-metrics-server@sha256:2854f84d96926b3782b3bec96a029e347c6cec3e458fe51333adbf168f2f0353
      - docker.io/rancher/hardened-k8s-metrics-server:v0.6.3-build20230607
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-k8s-metrics-server:v0.6.3-build20230607
      sizeBytes: 62759583
    - names:
      - k8s.gcr.io/nfd/node-feature-discovery:v0.10.1
      - m.daocloud.io/k8s.gcr.io/nfd/node-feature-discovery:v0.10.1
      sizeBytes: 60200770
    - names:
      - docker.io/rancher/hardened-cluster-autoscaler@sha256:462d646604da3600521bff37608e1c03af322c30983c97c039fdc4afb7b69836
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-cluster-autoscaler@sha256:2e5500de74ebc42ba50c243df1305eaada5d37936202e55bdb63924c25b0f2c4
      - docker.io/rancher/hardened-cluster-autoscaler:v1.8.6-build20230609
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-cluster-autoscaler:v1.8.6-build20230609
      sizeBytes: 58304503
    - names:
      - m.daocloud.io/nvcr.io/nvidia/cloud-native/k8s-rdma-shared-dev-plugin@sha256:941ad9ff5013e9e7ad5abeb0ea9f79d45379cfae88a628d923f87d2259bdd132
      - m.daocloud.io/nvcr.io/nvidia/cloud-native/k8s-rdma-shared-dev-plugin:v1.3.2
      sizeBytes: 57690263
    - names:
      - m.daocloud.io/nvcr.io/nvidia/cloud-native/network-operator@sha256:57c0cc8ae2fb39d455b006160cdcc4775623ece0c138f5c2981f99829fb370ba
      - nvcr.io/nvidia/cloud-native/network-operator@sha256:57c0cc8ae2fb39d455b006160cdcc4775623ece0c138f5c2981f99829fb370ba
      - m.daocloud.io/nvcr.io/nvidia/cloud-native/network-operator:v23.4.0
      - nvcr.io/nvidia/cloud-native/network-operator:v23.4.0
      sizeBytes: 40778535
    - names:
      - docker.io/rancher/mirrored-calico-kube-controllers@sha256:5e0df66c5028b9cab397e44970f747023d0fe5f9162c95920689248650f8a6d6
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-kube-controllers@sha256:2c5526ad8cd69740448207b90f4077fd68a5d2e922014e32141b38a529295c55
      - docker.io/rancher/mirrored-calico-kube-controllers:v3.26.1
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-kube-controllers:v3.26.1
      sizeBytes: 32799621
    - names:
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-provisioner@sha256:49b94f975603d85a1820b72b1188e5b351d122011b3e5351f98c49d72719aa78
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-provisioner:v3.6.2
      sizeBytes: 28685505
    - names:
      - docker.io/rancher/mirrored-calico-typha@sha256:47b7aa28a2e5bfd847be633aa5a825e5b5489b4e333213036c468c6141debc93
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-typha@sha256:7df19465017798019c8b2c0137b9a12ea288373d07b6d78513b4ac6e84513cbc
      - docker.io/rancher/mirrored-calico-typha:v3.26.1
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-typha:v3.26.1
      sizeBytes: 28261696
    - names:
      - registry.cn-hangzhou.aliyuncs.com/rancher/system-agent@sha256:f0b9e2f3f6507c76be2f3ee407efddf5e25853c04284abb5c10ff1e323cbbd48
      - registry.cn-hangzhou.aliyuncs.com/rancher/system-agent:v0.3.4-suc
      sizeBytes: 27723618
    - names:
      - docker.io/library/ubuntu@sha256:f2034e7195f61334e6caff6ecf2e965f92d11e888309065da85ff50c617732b8
      - docker.io/library/ubuntu:20.04
      sizeBytes: 27516629
    - names:
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-resizer@sha256:e998f22243869416f9860fc6a1fb07d4202eac8846defc1b85ebd015c1207605
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-resizer:v1.9.2
      sizeBytes: 27017242
    - names:
      - docker.io/rancher/rancher-webhook@sha256:51e183d64c785f1f4d2b67912c10960e28547959366ad3f8bb69af43cd0bf5bb
      - registry.cn-hangzhou.aliyuncs.com/rancher/rancher-webhook@sha256:38c197037e177c8d59a34f3f6da53e5a43f60035fefe136504cb85c06a72b273
      - docker.io/rancher/rancher-webhook:v0.4.2
      - registry.cn-hangzhou.aliyuncs.com/rancher/rancher-webhook:v0.4.2
      sizeBytes: 26810374
    - names:
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-snapshotter@sha256:4c5a1b57e685b2631909b958487f65af7746361346fcd82a8635bea3ef14509d
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2
      sizeBytes: 26802941
    - names:
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-attacher@sha256:11b955fe4da278aa0e8ca9d6fd70758f2aec4b0c1e23168c665ca345260f1882
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-attacher:v4.4.2
      sizeBytes: 26688379
    - names:
      - docker.io/rancher/mirrored-sig-storage-snapshot-controller@sha256:8776214c491da926a9a808b4ad832c297262defeb2d736240ebed4be8d9f3512
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-sig-storage-snapshot-controller@sha256:f4243b1e085aa88bdacfac66278787e2b832ac4e051945707eedf61da59f8fb9
      - docker.io/rancher/mirrored-sig-storage-snapshot-controller:v6.2.1
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-sig-storage-snapshot-controller:v6.2.1
      sizeBytes: 24225718
    - names:
      - docker.io/rancher/mirrored-calico-operator@sha256:5d91bf2448b434e42f074e096f6f433fcd0e41c9a4823afdeb8bfa4195196ba9
      - docker.io/rancher/mirrored-calico-operator:v1.30.4
      sizeBytes: 21216633
    - names:
      - docker.io/rancher/mirrored-sig-storage-snapshot-validation-webhook@sha256:5eb55a850de857d72bc5827aed89230b61eb309e1ab1c5bbf0c3c48ad7a6a679
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-sig-storage-snapshot-validation-webhook@sha256:226699f835a501ac77b133ba0ab6f67d5e8760a82b6bdf0f7403edd6c97f92cb
      - docker.io/rancher/mirrored-sig-storage-snapshot-validation-webhook:v6.2.2
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-sig-storage-snapshot-validation-webhook:v6.2.2
      sizeBytes: 21061696
    - names:
      - docker.io/rancher/mirrored-ingress-nginx-kube-webhook-certgen@sha256:25af4d737af79a08200df23208de4fa613efd2daba6801b559447f1f6b048714
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen@sha256:22bf4e148e63d2aabb047281aa60f1a5c1ddfce73361907353e660330aaf441a
      - docker.io/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
      sizeBytes: 20110594
    - names:
      - ghcr.io/kastenhq/kubestr@sha256:a44ab25c23a6c936c57b3ef33781e5dad5f78f4a01c005c4779c01dc3d01d07e
      - ghcr.io/kastenhq/kubestr:latest
      sizeBytes: 16943842
    - names:
      - docker.io/rancher/local-path-provisioner@sha256:ae8bbedd61a2c1d12381e837751a0f69bbf13ce7cbd5808b586a92232579393d
      - docker.io/rancher/local-path-provisioner:v0.0.25
      sizeBytes: 15763373
    - names:
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:2cddcc716c1930775228d56b0d2d339358647629701047edfdad5fcdfaf4ebcb
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1
      sizeBytes: 10755082
    - names:
      - docker.io/rancher/system-upgrade-controller@sha256:c730c4ec8dc914b94be13df77d9b58444277330a2bdf39fe667beb5af2b38c0b
      - registry.cn-hangzhou.aliyuncs.com/rancher/system-upgrade-controller@sha256:7e9e847f5fdfd0825265c1da2157a04c6d22dd1a1597eb96128807bf27ce924d
      - docker.io/rancher/system-upgrade-controller:v0.13.1
      - registry.cn-hangzhou.aliyuncs.com/rancher/system-upgrade-controller:v0.13.1
      sizeBytes: 10739904
    - names:
      - docker.io/rancher/mirrored-calico-pod2daemon-flexvol@sha256:f490933d59c85bfb33530b762aa8040d9810e2da1c2fb3e039118bfaed2de14c
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-pod2daemon-flexvol@sha256:1f99e783eaef47c62c53f0090b0eba5d0e9a43674fb5faba3ed6041cc5a0ecb5
      - docker.io/rancher/mirrored-calico-pod2daemon-flexvol:v3.26.1
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-pod2daemon-flexvol:v3.26.1
      sizeBytes: 7289478
    - names:
      - docker.io/rancher/mirrored-pause@sha256:74c4244427b7312c5b901fe0f67cbc53683d06f4f24c6faee65d4182bf0fa893
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-pause@sha256:74bf6fc6be13c4ec53a86a5acf9fdbc6787b176db0693659ad6ac89f115e182c
      - docker.io/rancher/mirrored-pause:3.6
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-pause:3.6
      sizeBytes: 297944
    nodeInfo:
      architecture: amd64
      bootID: 4bb0cd93-bef9-435a-b7ba-2e195435d99b
      containerRuntimeVersion: containerd://1.7.7-k3s1
      kernelVersion: 6.2.0-39-generic
      kubeProxyVersion: v1.27.7+rke2r2
      kubeletVersion: v1.27.7+rke2r2
      machineID: bd0d0edfb09d49b5b49efcbe948d9cac
      operatingSystem: linux
      osImage: Ubuntu 22.04.3 LTS
      systemUUID: 4c4c4544-0044-4b10-8031-b5c04f595733
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      alpha.kubernetes.io/provided-node-ip: 172.40.20.132
      cluster.x-k8s.io/cluster-name: k8s
      cluster.x-k8s.io/cluster-namespace: fleet-default
      cluster.x-k8s.io/labels-from-machine: ""
      cluster.x-k8s.io/machine: custom-75753586cd8f
      csi.volume.kubernetes.io/nodeid: '{"rook-ceph.cephfs.csi.ceph.com":"gzr750-132","rook-ceph.rbd.csi.ceph.com":"gzr750-132"}'
      etcd.rke2.cattle.io/local-snapshots-timestamp: "2024-02-26T15:00:05+08:00"
      etcd.rke2.cattle.io/node-address: 172.40.20.132
      etcd.rke2.cattle.io/node-name: -e2f05272
      management.cattle.io/pod-limits: '{"cpu":"68650m","memory":"130052Mi"}'
      management.cattle.io/pod-requests: '{"cpu":"12550m","memory":"30492Mi","pods":"30"}'
      node.alpha.kubernetes.io/ttl: "0"
      nvidia.networking.nfd.node.kubernetes.io/extended-resources: ""
      nvidia.networking.nfd.node.kubernetes.io/feature-labels: cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BITALG,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512IFMA,cpu-cpuid.AVX512VBMI,cpu-cpuid.AVX512VBMI2,cpu-cpuid.AVX512VL,cpu-cpuid.AVX512VNNI,cpu-cpuid.AVX512VPOPCNTDQ,cpu-cpuid.FMA3,cpu-cpuid.GFNI,cpu-cpuid.IBPB,cpu-cpuid.SHA,cpu-cpuid.STIBP,cpu-cpuid.VAES,cpu-cpuid.VMX,cpu-cpuid.VPCLMULQDQ,cpu-cpuid.WBNOINVD,cpu-hardware_multithreading,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,custom-rdma.available,custom-rdma.capable,kernel-config.NO_HZ,kernel-config.NO_HZ_IDLE,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,pci-14e4.present,pci-15b3.present,pci-8086.present,pci-8086.sriov.capable,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major,system-os_release.VERSION_ID.minor
      nvidia.networking.nfd.node.kubernetes.io/worker.version: v0.10.1
      projectcalico.org/IPv4Address: 172.40.20.132/24
      projectcalico.org/IPv4VXLANTunnelAddr: 10.42.4.66
      rke2.io/encryption-config-hash: start-2e5624c94b57a5493386e7be1be19db400e9a44867e93b63461bb6600e0c9e3d
      rke2.io/hostname: gzr750-132
      rke2.io/internal-ip: 172.40.20.132
      rke2.io/node-args: '["server","--agent-token","********","--cni","calico","--disable-kube-proxy","false","--etcd-expose-metrics","false","--etcd-snapshot-retention","5","--etcd-snapshot-schedule-cron","0
        */5 * * *","--kube-controller-manager-arg","cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager","--kube-controller-manager-arg","secure-port=10257","--kube-controller-manager-extra-mount","/var/lib/rancher/rke2/server/tls/kube-controller-manager:/var/lib/rancher/rke2/server/tls/kube-controller-manager","--kube-scheduler-arg","cert-dir=/var/lib/rancher/rke2/server/tls/kube-scheduler","--kube-scheduler-arg","secure-port=10259","--kube-scheduler-extra-mount","/var/lib/rancher/rke2/server/tls/kube-scheduler:/var/lib/rancher/rke2/server/tls/kube-scheduler","--node-label","cattle.io/os=linux","--node-label","rke.cattle.io/machine=bfac9925-e3bd-44a8-9315-908d78c2bae7","--private-registry","/etc/rancher/rke2/registries.yaml","--protect-kernel-defaults","false","--server","https://172.40.20.131:9345","--system-default-registry","registry.cn-hangzhou.aliyuncs.com","--token","********"]'
      rke2.io/node-config-hash: CRNW6BSB4ZQGZKBEKHLV5SWPJGB3IF5EBQ2OVR23WYSDAHBQLAUQ====
      rke2.io/node-env: '{}'
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2023-12-21T06:36:41Z"
    finalizers:
    - wrangler.cattle.io/managed-etcd-controller
    - wrangler.cattle.io/node
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/instance-type: rke2
      beta.kubernetes.io/os: linux
      cattle.io/os: linux
      feature.node.kubernetes.io/cpu-cpuid.ADX: "true"
      feature.node.kubernetes.io/cpu-cpuid.AESNI: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX2: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512BITALG: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512BW: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512CD: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512DQ: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512F: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512IFMA: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512VBMI: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512VBMI2: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512VL: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512VNNI: "true"
      feature.node.kubernetes.io/cpu-cpuid.AVX512VPOPCNTDQ: "true"
      feature.node.kubernetes.io/cpu-cpuid.FMA3: "true"
      feature.node.kubernetes.io/cpu-cpuid.GFNI: "true"
      feature.node.kubernetes.io/cpu-cpuid.IBPB: "true"
      feature.node.kubernetes.io/cpu-cpuid.SHA: "true"
      feature.node.kubernetes.io/cpu-cpuid.STIBP: "true"
      feature.node.kubernetes.io/cpu-cpuid.VAES: "true"
      feature.node.kubernetes.io/cpu-cpuid.VMX: "true"
      feature.node.kubernetes.io/cpu-cpuid.VPCLMULQDQ: "true"
      feature.node.kubernetes.io/cpu-cpuid.WBNOINVD: "true"
      feature.node.kubernetes.io/cpu-hardware_multithreading: "false"
      feature.node.kubernetes.io/cpu-rdt.RDTCMT: "true"
      feature.node.kubernetes.io/cpu-rdt.RDTL3CA: "true"
      feature.node.kubernetes.io/cpu-rdt.RDTMBA: "true"
      feature.node.kubernetes.io/cpu-rdt.RDTMBM: "true"
      feature.node.kubernetes.io/cpu-rdt.RDTMON: "true"
      feature.node.kubernetes.io/custom-rdma.available: "true"
      feature.node.kubernetes.io/custom-rdma.capable: "true"
      feature.node.kubernetes.io/kernel-config.NO_HZ: "true"
      feature.node.kubernetes.io/kernel-config.NO_HZ_IDLE: "true"
      feature.node.kubernetes.io/kernel-version.full: 5.4.0-169-generic
      feature.node.kubernetes.io/kernel-version.major: "5"
      feature.node.kubernetes.io/kernel-version.minor: "4"
      feature.node.kubernetes.io/kernel-version.revision: "0"
      feature.node.kubernetes.io/memory-numa: "true"
      feature.node.kubernetes.io/pci-14e4.present: "true"
      feature.node.kubernetes.io/pci-15b3.present: "true"
      feature.node.kubernetes.io/pci-8086.present: "true"
      feature.node.kubernetes.io/pci-8086.sriov.capable: "true"
      feature.node.kubernetes.io/storage-nonrotationaldisk: "true"
      feature.node.kubernetes.io/system-os_release.ID: ubuntu
      feature.node.kubernetes.io/system-os_release.VERSION_ID: "20.04"
      feature.node.kubernetes.io/system-os_release.VERSION_ID.major: "20"
      feature.node.kubernetes.io/system-os_release.VERSION_ID.minor: "04"
      kubernetes.io/arch: amd64
      kubernetes.io/hostname: gzr750-132
      kubernetes.io/os: linux
      network.nvidia.com/operator.mofed.wait: "false"
      node-role.kubernetes.io/control-plane: "true"
      node-role.kubernetes.io/etcd: "true"
      node-role.kubernetes.io/master: "true"
      node-role.kubernetes.io/worker: "true"
      node.kubernetes.io/instance-type: rke2
      plan.upgrade.cattle.io/system-agent-upgrader: d3afd4eb884edc7a77db901446479abc45b155929a9d0ef1cb138405
      rke.cattle.io/machine: bfac9925-e3bd-44a8-9315-908d78c2bae7
      sname: s132
    name: gzr750-132
    resourceVersion: "42232353"
    uid: 845b26ba-5b1b-4447-b163-6058279ed1aa
  spec:
    podCIDR: 10.42.1.0/24
    podCIDRs:
    - 10.42.1.0/24
    providerID: rke2://gzr750-132
  status:
    addresses:
    - address: 172.40.20.132
      type: InternalIP
    - address: gzr750-132
      type: Hostname
    allocatable:
      cpu: "32"
      ephemeral-storage: "199729482391"
      hugepages-1Gi: "0"
      hugepages-2Mi: 80480Mi
      memory: 445397740Ki
      pods: "110"
      rdma/rdma_shared_device_a: 1k
    capacity:
      cpu: "32"
      ephemeral-storage: 205314024Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: 80480Mi
      memory: 527809260Ki
      pods: "110"
      rdma/rdma_shared_device_a: 1k
    conditions:
    - lastHeartbeatTime: "2024-01-24T01:58:16Z"
      lastTransitionTime: "2024-01-24T01:58:16Z"
      message: Calico is running on this node
      reason: CalicoIsUp
      status: "False"
      type: NetworkUnavailable
    - lastHeartbeatTime: "2024-02-26T09:54:42Z"
      lastTransitionTime: "2023-12-29T02:30:41Z"
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: "2024-02-26T09:54:42Z"
      lastTransitionTime: "2023-12-29T02:30:41Z"
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: "2024-02-26T09:54:42Z"
      lastTransitionTime: "2023-12-29T02:30:41Z"
      message: kubelet has sufficient PID available
      reason: KubeletHasSufficientPID
      status: "False"
      type: PIDPressure
    - lastHeartbeatTime: "2024-02-26T09:54:42Z"
      lastTransitionTime: "2024-01-06T16:03:02Z"
      message: kubelet is posting ready status. AppArmor enabled
      reason: KubeletReady
      status: "True"
      type: Ready
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - m.daocloud.io/quay.io/cephcsi/cephcsi@sha256:5dd50ad6f3f9a1e8c8186fde0048ea241f056ca755acbeab42f5ebf723313e9c
      - quay.io/cephcsi/cephcsi@sha256:5dd50ad6f3f9a1e8c8186fde0048ea241f056ca755acbeab42f5ebf723313e9c
      - m.daocloud.io/quay.io/cephcsi/cephcsi:v3.10.1
      - quay.io/cephcsi/cephcsi:v3.10.1
      sizeBytes: 746084802
    - names:
      - docker.io/rancher/rancher-agent@sha256:8265848ee065fac0e20774aec497ce3ee3c421774e20b312894c0390bd5759ec
      - registry.cn-hangzhou.aliyuncs.com/rancher/rancher-agent@sha256:926154282389fbf70a21ccdcf690561655136f7b287357d860eb637752f9c304
      - docker.io/rancher/rancher-agent:v2.8.0
      - registry.cn-hangzhou.aliyuncs.com/rancher/rancher-agent:v2.8.0
      sizeBytes: 592215370
    - names:
      - docker.io/build-b78dff2a/ceph-amd64:latest
      sizeBytes: 531916868
    - names:
      - docker.io/rook/ceph@sha256:bf7833f0b3a65a71be36c7a87b83fb22b5df78dba058e4401169cdabe0b09e05
      - m.daocloud.io/docker.io/rook/ceph@sha256:bf7833f0b3a65a71be36c7a87b83fb22b5df78dba058e4401169cdabe0b09e05
      - docker.io/rook/ceph:v1.13.1
      - m.daocloud.io/docker.io/rook/ceph:v1.13.1
      sizeBytes: 467728574
    - names:
      - quay.io/ceph/ceph@sha256:e40c19cd70e047d14d70f5ec3cf501da081395a670cd59ca881ff56119660c8f
      - quay.io/ceph/ceph:v17.2.6
      sizeBytes: 447961121
    - names:
      - m.daocloud.io/quay.io/ceph/ceph@sha256:aca35483144ab3548a7f670db9b79772e6fc51167246421c66c0bd56a6585468
      - m.daocloud.io/quay.io/ceph/ceph:v18.2.1
      sizeBytes: 446773193
    - names:
      - m.daocloud.io/docker.io/rook/ceph@sha256:3fd9ea4b7da18d36a87674b6a3420689ccacfabe2d80aa17443b09d9ad34ac98
      - m.daocloud.io/docker.io/rook/ceph:v1.12.10
      sizeBytes: 437800570
    - names:
      - registry.cn-hangzhou.aliyuncs.com/rancher/nginx-ingress-controller@sha256:572f459ba4a8b1f842887af30c0955a0fd7bd446a3ae914047eb903afdbb8d52
      - registry.cn-hangzhou.aliyuncs.com/rancher/nginx-ingress-controller:nginx-1.9.3-hardened1
      sizeBytes: 334038552
    - names:
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-kubernetes@sha256:154a46c8fc1fb6de02247c56b37a76fb8f3f3ddbf206d5c1084cc409c214f233
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-kubernetes:v1.27.7-rke2r2-build20231102
      sizeBytes: 217546532
    - names:
      - docker.io/rancherlabs/swiss-army-knife@sha256:af25a3ace6269adb9e494b693644bc2f897ec872076d78f78bc5ded69f2ee222
      - docker.io/rancherlabs/swiss-army-knife:latest
      sizeBytes: 182366922
    - names:
      - docker.io/rancher/mirrored-calico-cni@sha256:d4ed12d28127c9570bf773016857c8cdc20d7862eaebd74d3d0fc7b345cc74f7
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-cni@sha256:86779fab56f3c0c51abcae6d5c5d712f54ed86b50eebf83e54b8c80fdcb4a76e
      - docker.io/rancher/mirrored-calico-cni:v3.26.1
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-cni:v3.26.1
      sizeBytes: 93375345
    - names:
      - registry.cn-hangzhou.aliyuncs.com/rancher/klipper-helm@sha256:47123689197706833e651d0743687fa99abb61d7bef1d47a4fdd1e7b3a99729e
      - registry.cn-hangzhou.aliyuncs.com/rancher/klipper-helm:v0.8.2-build20230815
      sizeBytes: 90876370
    - names:
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-node@sha256:9459d1b2831955120fdf0037e6816b21e5d88dd11110d6d89398e5ef53cdf54c
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-node:v3.26.1
      sizeBytes: 86590521
    - names:
      - docker.io/rancher/rke2-cloud-provider@sha256:e1383c853e75a46ab2eeeec4a0808140289d789bfe52ff283abf572d1b8c73fa
      - registry.cn-hangzhou.aliyuncs.com/rancher/rke2-cloud-provider@sha256:a125362d1311d2c14df3d98aafbcff0ea07dcce14684821e8e39436f891f690a
      - docker.io/rancher/rke2-cloud-provider:v1.28.2-build20231016
      - registry.cn-hangzhou.aliyuncs.com/rancher/rke2-cloud-provider:v1.28.2-build20231016
      sizeBytes: 68010954
    - names:
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-etcd@sha256:61e610a7e0489b2a590e7f1c6dc7d1c992ce96d149517bb3f8e99eb3aeb1e42a
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-etcd:v3.5.9-k3s1-build20230802
      sizeBytes: 64400998
    - names:
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-coredns@sha256:b111e041ebb8d1cb165fd89ae418cc92f903928164626236cb66d8ff1b273308
      - registry.cn-hangzhou.aliyuncs.com/rancher/hardened-coredns:v1.10.1-build20230607
      sizeBytes: 64396462
    - names:
      - k8s.gcr.io/nfd/node-feature-discovery:v0.10.1
      - m.daocloud.io/k8s.gcr.io/nfd/node-feature-discovery:v0.10.1
      sizeBytes: 60200770
    - names:
      - m.daocloud.io/nvcr.io/nvidia/cloud-native/k8s-rdma-shared-dev-plugin@sha256:941ad9ff5013e9e7ad5abeb0ea9f79d45379cfae88a628d923f87d2259bdd132
      - m.daocloud.io/nvcr.io/nvidia/cloud-native/k8s-rdma-shared-dev-plugin:v1.3.2
      sizeBytes: 57690263
    - names:
      - docker.io/library/ubuntu@sha256:6042500cf4b44023ea1894effe7890666b0c5c7871ed83a97c36c76ae560bb9b
      - docker.io/library/ubuntu:22.04
      sizeBytes: 29551341
    - names:
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-provisioner@sha256:49b94f975603d85a1820b72b1188e5b351d122011b3e5351f98c49d72719aa78
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-provisioner:v3.6.2
      sizeBytes: 28685505
    - names:
      - registry.cn-hangzhou.aliyuncs.com/rancher/system-agent@sha256:f0b9e2f3f6507c76be2f3ee407efddf5e25853c04284abb5c10ff1e323cbbd48
      - registry.cn-hangzhou.aliyuncs.com/rancher/system-agent:v0.3.4-suc
      sizeBytes: 27723618
    - names:
      - docker.io/library/ubuntu@sha256:f2034e7195f61334e6caff6ecf2e965f92d11e888309065da85ff50c617732b8
      - docker.io/library/ubuntu:20.04
      sizeBytes: 27516629
    - names:
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-resizer@sha256:e998f22243869416f9860fc6a1fb07d4202eac8846defc1b85ebd015c1207605
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-resizer:v1.9.2
      sizeBytes: 27017242
    - names:
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-snapshotter@sha256:4c5a1b57e685b2631909b958487f65af7746361346fcd82a8635bea3ef14509d
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2
      sizeBytes: 26802941
    - names:
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-attacher@sha256:11b955fe4da278aa0e8ca9d6fd70758f2aec4b0c1e23168c665ca345260f1882
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-attacher:v4.4.2
      sizeBytes: 26688379
    - names:
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-operator@sha256:d6e8c1a76ffb2e70f3925ad91e8ccb6c0662e89bab7d76f241557a9771d7749f
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-operator:v1.30.4
      sizeBytes: 21215581
    - names:
      - docker.io/rancher/local-path-provisioner@sha256:ae8bbedd61a2c1d12381e837751a0f69bbf13ce7cbd5808b586a92232579393d
      - docker.io/rancher/local-path-provisioner:v0.0.25
      sizeBytes: 15763373
    - names:
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:2cddcc716c1930775228d56b0d2d339358647629701047edfdad5fcdfaf4ebcb
      - m.daocloud.io/registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1
      sizeBytes: 10755082
    - names:
      - docker.io/rancher/mirrored-calico-pod2daemon-flexvol@sha256:f490933d59c85bfb33530b762aa8040d9810e2da1c2fb3e039118bfaed2de14c
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-pod2daemon-flexvol@sha256:1f99e783eaef47c62c53f0090b0eba5d0e9a43674fb5faba3ed6041cc5a0ecb5
      - docker.io/rancher/mirrored-calico-pod2daemon-flexvol:v3.26.1
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-calico-pod2daemon-flexvol:v3.26.1
      sizeBytes: 7289478
    - names:
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-pause@sha256:74bf6fc6be13c4ec53a86a5acf9fdbc6787b176db0693659ad6ac89f115e182c
      - registry.cn-hangzhou.aliyuncs.com/rancher/mirrored-pause:3.6
      sizeBytes: 297944
    nodeInfo:
      architecture: amd64
      bootID: 9577e1e4-123c-4394-8d77-a290726fa6db
      containerRuntimeVersion: containerd://1.7.7-k3s1
      kernelVersion: 5.4.0-169-generic
      kubeProxyVersion: v1.27.7+rke2r2
      kubeletVersion: v1.27.7+rke2r2
      machineID: c400a900011b4f85bb9700b4a0dc8321
      operatingSystem: linux
      osImage: Ubuntu 20.04.3 LTS
      systemUUID: 4c4c4544-0044-4b10-8031-b6c04f595733
    volumesAttached:
    - devicePath: ""
      name: kubernetes.io/csi/rook-ceph.rbd.csi.ceph.com^0001-0009-rook-ceph-0000000000000001-492b7521-c31e-4566-8b0c-e6d0a0d54542
    volumesInUse:
    - kubernetes.io/csi/rook-ceph.rbd.csi.ceph.com^0001-0009-rook-ceph-0000000000000001-492b7521-c31e-4566-8b0c-e6d0a0d54542
kind: List
metadata:
  resourceVersion: ""
  </p>
</details>
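Note on the Helm configuration above: it sets `node-feature-discovery.nodeFeatureRule.createCRD: false`, so the NFD subchart never installs the `NodeFeatureRule` CRD, and Helm itself does not install or upgrade CRDs during `helm upgrade`. That likely explains why the chart's `nvidia-nics-rules` object has no matching kind. A possible workaround (a sketch, assuming Helm >= 3.7 and that the target chart ships its CRDs in the chart's `crds/` directory) is to apply the chart's CRDs before upgrading:

```sh
helm show crds nvidia/network-operator --version <target-chart-version> | kubectl apply -f -
```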

**Environment**:
- Kubernetes version (use `kubectl version`): 

```
kubectl version

WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.5", GitCommit:"890a139214b4de1f01543d15003b5bda71aae9c7", GitTreeState:"clean", BuildDate:"2023-05-17T14:14:46Z", GoVersion:"go1.19.9", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.7+rke2r2", GitCommit:"07a61d861519c45ef5c89bc22dda289328f29343", GitTreeState:"clean", BuildDate:"2023-11-02T16:18:38Z", GoVersion:"go1.20.10 X:boringcrypto", Compiler:"gc", Platform:"linux/amd64"}
```


- Hardware configuration:
  - Network adapter model and firmware version: Mellanox Technologies MT27800 Family [ConnectX-5]
- OS (e.g: `cat /etc/os-release`): Ubuntu 20.04.3
- Kernel (e.g. `uname -a`): Linux 5.4.0-169-generic #187-Ubuntu x86_64
- Others:
Saigut commented 8 months ago

The problem went away after I upgraded to network-operator 24.1.0.
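For anyone hitting the same failure, the fix that worked here was moving to the 24.1.0 chart. A sketch of that upgrade (flags and file names are assumptions; merge your own customizations into the new values file first):

```sh
helm repo update
helm show values nvidia/network-operator --version 24.1.0 > values-24.1.0.yaml
helm upgrade network-operator nvidia/network-operator \
  -n nvidia-network-operator --version 24.1.0 -f ./values-24.1.0.yaml
```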