kubernetes / kops

Kubernetes Operations (kOps) - Production Grade k8s Installation, Upgrades and Management
https://kops.sigs.k8s.io/
Apache License 2.0

Cluster not validating when using Calico, IPv6, and Gossip #14149

Closed: IgalSc closed this issue 1 year ago

IgalSc commented 2 years ago

/kind bug

1. What kops version are you running? The command kops version will display this information.

kops version
Client version: 1.24.1 (git-v1.24.1)

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.

kubectl version --short
Client Version: v1.24.2
Kustomize Version: v4.5.4
Server Version: v1.24.3

3. What cloud provider are you using? AWS

4. What commands did you run? What is the simplest way to reproduce this issue?

export KOPS_STATE_STORE=s3://clustername-kops-state-store
export VPC_ID=vpc-ID
export MASTER_SIZE="t3a.medium"
export NODE_SIZE="c6a.large"
kops create cluster --vpc $VPC_ID \
                    --node-count 2 \
                    --zones us-east-1a,us-east-1b \
                    --master-zones us-east-1a,us-east-1b \
                    --node-size $NODE_SIZE  \
                    --master-count 3 \
                    --master-size $MASTER_SIZE  \
                    --networking calico \
                    --ipv6

Then I ran kops edit cluster, edited my node groups, and then ran:

kops update cluster --yes --admin
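
For reference, the validation results below come from kops validate cluster. A minimal sketch of that command, where the --wait window is just an illustrative value:

# poll until the cluster validates or the (illustrative) 20-minute window expires
kops validate cluster --wait 20m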

5. What happened after the commands executed? The cluster gets configured but fails to validate:

VALIDATION ERRORS
KIND    NAME                                            MESSAGE
Machine i-011f0509423dbc7b4                             machine "i-011f0509423dbc7b4" has not yet joined cluster
Machine i-099aec2b7d6bc2ad2                             machine "i-099aec2b7d6bc2ad2" has not yet joined cluster
Pod     kube-system/coredns-autoscaler-865477f6c7-62zsq system-cluster-critical pod "coredns-autoscaler-865477f6c7-62zsq" is pending
Pod     kube-system/coredns-d48868b66-jb6ds             system-cluster-critical pod "coredns-d48868b66-jb6ds" is pending

When I try to describe the pod with kubectl -n kube-system describe pod coredns-d48868b66-jb6ds, I get:

Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  2m34s  default-scheduler  0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.

The master instances become "OutOfService" behind the load balancer after about 10-15 minutes.

6. What did you expect to happen? The same as when not using --ipv6: the cluster is ready after ~10 minutes.

7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2022-07-20T19:39:05Z"
  generation: 3
  name: clustername.k8s.local
spec:
  api:
    dns: {}
    loadBalancer:
      class: Network
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudControllerManager:
    cloudProvider: aws
  configBase: s3://clustername1-kops-state-store/clustername.k8s.local
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-east-1a-1
      name: a-1
    - encryptedVolume: true
      instanceGroup: master-us-east-1b-1
      name: b-1
    - encryptedVolume: true
      instanceGroup: master-us-east-1a-2
      name: a-2
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-east-1a-1
      name: a-1
    - encryptedVolume: true
      instanceGroup: master-us-east-1b-1
      name: b-1
    - encryptedVolume: true
      instanceGroup: master-us-east-1a-2
      name: a-2
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
  kubernetesApiAccess:
  - 0.0.0.0/0
  - ::/0
  kubernetesVersion: 1.23.9
  masterInternalName: api.internal.clustername.k8s.local
  masterPublicName: api.clustername.k8s.local
  networkCIDR: 172.30.0.0/16
  networkID: vpc-ID
  networking:
    calico: {}
  nonMasqueradeCIDR: ::/0
  sshAccess:
  - 0.0.0.0/0
  - ::/0
  subnets:
  - cidr: 172.30.32.0/19
    ipv6CIDR: 2600:a:b:c::/64
    name: us-east-1a
    type: Public
    zone: us-east-1a
  - cidr: 172.30.64.0/19
    ipv6CIDR: 2600:a:b:d::/64
    name: us-east-1b
    type: Public
    zone: us-east-1b
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: clustername.k8s.local
  name: master-us-east-1a-1
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20220810
  instanceMetadata:
    httpPutResponseHopLimit: 3
    httpTokens: required
  machineType: t3a.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-1a-1
  role: Master
  subnets:
  - us-east-1b

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: clustername.k8s.local
  name: master-us-east-1b-1
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20220810
  instanceMetadata:
    httpPutResponseHopLimit: 3
    httpTokens: required
  machineType: t3a.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-1b-1
  role: Master
  subnets:
  - us-east-1b

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: clustername.k8s.local
  name: master-us-east-1a-2
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20220810
  instanceMetadata:
    httpPutResponseHopLimit: 3
    httpTokens: required
  machineType: t3a.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-1a-2
  role: Master
  subnets:
  - us-east-1a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: clustername.k8s.local
  name: nodes-us-east-1a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20220810
  instanceMetadata:
    httpPutResponseHopLimit: 1
    httpTokens: required
  machineType: c6a.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-us-east-1a
  role: Node
  subnets:
  - us-east-1a

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: clustername.k8s.local
  name: nodes-us-east-1b
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20220810
  instanceMetadata:
    httpPutResponseHopLimit: 1
    httpTokens: required
  machineType: c6a.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-us-east-1b
  role: Node
  subnets:
  - us-east-1b
IgalSc commented 2 years ago

When I try to run kubectl taint nodes --all node-role.kubernetes.io/control-plane, I get:

error: at least one taint update is required

kubectl describe nodes | grep -i taint

Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Taints:             node-role.kubernetes.io/control-plane:NoSchedule

I tried kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule but got the following:

node i-0b40f2109e6674322 already has node-role.kubernetes.io/control-plane taint(s) with same effect(s) and --overwrite is false
node i-0bf0b45d1e48aa447 already has node-role.kubernetes.io/control-plane taint(s) with same effect(s) and --overwrite is false
node i-0e0a3bd9848c42408 already has node-role.kubernetes.io/control-plane taint(s) with same effect(s) and --overwrite is false
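
Side note: kubectl taint with the syntax above adds a taint; removing one requires a trailing "-" after the effect. A minimal sketch, in case removal is actually what is intended here:

# the trailing "-" tells kubectl to remove the taint rather than add it
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-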
olemarkus commented 2 years ago

I am not sure why you are trying to taint your nodes.

The reason for the validation failure is that the worker nodes are not joining the cluster:

Machine i-011f0509423dbc7b4 machine "i-011f0509423dbc7b4" has not yet joined cluster
Machine i-099aec2b7d6bc2ad2 machine "i-099aec2b7d6bc2ad2" has not yet joined cluster

coredns and the other pods do not run on the control plane, but on the worker nodes, which is why they are in the Pending state. I can't say based on the info provided why that is. But I see you are using jammy, which has a number of known issues (#14140), so you could try using focal instead.
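
A sketch of how an instance group could be pointed at a focal image with kops edit; the AMI name below is illustrative only, pick a current focal build:

kops edit ig nodes-us-east-1a --name clustername.k8s.local
# in the editor, change spec.image, for example to:
#   image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-<DATE>

Afterwards, kops update cluster --yes followed by kops rolling-update cluster --yes would roll the nodes onto the new image.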

IgalSc commented 2 years ago

@olemarkus Based on the kops documentation, running IPv6 with Calico requires a Debian 11 or Ubuntu 22.04 based AMI. I tried focal before switching to jammy. I'm trying to taint because that's the error shown on the coredns pod. I'm not sure if I'm missing something in the Calico setup. The same setup without the --ipv6 flag works fine, but I do need IPv6 connectivity on the worker nodes.

IgalSc commented 2 years ago

What's interesting is that if I try kops rolling-update cluster --yes I get the following:

master-us-east-1a-1     Ready   0               1       1       1       1       1
master-us-east-1a-2     Ready   0               1       1       1       1       1
master-us-east-1b-1     Ready   0               1       1       1       1       1
nodes-us-east-1a        Ready   0               1       1       1       3       0
nodes-us-east-1b        Ready   0               1       1       1       3       0

No rolling-update required.

But when I try to validate the cluster with kops validate cluster, I get:



INSTANCE GROUPS
NAME                    ROLE    MACHINETYPE     MIN     MAX     SUBNETS
master-us-east-1a-1     Master  t3a.medium      1       1       us-east-1a
master-us-east-1a-2     Master  t3a.medium      1       1       us-east-1a
master-us-east-1b-1     Master  t3a.medium      1       1       us-east-1b
nodes-us-east-1a        Node    c6a.large       1       3       us-east-1a
nodes-us-east-1b        Node    c6a.large       1       3       us-east-1b

NODE STATUS
NAME                    ROLE    READY
i-0237c9fb04a22250e     master  True
i-02f984e196d3e33de     master  True
i-0e5181fac440f3629     master  True

VALIDATION ERRORS
KIND    NAME                                            MESSAGE
Machine i-0121acd5c20ea1ca1                             machine "i-0121acd5c20ea1ca1" has not yet joined cluster
Machine i-01b4332bbe1edcc07                             machine "i-01b4332bbe1edcc07" has not yet joined cluster
Pod     kube-system/coredns-autoscaler-865477f6c7-rfjt9 system-cluster-critical pod "coredns-autoscaler-865477f6c7-rfjt9" is pending
Pod     kube-system/coredns-d48868b66-m4wnr             system-cluster-critical pod "coredns-d48868b66-m4wnr" is pending

Validation Failed
Error: Validation failed: cluster not yet healthy
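
Related to the rolling-update output above: a roll can be forced even when kops reports that none is required; a sketch, assuming that is wanted here:

# force a rolling update of all instance groups even if kops detects no changes
kops rolling-update cluster --yes --force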
IgalSc commented 2 years ago

Also, there are many errors in the kube-scheduler logs:

kubectl logs kube-scheduler-i-0237c9fb04a22250e -n kube-system

Command env: (log-file=/var/log/kube-scheduler.log, also-stdout=true, redirect-stderr=true)
Run from directory:
Executable path: /usr/local/bin/kube-scheduler
Args (comma-delimited): /usr/local/bin/kube-scheduler,--authentication-kubeconfig=/var/lib/kube-scheduler/kubeconfig,--authorization-kubeconfig=/var/lib/kube-scheduler/kubeconfig,--config=/var/lib/kube-scheduler/config.yaml,--feature-gates=CSIMigrationAWS=true,InTreePluginAWSUnregister=true,--leader-elect=true,--tls-cert-file=/srv/kubernetes/kube-scheduler/server.crt,--tls-private-key-file=/srv/kubernetes/kube-scheduler/server.key,--v=2
2022/08/20 13:33:40 Now listening for interrupts
I0820 13:33:40.732660       9 flags.go:64] FLAG: --add-dir-header="false"
I0820 13:33:40.732968       9 flags.go:64] FLAG: --allow-metric-labels="[]"
I0820 13:33:40.733152       9 flags.go:64] FLAG: --alsologtostderr="false"
I0820 13:33:40.733315       9 flags.go:64] FLAG: --authentication-kubeconfig="/var/lib/kube-scheduler/kubeconfig"
I0820 13:33:40.733472       9 flags.go:64] FLAG: --authentication-skip-lookup="false"
I0820 13:33:40.733642       9 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="10s"
I0820 13:33:40.733801       9 flags.go:64] FLAG: --authentication-tolerate-lookup-failure="true"
I0820 13:33:40.733957       9 flags.go:64] FLAG: --authorization-always-allow-paths="[/healthz,/readyz,/livez]"
I0820 13:33:40.734120       9 flags.go:64] FLAG: --authorization-kubeconfig="/var/lib/kube-scheduler/kubeconfig"
I0820 13:33:40.734287       9 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
I0820 13:33:40.734460       9 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s"
I0820 13:33:40.734627       9 flags.go:64] FLAG: --bind-address="0.0.0.0"
I0820 13:33:40.734791       9 flags.go:64] FLAG: --cert-dir=""
I0820 13:33:40.734951       9 flags.go:64] FLAG: --client-ca-file=""
I0820 13:33:40.735107       9 flags.go:64] FLAG: --config="/var/lib/kube-scheduler/config.yaml"
I0820 13:33:40.735266       9 flags.go:64] FLAG: --contention-profiling="true"
I0820 13:33:40.735416       9 flags.go:64] FLAG: --disabled-metrics="[]"
I0820 13:33:40.735646       9 flags.go:64] FLAG: --feature-gates="CSIMigrationAWS=true,InTreePluginAWSUnregister=true"
I0820 13:33:40.735824       9 flags.go:64] FLAG: --help="false"
I0820 13:33:40.735980       9 flags.go:64] FLAG: --http2-max-streams-per-connection="0"
I0820 13:33:40.736138       9 flags.go:64] FLAG: --kube-api-burst="100"
I0820 13:33:40.736299       9 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0820 13:33:40.736450       9 flags.go:64] FLAG: --kube-api-qps="50"
I0820 13:33:40.736608       9 flags.go:64] FLAG: --kubeconfig=""
I0820 13:33:40.736774       9 flags.go:64] FLAG: --leader-elect="true"
I0820 13:33:40.736908       9 flags.go:64] FLAG: --leader-elect-lease-duration="15s"
I0820 13:33:40.737068       9 flags.go:64] FLAG: --leader-elect-renew-deadline="10s"
I0820 13:33:40.737206       9 flags.go:64] FLAG: --leader-elect-resource-lock="leases"
I0820 13:33:40.737357       9 flags.go:64] FLAG: --leader-elect-resource-name="kube-scheduler"
I0820 13:33:40.737508       9 flags.go:64] FLAG: --leader-elect-resource-namespace="kube-system"
I0820 13:33:40.737671       9 flags.go:64] FLAG: --leader-elect-retry-period="2s"
I0820 13:33:40.737826       9 flags.go:64] FLAG: --lock-object-name="kube-scheduler"
I0820 13:33:40.737977       9 flags.go:64] FLAG: --lock-object-namespace="kube-system"
I0820 13:33:40.738136       9 flags.go:64] FLAG: --log-backtrace-at=":0"
I0820 13:33:40.738290       9 flags.go:64] FLAG: --log-dir=""
I0820 13:33:40.738448       9 flags.go:64] FLAG: --log-file=""
I0820 13:33:40.738597       9 flags.go:64] FLAG: --log-file-max-size="1800"
I0820 13:33:40.738751       9 flags.go:64] FLAG: --log-flush-frequency="5s"
I0820 13:33:40.738903       9 flags.go:64] FLAG: --log-json-info-buffer-size="0"
I0820 13:33:40.739078       9 flags.go:64] FLAG: --log-json-split-stream="false"
I0820 13:33:40.739236       9 flags.go:64] FLAG: --logging-format="text"
I0820 13:33:40.739395       9 flags.go:64] FLAG: --logtostderr="true"
I0820 13:33:40.739547       9 flags.go:64] FLAG: --master=""
I0820 13:33:40.739704       9 flags.go:64] FLAG: --one-output="false"
I0820 13:33:40.739853       9 flags.go:64] FLAG: --permit-address-sharing="false"
I0820 13:33:40.740007       9 flags.go:64] FLAG: --permit-port-sharing="false"
I0820 13:33:40.740156       9 flags.go:64] FLAG: --pod-max-in-unschedulable-pods-duration="5m0s"
I0820 13:33:40.740310       9 flags.go:64] FLAG: --profiling="true"
I0820 13:33:40.740461       9 flags.go:64] FLAG: --requestheader-allowed-names="[]"
I0820 13:33:40.740625       9 flags.go:64] FLAG: --requestheader-client-ca-file=""
I0820 13:33:40.740796       9 flags.go:64] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
I0820 13:33:40.740961       9 flags.go:64] FLAG: --requestheader-group-headers="[x-remote-group]"
I0820 13:33:40.741121       9 flags.go:64] FLAG: --requestheader-username-headers="[x-remote-user]"
I0820 13:33:40.741280       9 flags.go:64] FLAG: --secure-port="10259"
I0820 13:33:40.741430       9 flags.go:64] FLAG: --show-hidden-metrics-for-version=""
I0820 13:33:40.741585       9 flags.go:64] FLAG: --skip-headers="false"
I0820 13:33:40.741734       9 flags.go:64] FLAG: --skip-log-headers="false"
I0820 13:33:40.741888       9 flags.go:64] FLAG: --stderrthreshold="2"
I0820 13:33:40.742036       9 flags.go:64] FLAG: --tls-cert-file="/srv/kubernetes/kube-scheduler/server.crt"
I0820 13:33:40.742199       9 flags.go:64] FLAG: --tls-cipher-suites="[]"
I0820 13:33:40.742365       9 flags.go:64] FLAG: --tls-min-version=""
I0820 13:33:40.742518       9 flags.go:64] FLAG: --tls-private-key-file="/srv/kubernetes/kube-scheduler/server.key"
I0820 13:33:40.742673       9 flags.go:64] FLAG: --tls-sni-cert-key="[]"
I0820 13:33:40.742837       9 flags.go:64] FLAG: --v="2"
I0820 13:33:40.743017       9 flags.go:64] FLAG: --version="false"
I0820 13:33:40.743171       9 flags.go:64] FLAG: --vmodule=""
I0820 13:33:40.743333       9 flags.go:64] FLAG: --write-config-to=""
I0820 13:33:40.745214       9 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key"
W0820 13:33:52.162837       9 authentication.go:346] Error looking up in-cluster authentication configuration: Get "https://127.0.0.1/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
W0820 13:33:52.162901       9 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0820 13:33:52.162988       9 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0820 13:34:01.290044       9 configfile.go:96] "Using component config" config=<
        apiVersion: kubescheduler.config.k8s.io/v1beta2
        clientConnection:
          acceptContentTypes: ""
          burst: 100
          contentType: application/vnd.kubernetes.protobuf
          kubeconfig: /var/lib/kube-scheduler/kubeconfig
          qps: 50
        enableContentionProfiling: true
        enableProfiling: true
        healthzBindAddress: ""
        kind: KubeSchedulerConfiguration
        leaderElection:
          leaderElect: true
          leaseDuration: 15s
          renewDeadline: 10s
          resourceLock: leases
          resourceName: kube-scheduler
          resourceNamespace: kube-system
          retryPeriod: 2s
        metricsBindAddress: ""
        parallelism: 16
        percentageOfNodesToScore: 0
        podInitialBackoffSeconds: 1
        podMaxBackoffSeconds: 10
        profiles:
        - pluginConfig:
          - args:
              apiVersion: kubescheduler.config.k8s.io/v1beta2
              kind: DefaultPreemptionArgs
              minCandidateNodesAbsolute: 100
              minCandidateNodesPercentage: 10
            name: DefaultPreemption
          - args:
              apiVersion: kubescheduler.config.k8s.io/v1beta2
              hardPodAffinityWeight: 1
              kind: InterPodAffinityArgs
            name: InterPodAffinity
          - args:
              apiVersion: kubescheduler.config.k8s.io/v1beta2
              kind: NodeAffinityArgs
            name: NodeAffinity
          - args:
              apiVersion: kubescheduler.config.k8s.io/v1beta2
              kind: NodeResourcesBalancedAllocationArgs
              resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
            name: NodeResourcesBalancedAllocation
          - args:
              apiVersion: kubescheduler.config.k8s.io/v1beta2
              kind: NodeResourcesFitArgs
              scoringStrategy:
                resources:
                - name: cpu
                  weight: 1
                - name: memory
                  weight: 1
                type: LeastAllocated
            name: NodeResourcesFit
          - args:
              apiVersion: kubescheduler.config.k8s.io/v1beta2
              defaultingType: System
              kind: PodTopologySpreadArgs
            name: PodTopologySpread
          - args:
              apiVersion: kubescheduler.config.k8s.io/v1beta2
              bindTimeoutSeconds: 600
              kind: VolumeBindingArgs
            name: VolumeBinding
          plugins:
            bind:
              enabled:
              - name: DefaultBinder
                weight: 0
            filter:
              enabled:
              - name: NodeUnschedulable
                weight: 0
              - name: NodeName
                weight: 0
              - name: TaintToleration
                weight: 0
              - name: NodeAffinity
                weight: 0
              - name: NodePorts
                weight: 0
              - name: NodeResourcesFit
                weight: 0
              - name: VolumeRestrictions
                weight: 0
              - name: EBSLimits
                weight: 0
              - name: GCEPDLimits
                weight: 0
              - name: NodeVolumeLimits
                weight: 0
              - name: AzureDiskLimits
                weight: 0
              - name: VolumeBinding
                weight: 0
              - name: VolumeZone
                weight: 0
              - name: PodTopologySpread
                weight: 0
              - name: InterPodAffinity
                weight: 0
            multiPoint: {}
            permit: {}
            postBind: {}
            postFilter:
              enabled:
              - name: DefaultPreemption
                weight: 0
            preBind:
              enabled:
              - name: VolumeBinding
                weight: 0
            preFilter:
              enabled:
              - name: NodeResourcesFit
                weight: 0
              - name: NodePorts
                weight: 0
              - name: VolumeRestrictions
                weight: 0
              - name: PodTopologySpread
                weight: 0
              - name: InterPodAffinity
                weight: 0
              - name: VolumeBinding
                weight: 0
              - name: NodeAffinity
                weight: 0
            preScore:
              enabled:
              - name: InterPodAffinity
                weight: 0
              - name: PodTopologySpread
                weight: 0
              - name: TaintToleration
                weight: 0
              - name: NodeAffinity
                weight: 0
            queueSort:
              enabled:
              - name: PrioritySort
                weight: 0
            reserve:
              enabled:
              - name: VolumeBinding
                weight: 0
            score:
              enabled:
              - name: NodeResourcesBalancedAllocation
                weight: 1
              - name: ImageLocality
                weight: 1
              - name: InterPodAffinity
                weight: 1
              - name: NodeResourcesFit
                weight: 1
              - name: NodeAffinity
                weight: 1
              - name: PodTopologySpread
                weight: 2
              - name: TaintToleration
                weight: 1
          schedulerName: default-scheduler
 >
I0820 13:34:01.293164       9 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
I0820 13:34:01.293229       9 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0820 13:34:01.296011       9 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0820 13:34:01.296162       9 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0820 13:34:01.296441       9 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key" certDetail="\"kube-scheduler\" [serving] validServingFor=[kube-scheduler.kube-system.svc.cluster.local] issuer=\"kubernetes-ca\" (2022-08-18 13:32:29 +0000 UTC to 2023-11-24 08:32:29 +0000 UTC (now=2022-08-20 13:34:01.296403454 +0000 UTC))"
I0820 13:34:01.296824       9 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1661002422\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1661002421\" (2022-08-20 12:33:40 +0000 UTC to 2023-08-20 12:33:40 +0000 UTC (now=2022-08-20 13:34:01.296772829 +0000 UTC))"
I0820 13:34:01.296866       9 secure_serving.go:210] Serving securely on [::]:10259
I0820 13:34:01.297363       9 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key"
W0820 13:34:01.299431       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:01.299505       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:01.299864       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:01.299929       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:01.300170       9 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:01.300241       9 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:01.300501       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:01.300555       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:01.300868       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:01.300953       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:01.301196       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:01.301268       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:01.301502       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:01.301557       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:01.301813       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:01.301896       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:01.302135       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:01.302190       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:01.302450       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:01.302509       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:01.302749       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:01.302805       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
I0820 13:34:01.303293       9 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0820 13:34:01.304031       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:01.304186       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:01.304413       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:01.304550       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:01.304884       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:01.305029       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:01.305600       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:01.305682       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:12.159443       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0820 13:34:12.159686       9 trace.go:205] Trace[93993514]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (20-Aug-2022 13:34:02.158) (total time: 10001ms):
Trace[93993514]: ---"Objects listed" error:Get "https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:34:12.159)
Trace[93993514]: [10.001564918s] [10.001564918s] END
E0820 13:34:12.159787       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0820 13:34:12.184177       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0820 13:34:12.184360       9 trace.go:205] Trace[2020689274]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (20-Aug-2022 13:34:02.183) (total time: 10001ms):
Trace[2020689274]: ---"Objects listed" error:Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:34:12.184)
Trace[2020689274]: [10.001069382s] [10.001069382s] END
E0820 13:34:12.184456       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0820 13:34:12.253450       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0820 13:34:12.253515       9 trace.go:205] Trace[195493541]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (20-Aug-2022 13:34:02.252) (total time: 10001ms):
Trace[195493541]: ---"Objects listed" error:Get "https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:34:12.253)
Trace[195493541]: [10.001127586s] [10.001127586s] END
E0820 13:34:12.253529       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0820 13:34:12.386859       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0820 13:34:12.387256       9 trace.go:205] Trace[2016190505]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (20-Aug-2022 13:34:02.385) (total time: 10001ms):
Trace[2016190505]: ---"Objects listed" error:Get "https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:34:12.386)
Trace[2016190505]: [10.001461579s] [10.001461579s] END
E0820 13:34:12.387461       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0820 13:34:12.488367       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0820 13:34:12.488705       9 trace.go:205] Trace[464264619]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (20-Aug-2022 13:34:02.486) (total time: 10001ms):
Trace[464264619]: ---"Objects listed" error:Get "https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:34:12.488)
Trace[464264619]: [10.001801903s] [10.001801903s] END
E0820 13:34:12.488879       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0820 13:34:12.621050       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0820 13:34:12.621117       9 trace.go:205] Trace[639849359]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (20-Aug-2022 13:34:02.620) (total time: 10000ms):
Trace[639849359]: ---"Objects listed" error:Get "https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:34:12.621)
Trace[639849359]: [10.00084599s] [10.00084599s] END
E0820 13:34:12.621133       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0820 13:34:12.654131       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0820 13:34:12.654255       9 trace.go:205] Trace[1895368551]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (20-Aug-2022 13:34:02.653) (total time: 10001ms):
Trace[1895368551]: ---"Objects listed" error:Get "https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:34:12.654)
Trace[1895368551]: [10.001130184s] [10.001130184s] END
E0820 13:34:12.654279       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0820 13:34:12.677989       9 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0820 13:34:12.678105       9 trace.go:205] Trace[836974515]: "Reflector ListAndWatch" name:pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (20-Aug-2022 13:34:02.677) (total time: 10000ms):
Trace[836974515]: ---"Objects listed" error:Get "https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:34:12.677)
Trace[836974515]: [10.00097244s] [10.00097244s] END
E0820 13:34:12.678122       9 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0820 13:34:12.726474       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0820 13:34:12.726776       9 trace.go:205] Trace[969491598]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (20-Aug-2022 13:34:02.725) (total time: 10001ms):
Trace[969491598]: ---"Objects listed" error:Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:34:12.726)
Trace[969491598]: [10.001656179s] [10.001656179s] END
E0820 13:34:12.726818       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0820 13:34:12.766821       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0820 13:34:12.766897       9 trace.go:205] Trace[313774566]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (20-Aug-2022 13:34:02.766) (total time: 10000ms):
Trace[313774566]: ---"Objects listed" error:Get "https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:34:12.766)
Trace[313774566]: [10.000863709s] [10.000863709s] END
E0820 13:34:12.766936       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0820 13:34:12.780962       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0820 13:34:12.781055       9 trace.go:205] Trace[806398708]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (20-Aug-2022 13:34:02.779) (total time: 10001ms):
Trace[806398708]: ---"Objects listed" error:Get "https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:34:12.780)
Trace[806398708]: [10.001359345s] [10.001359345s] END
E0820 13:34:12.781163       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0820 13:34:12.827996       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0820 13:34:12.828076       9 trace.go:205] Trace[1932859828]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (20-Aug-2022 13:34:02.826) (total time: 10001ms):
Trace[1932859828]: ---"Objects listed" error:Get "https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:34:12.827)
Trace[1932859828]: [10.001483047s] [10.001483047s] END
E0820 13:34:12.828273       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0820 13:34:12.833597       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0820 13:34:12.833793       9 trace.go:205] Trace[1305340589]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (20-Aug-2022 13:34:02.832) (total time: 10001ms):
Trace[1305340589]: ---"Objects listed" error:Get "https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:34:12.833)
Trace[1305340589]: [10.001354835s] [10.001354835s] END
E0820 13:34:12.833810       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0820 13:34:12.849270       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0820 13:34:12.849326       9 trace.go:205] Trace[1968290505]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (20-Aug-2022 13:34:02.847) (total time: 10001ms):
Trace[1968290505]: ---"Objects listed" error:Get "https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:34:12.849)
Trace[1968290505]: [10.001550987s] [10.001550987s] END
E0820 13:34:12.849345       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0820 13:34:12.860712       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0820 13:34:12.860877       9 trace.go:205] Trace[813012272]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:134 (20-Aug-2022 13:34:02.859) (total time: 10001ms):
Trace[813012272]: ---"Objects listed" error:Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:34:12.860)
Trace[813012272]: [10.001611008s] [10.001611008s] END
E0820 13:34:12.861003       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0820 13:34:22.873310       9 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47596->127.0.0.1:443: read: connection reset by peer
E0820 13:34:22.873491       9 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47596->127.0.0.1:443: read: connection reset by peer
W0820 13:34:22.873796       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47566->127.0.0.1:443: read: connection reset by peer
E0820 13:34:22.874039       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47566->127.0.0.1:443: read: connection reset by peer
W0820 13:34:22.874291       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47570->127.0.0.1:443: read: connection reset by peer
E0820 13:34:22.874578       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47570->127.0.0.1:443: read: connection reset by peer
W0820 13:34:22.874995       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47574->127.0.0.1:443: read: connection reset by peer
E0820 13:34:22.875145       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47574->127.0.0.1:443: read: connection reset by peer
W0820 13:34:22.875471       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47576->127.0.0.1:443: read: connection reset by peer
E0820 13:34:22.875656       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47576->127.0.0.1:443: read: connection reset by peer
W0820 13:34:22.875963       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47580->127.0.0.1:443: read: connection reset by peer
E0820 13:34:22.876258       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47580->127.0.0.1:443: read: connection reset by peer
W0820 13:34:22.876329       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47584->127.0.0.1:443: read: connection reset by peer
E0820 13:34:22.876590       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47584->127.0.0.1:443: read: connection reset by peer
W0820 13:34:22.876661       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47586->127.0.0.1:443: read: connection reset by peer
E0820 13:34:22.876862       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47586->127.0.0.1:443: read: connection reset by peer
W0820 13:34:22.877041       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47588->127.0.0.1:443: read: connection reset by peer
E0820 13:34:22.877079       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47588->127.0.0.1:443: read: connection reset by peer
W0820 13:34:22.877402       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47592->127.0.0.1:443: read: connection reset by peer
E0820 13:34:22.877454       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47592->127.0.0.1:443: read: connection reset by peer
W0820 13:34:22.877424       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47594->127.0.0.1:443: read: connection reset by peer
W0820 13:34:22.877937       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47598->127.0.0.1:443: read: connection reset by peer
E0820 13:34:22.877984       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47598->127.0.0.1:443: read: connection reset by peer
E0820 13:34:22.877950       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47594->127.0.0.1:443: read: connection reset by peer
W0820 13:34:22.877698       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47568->127.0.0.1:443: read: connection reset by peer
E0820 13:34:22.878254       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47568->127.0.0.1:443: read: connection reset by peer
W0820 13:34:22.878136       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47578->127.0.0.1:443: read: connection reset by peer
E0820 13:34:22.878283       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47578->127.0.0.1:443: read: connection reset by peer
W0820 13:34:22.875999       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47564->127.0.0.1:443: read: connection reset by peer
E0820 13:34:22.878311       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:47564->127.0.0.1:443: read: connection reset by peer
W0820 13:34:26.246605       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:26.246640       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:26.316963       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:26.316998       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:26.993781       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:26.993848       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:27.096630       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:27.096779       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:27.142641       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:27.142710       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:27.375693       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:27.375748       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:27.662663       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:27.662869       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://127.0.0.1/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:27.989750       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:27.990721       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:28.149763       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:28.149950       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://127.0.0.1/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:28.342566       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:28.342610       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:28.699364       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:28.699605       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:29.053999       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:29.054180       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:29.088874       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:29.089178       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:29.104272       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:29.104445       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:29.173543       9 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:29.173602       9 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:33.115856       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:33.115951       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:33.916989       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:33.917153       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://127.0.0.1/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:34.730352       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:34.730386       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:35.139818       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:35.139895       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:35.519999       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:35.520317       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:35.582879       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:35.582967       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:35.667314       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:35.667572       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://127.0.0.1/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:36.287148       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:36.287904       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:37.081463       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:37.081560       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://127.0.0.1/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:37.215808       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:37.215967       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://127.0.0.1/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:37.842090       9 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
E0820 13:34:37.842386       9 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://127.0.0.1/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
W0820 13:34:42.090399       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0820 13:34:42.095100       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0820 13:34:42.095452       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0820 13:34:42.095753       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0820 13:34:42.095519       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0820 13:34:42.096075       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0820 13:34:42.095580       9 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0820 13:34:42.096390       9 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I0820 13:35:00.937867       9 node_tree.go:65] "Added node in listed group to NodeTree" node="i-0e5181fac440f3629" zone=""
I0820 13:35:00.938253       9 node_tree.go:65] "Added node in listed group to NodeTree" node="i-0237c9fb04a22250e" zone=""
I0820 13:35:00.938445       9 node_tree.go:65] "Added node in listed group to NodeTree" node="i-02f984e196d3e33de" zone=""
I0820 13:35:01.394388       9 node_tree.go:79] "Removed node in listed group from NodeTree" node="i-0e5181fac440f3629" zone=""
I0820 13:35:01.394458       9 node_tree.go:65] "Added node in listed group to NodeTree" node="i-0e5181fac440f3629" zone="us-east-1:\x00:us-east-1b"
I0820 13:35:01.730806       9 node_tree.go:79] "Removed node in listed group from NodeTree" node="i-0237c9fb04a22250e" zone=""
I0820 13:35:01.730840       9 node_tree.go:65] "Added node in listed group to NodeTree" node="i-0237c9fb04a22250e" zone="us-east-1:\x00:us-east-1a"
I0820 13:35:02.128790       9 node_tree.go:79] "Removed node in listed group from NodeTree" node="i-02f984e196d3e33de" zone=""
I0820 13:35:02.129045       9 node_tree.go:65] "Added node in listed group to NodeTree" node="i-02f984e196d3e33de" zone="us-east-1:\x00:us-east-1a"
I0820 13:35:02.896963       9 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0820 13:35:02.897277       9 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" certDetail="\"kubernetes-ca\" [] issuer=\"<self>\" (2022-08-18 13:31:34 +0000 UTC to 2032-08-17 13:31:34 +0000 UTC (now=2022-08-20 13:35:02.89724054 +0000 UTC))"
I0820 13:35:02.900201       9 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key" certDetail="\"kube-scheduler\" [serving] validServingFor=[kube-scheduler.kube-system.svc.cluster.local] issuer=\"kubernetes-ca\" (2022-08-18 13:32:29 +0000 UTC to 2023-11-24 08:32:29 +0000 UTC (now=2022-08-20 13:35:02.900140817 +0000 UTC))"
I0820 13:35:02.901290       9 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1661002422\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1661002421\" (2022-08-20 12:33:40 +0000 UTC to 2023-08-20 12:33:40 +0000 UTC (now=2022-08-20 13:35:02.901258901 +0000 UTC))"
I0820 13:35:03.897594       9 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-scheduler...
root@ip-10-10-10-96:/#

But I'm not sure how to deal with these errors.

olemarkus commented 2 years ago

kube-scheduler runs just fine. The problem is that your worker nodes are not joining the cluster. In order to debug this you need to ssh into the worker nodes and look at the output of journalctl -u kops-configuration (this one should end with "kops successfully configured") and journalctl -u kubelet (this one should say something about the node being registered).

Ignore any errors talking about CNI or CSINode.
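
A minimal sketch of those checks, run on one of the worker nodes (the SSH user depends on the AMI and is a placeholder here):

```
ssh ubuntu@<worker-node-ip>
sudo journalctl -u kops-configuration   # should end with "kops successfully configured"
sudo journalctl -u kubelet              # should mention the node registering itself with the API
```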

IgalSc commented 2 years ago

Thank you for your help, @olemarkus

Running journalctl -u kops-configuration, I get lots of

error running task "BootstrapClientTask/BootstrapClient" (6m39s remaining to succeed): lookup kops-controller.internal.clustername.k8s.local on 127.0.0.53:53: server misbehaving

while running journalctl -u kubelet, I get
-- Logs begin at Sun 2022-08-21 14:58:01 UTC, end at Sun 2022-08-21 15:04:34 UTC. --
-- No entries --

Checking the hosts file, I see the following:

cat /etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

# Begin host entries managed by kops - do not edit
# End host entries managed by kops
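
On a gossip (.k8s.local) cluster these internal names are expected to resolve from the protokube-managed block in /etc/hosts rather than from Route53, so an empty managed block is consistent with the "server misbehaving" lookup error above. A quick check (clustername is a placeholder):

```
getent hosts kops-controller.internal.clustername.k8s.local
getent hosts api.internal.clustername.k8s.local
grep -A 3 "Begin host entries managed by kops" /etc/hosts
```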

On one of the master nodes, I get the following hosts file:

cat /etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

# Begin host entries managed by kops - do not edit
127.0.0.1       api.internal.clustername.k8s.local api.clustername.k8s.local
# End host entries managed by kops

On the same master node, running journalctl -u kops-configuration, I get:

Aug 21 15:38:21 i-0e781db7abf2f95a1 systemd[1]: kops-configuration.service: Succeeded.
Aug 21 15:38:21 i-0e781db7abf2f95a1 systemd[1]: Finished Run kOps bootstrap (nodeup).
olemarkus commented 2 years ago

Can you also look at journalctl -u protokube? Actually, I'm not sure how well IPv6 is tested when using gossip-based clusters.

IgalSc commented 2 years ago

@olemarkus

On the worker node:

Aug 22 08:43:05 i-073601269c9d00417 protokube[4142]: I0822 08:43:05.345621    4142 dns.go:47] DNSView unchanged: 5
Aug 22 08:43:10 i-073601269c9d00417 protokube[4142]: I0822 08:43:10.346603    4142 dns.go:47] DNSView unchanged: 5
Aug 22 08:43:15 i-073601269c9d00417 protokube[4142]: I0822 08:43:15.347658    4142 dns.go:47] DNSView unchanged: 5
Aug 22 08:43:15 i-073601269c9d00417 protokube[4142]: I0822 08:43:15.781400    4142 peer.go:111] OnGossip &KVState{Records:map[string]*KVStateRecord{dns/local/AAAA/api.internal.clustername.k8s.local: &KVStateRecord{Data:[50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 56 51 101 97 58 99 100 48 99 58 52 98 55 100 58 101 55 100 97 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 102 97 102 50 58 52 98 101 52 58 49 102 50 56 58 53 54 99 53 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 51 48 58 55 50 49 52 58 99 98 99 99 58 97 57 49 52 58 51 97 57],Tombstone:false,Version:1661096420,},dns/local/AAAA/kops-controller.internal.clustername.k8s.local: &KVStateRecord{Data:[50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 56 51 101 97 58 99 100 48 99 58 52 98 55 100 58 101 55 100 97 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 102 97 102 50 58 52 98 101 52 58 49 102 50 56 58 53 54 99 53 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 51 48 58 55 50 49 52 58 99 98 99 99 58 97 57 49 52 58 51 97 57],Tombstone:false,Version:1661096390,},dns/local/NS/local: &KVStateRecord{Data:[103 111 115 115 105 112],Tombstone:false,Version:1661096307,},},} => delta empty
Aug 22 08:43:16 i-073601269c9d00417 protokube[4142]: I0822 08:43:16.679022    4142 peer.go:111] OnGossip &KVState{Records:map[string]*KVStateRecord{dns/local/AAAA/api.internal.clustername.k8s.local: &KVStateRecord{Data:[50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 56 51 101 97 58 99 100 48 99 58 52 98 55 100 58 101 55 100 97 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 102 97 102 50 58 52 98 101 52 58 49 102 50 56 58 53 54 99 53 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 51 48 58 55 50 49 52 58 99 98 99 99 58 97 57 49 52 58 51 97 57],Tombstone:false,Version:1661096420,},dns/local/AAAA/kops-controller.internal.clustername.k8s.local: &KVStateRecord{Data:[50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 56 51 101 97 58 99 100 48 99 58 52 98 55 100 58 101 55 100 97 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 102 97 102 50 58 52 98 101 52 58 49 102 50 56 58 53 54 99 53 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 51 48 58 55 50 49 52 58 99 98 99 99 58 97 57 49 52 58 51 97 57],Tombstone:false,Version:1661096390,},dns/local/NS/local: &KVStateRecord{Data:[103 111 115 115 105 112],Tombstone:false,Version:1661096307,},},} => delta empty
Aug 22 08:43:19 i-073601269c9d00417 protokube[4142]: I0822 08:43:19.426354    4142 peer.go:111] OnGossip &KVState{Records:map[string]*KVStateRecord{dns/local/AAAA/api.internal.clustername.k8s.local: &KVStateRecord{Data:[50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 56 51 101 97 58 99 100 48 99 58 52 98 55 100 58 101 55 100 97 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 102 97 102 50 58 52 98 101 52 58 49 102 50 56 58 53 54 99 53 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 51 48 58 55 50 49 52 58 99 98 99 99 58 97 57 49 52 58 51 97 57],Tombstone:false,Version:1661096420,},dns/local/AAAA/kops-controller.internal.clustername.k8s.local: &KVStateRecord{Data:[50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 56 51 101 97 58 99 100 48 99 58 52 98 55 100 58 101 55 100 97 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 102 97 102 50 58 52 98 101 52 58 49 102 50 56 58 53 54 99 53 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 51 48 58 55 50 49 52 58 99 98 99 99 58 97 57 49 52 58 51 97 57],Tombstone:false,Version:1661096390,},dns/local/NS/local: &KVStateRecord{Data:[103 111 115 115 105 112],Tombstone:false,Version:1661096307,},},} => delta empty
Aug 22 08:43:20 i-073601269c9d00417 protokube[4142]: I0822 08:43:20.347807    4142 dns.go:47] DNSView unchanged: 5
Aug 22 08:43:25 i-073601269c9d00417 protokube[4142]: I0822 08:43:25.348886    4142 dns.go:47] DNSView unchanged: 5
Aug 22 08:43:27 i-073601269c9d00417 protokube[4142]: I0822 08:43:27.014253    4142 peer.go:93] Gossip => complete &KVState{Records:map[string]*KVStateRecord{dns/local/AAAA/api.internal.clustername.k8s.local: &KVStateRecord{Data:[50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 56 51 101 97 58 99 100 48 99 58 52 98 55 100 58 101 55 100 97 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 102 97 102 50 58 52 98 101 52 58 49 102 50 56 58 53 54 99 53 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 51 48 58 55 50 49 52 58 99 98 99 99 58 97 57 49 52 58 51 97 57],Tombstone:false,Version:1661096420,},dns/local/AAAA/kops-controller.internal.clustername.k8s.local: &KVStateRecord{Data:[50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 56 51 101 97 58 99 100 48 99 58 52 98 55 100 58 101 55 100 97 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 102 97 102 50 58 52 98 101 52 58 49 102 50 56 58 53 54 99 53 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 51 48 58 55 50 49 52 58 99 98 99 99 58 97 57 49 52 58 51 97 57],Tombstone:false,Version:1661096390,},dns/local/NS/local: &KVStateRecord{Data:[103 111 115 115 105 112],Tombstone:false,Version:1661096307,},},}

and this is on master node

Aug 22 12:28:06 i-0e781db7abf2f95a1 protokube[3966]: I0822 12:28:06.026155    3966 channels.go:31] checking channel: "s3://clustername-kops-state-store/clustername.k8s.local/addons/bootstrap-channel.yaml"
Aug 22 12:28:06 i-0e781db7abf2f95a1 protokube[3966]: I0822 12:28:06.026464    3966 channels.go:45] Running command: /opt/kops/bin/channels apply channel s3://clustername-kops-state-store/clustername.k8s.local/addons/bootstrap-channel.yaml --v=4 --yes
Aug 22 12:28:06 i-0e781db7abf2f95a1 protokube[3966]: I0822 12:28:06.072792    3966 channels.go:48] error running /opt/kops/bin/channels apply channel s3://clustername-kops-state-store/clustername.k8s.local/addons/bootstrap-channel.yaml --v=4 --yes:
Aug 22 12:28:06 i-0e781db7abf2f95a1 protokube[3966]: I0822 12:28:06.073134    3966 channels.go:49]
Aug 22 12:28:06 i-0e781db7abf2f95a1 protokube[3966]: error querying kubernetes version: Get "https://127.0.0.1/version": dial tcp 127.0.0.1:443: connect: connection refused
Aug 22 12:28:06 i-0e781db7abf2f95a1 protokube[3966]: I0822 12:28:06.073355    3966 channels.go:34] apply channel output was:
Aug 22 12:28:06 i-0e781db7abf2f95a1 protokube[3966]: error querying kubernetes version: Get "https://127.0.0.1/version": dial tcp 127.0.0.1:443: connect: connection refused
Aug 22 12:28:06 i-0e781db7abf2f95a1 protokube[3966]: W0822 12:28:06.073588    3966 kube_boot.go:89] error applying channel "s3://clustername-kops-state-store/clustername.k8s.local/addons/bootstrap-channel.yaml": error running channels: exit status 1
Aug 22 12:28:06 i-0e781db7abf2f95a1 protokube[3966]: I0822 12:28:06.073630    3966 labeler.go:37] Querying k8s for node "i-0e781db7abf2f95a1"
Aug 22 12:28:06 i-0e781db7abf2f95a1 protokube[3966]: W0822 12:28:06.074135    3966 kube_boot.go:94] error bootstrapping master node labels: error querying node "i-0e781db7abf2f95a1": Get "https://127.0.0.1/api/v1/nodes/i-0e781db7abf2f95a1": dial tcp 127.0.0.1:443: connect: connection refused
Aug 22 12:28:06 i-0e781db7abf2f95a1 protokube[3966]: I0822 12:28:06.514001    3966 dns.go:47] DNSView unchanged: 6
Aug 22 12:28:11 i-0e781db7abf2f95a1 protokube[3966]: I0822 12:28:11.514879    3966 dns.go:47] DNSView unchanged: 6
Aug 22 12:28:15 i-0e781db7abf2f95a1 protokube[3966]: I0822 12:28:15.639776    3966 peer.go:111] OnGossip &KVState{Records:map[string]*KVStateRecord{dns/local/AAAA/api.internal.clustername.k8s.local: &KVStateRecord{Data:[50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 56 51 101 97 58 99 100 48 99 58 52 98 55 100 58 101 55 100 97 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 102 97 102 50 58 52 98 101 52 58 49 102 50 56 58 53 54 99 53 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 51 48 58 55 50 49 52 58 99 98 99 99 58 97 57 49 52 58 51 97 57],Tombstone:false,Version:1661096420,},dns/local/AAAA/kops-controller.internal.clustername.k8s.local: &KVStateRecord{Data:[50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 56 51 101 97 58 99 100 48 99 58 52 98 55 100 58 101 55 100 97 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 102 97 102 50 58 52 98 101 52 58 49 102 50 56 58 53 54 99 53 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 51 48 58 55 50 49 52 58 99 98 99 99 58 97 57 49 52 58 51 97 57],Tombstone:false,Version:1661096390,},dns/local/NS/local: &KVStateRecord{Data:[103 111 115 115 105 112],Tombstone:false,Version:1661096307,},},} => delta empty
Aug 22 12:28:15 i-0e781db7abf2f95a1 protokube[3966]: I0822 12:28:15.780084    3966 peer.go:111] OnGossip &KVState{Records:map[string]*KVStateRecord{dns/local/AAAA/api.internal.clustername.k8s.local: &KVStateRecord{Data:[50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 56 51 101 97 58 99 100 48 99 58 52 98 55 100 58 101 55 100 97 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 102 97 102 50 58 52 98 101 52 58 49 102 50 56 58 53 54 99 53 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 51 48 58 55 50 49 52 58 99 98 99 99 58 97 57 49 52 58 51 97 57],Tombstone:false,Version:1661096420,},dns/local/AAAA/kops-controller.internal.clustername.k8s.local: &KVStateRecord{Data:[50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 56 51 101 97 58 99 100 48 99 58 52 98 55 100 58 101 55 100 97 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 50 48 58 102 97 102 50 58 52 98 101 52 58 49 102 50 56 58 53 54 99 53 44 50 54 48 48 58 49 102 49 56 58 49 48 52 54 58 101 57 51 48 58 55 50 49 52 58 99 98 99 99 58 97 57 49 52 58 51 97 57],Tombstone:false,Version:1661096390,},dns/local/NS/local: &KVStateRecord{Data:[103 111 115 115 105 112],Tombstone:false,Version:1661096307,},},} => delta empty
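
A hedged sketch of checks one might run on a control-plane node when protokube reports "dial tcp 127.0.0.1:443: connect: connection refused" (assumes a containerd-based kOps image with crictl available):

```
sudo curl -k https://127.0.0.1/healthz             # is kube-apiserver answering locally at all?
sudo crictl ps --name kube-apiserver               # is the apiserver container running?
sudo journalctl -u kubelet --since "30 min ago"    # kubelet / static pod errors
```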
IgalSc commented 2 years ago

> Actually, I'm not sure how well IPv6 is tested when using gossip-based clusters.

Looks like that's exactly the issue, @olemarkus. The moment I switched from Gossip DNS to an AWS hosted zone, the cluster validated. Looks like a bug in either the implementation, the documentation, or both?
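
For anyone hitting the same wall, a hedged sketch of creating the cluster against a Route53-hosted name instead of a gossip .k8s.local name; clustername.example.com is a placeholder for a name in a hosted zone you control, and the other flags stay as in your usual command:

```
kops create cluster --name clustername.example.com \
                    --dns public \
                    --zones us-east-1a,us-east-1b \
                    --networking calico \
                    --ipv6
```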

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

johngmyers commented 1 year ago

We don't appear to have any e2e tests on AWS gossip, much less any IPv6 ones.

I'm not sure what the use case is for IPv6 Gossip; if you're big enough to want IPv6, you can probably afford a Route53 zone. But if we're not going to support it, we should add an API validation against the combination.

/remove-lifecycle stale

olemarkus commented 1 year ago

I suggest not supporting it, and blocking the combination, until there is a use case that can't use --dns=none.

IgalSc commented 1 year ago

> We don't appear to have any e2e tests on AWS gossip, much less any IPv6 ones.
>
> I'm not sure what the use case is for IPv6 Gossip; if you're big enough to want IPv6, you can probably afford a Route53 zone. But if we're not going to support it, we should add an API validation against the combination.
>
> /remove-lifecycle stale

It's not about being big enough to require IPv6; it's about being able to handle requests from embedded devices that communicate over IPv6 only. You cannot add an IPv6 or dual-stack load balancer if your cluster is created as IPv4-only.

johngmyers commented 1 year ago

I was under the impression that you could use a dualstack load balancer with an IPv4-only cluster, as long as your utility subnets are dual-stack.

IgalSc commented 1 year ago

@johngmyers How do you create these subnets using kOps? Or do you suggest using a "shared VPC" with existing subnets?

IgalSc commented 1 year ago

@johngmyers also, as per https://github.com/kubernetes/kops/issues/14204#issuecomment-1232690138, "You have to use load balancer controller addon and then an NLB with the dualstack annotation (not the ipfamily* fields)."
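
A hedged sketch of what that annotation-based approach might look like, assuming the AWS Load Balancer Controller addon is installed; the service name, selector, and ports are placeholders, and the annotations are the controller's documented dualstack NLB settings rather than kOps-specific fields:

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: dualstack
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF
```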

johngmyers commented 1 year ago

I was suggesting using "shared subnets" that were provisioned externally.

And yes, you would have to use LBC to provision those load balancers.

For context, kOps is looking into deprecating Gossip, moving those use cases to none-DNS.
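
For completeness, a hedged sketch of the none-DNS option mentioned here, which needs neither gossip nor a hosted zone (supported in recent kOps releases; the name and zones are placeholders):

```
kops create cluster --name clustername.example.com --dns none \
                    --zones us-east-1a,us-east-1b --networking calico --ipv6
```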

IgalSc commented 1 year ago

Thank you @johngmyers. I switched to a DNS-based cluster, but it still fails if I use IPv6 and Calico. I'll try an IPv4 cluster in a shared VPC/subnets with IPv6.

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 year ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/kops/issues/14149#issuecomment-1537254346):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.