loft-sh / vcluster

vCluster - Create fully functional virtual Kubernetes clusters - Each vcluster runs inside a namespace of the underlying k8s cluster. It's cheaper than creating separate full-blown clusters and it offers better multi-tenancy and isolation than regular namespaces.
https://www.vcluster.com
Apache License 2.0

Syncer Container crashes in Minikube environment #146

Open · sam-sre opened this issue 2 years ago

sam-sre commented 2 years ago

Hi,

In my other issue I had the same problems; the environment there was a Kubernetes v1.20 (kubeadm) cluster running inside Vagrant boxes.

To check whether the environment was causing these problems, I switched to Minikube running inside a VM.
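(The report doesn't say how this vcluster was created; with the 0.4.x CLI the usual invocation would be something like the following. Treat it as an assumed reproduction sketch, not the reporter's exact command.)

# assumed reproduction command for vcluster 0.4.x; names taken from the pod below
vcluster create vcluster-1 -n host-namespace-1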

Environment:

Outputs and Logs

kg po -n host-namespace-1
NAME           READY   STATUS             RESTARTS       AGE
vcluster-1-0   1/2     CrashLoopBackOff   26 (44s ago)   38m
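
(`kg` and `k` above are presumably shell aliases; the unaliased equivalents would be:)

# assumed aliases, common kubectl shorthand; not stated in the report
alias k='kubectl'
alias kg='kubectl get'
kubectl get pods -n host-namespace-1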
k describe pod/vcluster-1-0 -n host-namespace-1
Name:         vcluster-1-0
Namespace:    host-namespace-1
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Tue, 12 Oct 2021 07:42:59 -0700
Labels:       app=vcluster
              controller-revision-hash=vcluster-1-75557ffc7d
              release=vcluster-1
              statefulset.kubernetes.io/pod-name=vcluster-1-0
Annotations:  <none>
Status:       Running
IP:           172.17.0.5
IPs:
  IP:           172.17.0.5
Controlled By:  StatefulSet/vcluster-1
Containers:
  vcluster:
    Container ID:  docker://a250f81556227d422dfa15c96760a2e70b9c6c674cfcb5a1beb0c13ac85b773a
    Image:         rancher/k3s:v1.22.1-rc1-k3s1
    Image ID:      docker-pullable://rancher/k3s@sha256:809515947dd3630fc4f8a72f277328d8cfe0ee08a8e5a0c59769e0f4f2644b5b
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/k3s
    Args:
      server
      --write-kubeconfig=/k3s-config/kube-config.yaml
      --data-dir=/data
      --disable=traefik,servicelb,metrics-server,local-storage
      --disable-network-policy
      --disable-agent
      --disable-scheduler
      --disable-cloud-controller
      --flannel-backend=none
      --kube-controller-manager-arg=controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle
      --service-cidr=10.96.0.0/12
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 12 Oct 2021 08:20:22 -0700
      Finished:     Tue, 12 Oct 2021 08:20:26 -0700
    Ready:          False
    Restart Count:  12
    Limits:
      memory:  2Gi
    Requests:
      cpu:        200m
      memory:     256Mi
    Environment:  <none>
    Mounts:
      /data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6wsqc (ro)
  syncer:
    Container ID:  docker://0114df85df05a028e8f4073d6a798ebba7e7cbc86eb2e6847e9340eae2cf9a6c
    Image:         loftsh/vcluster:0.4.1
    Image ID:      docker-pullable://loftsh/vcluster@sha256:a0cfe246a6de94e0ee67d4cbad4fbd6a4136cdfe789571d46bb09853d476705d
    Port:          <none>
    Host Port:     <none>
    Args:
      --service-name=vcluster-1
      --suffix=vcluster-1
      --owning-statefulset=vcluster-1
      --out-kube-config-secret=vc-vcluster-1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Tue, 12 Oct 2021 08:19:54 -0700
      Finished:     Tue, 12 Oct 2021 08:21:14 -0700
    Ready:          False
    Restart Count:  14
    Limits:
      memory:  1Gi
    Requests:
      cpu:        100m
      memory:     128Mi
    Liveness:     http-get https://:8443/healthz delay=60s timeout=1s period=2s #success=1 #failure=10
    Readiness:    http-get https://:8443/readyz delay=0s timeout=1s period=2s #success=1 #failure=30
    Environment:  <none>
    Mounts:
      /data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6wsqc (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-vcluster-1-0
    ReadOnly:   false
  kube-api-access-6wsqc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  40m                  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         40m                  default-scheduler  Successfully assigned host-namespace-1/vcluster-1-0 to minikube
  Normal   Pulling           40m                  kubelet            Pulling image "rancher/k3s:v1.22.1-rc1-k3s1"
  Normal   Pulled            40m                  kubelet            Successfully pulled image "rancher/k3s:v1.22.1-rc1-k3s1" in 12.332461136s
  Normal   Pulling           40m                  kubelet            Pulling image "loftsh/vcluster:0.4.1"
  Normal   Started           40m                  kubelet            Started container syncer
  Normal   Created           40m                  kubelet            Created container syncer
  Normal   Pulled            40m                  kubelet            Successfully pulled image "loftsh/vcluster:0.4.1" in 8.987138409s
  Normal   Started           40m (x2 over 40m)    kubelet            Started container vcluster
  Normal   Created           40m (x2 over 40m)    kubelet            Created container vcluster
  Normal   Pulled            40m                  kubelet            Container image "rancher/k3s:v1.22.1-rc1-k3s1" already present on machine
  Warning  Unhealthy         39m (x8 over 39m)    kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500
  Warning  Unhealthy         10m (x447 over 40m)  kubelet            Readiness probe failed: Get "https://172.17.0.5:8443/readyz": dial tcp 172.17.0.5:8443: connect: connection refused
  Warning  BackOff           20s (x219 over 39m)  kubelet            Back-off restarting failed container
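
(The first warning above is an unbound PersistentVolumeClaim. It resolved here, since the pod did schedule, but if a vcluster pod stays Pending on Minikube it is worth verifying that the claim bound and that a default StorageClass exists. A sketch, using the claim name from the describe output:)

kubectl get pvc data-vcluster-1-0 -n host-namespace-1
kubectl get storageclass                                    # Minikube should ship a default 'standard' class
kubectl describe pvc data-vcluster-1-0 -n host-namespace-1  # binding errors show up under Events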
k logs vcluster-1-0 -c syncer -n host-namespace-1
I1012 15:04:35.213523       1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds
I1012 15:04:36.194233       1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds
I1012 15:04:37.199019       1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds
I1012 15:04:38.196370       1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds
I1012 15:04:39.194958       1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds
I1012 15:04:40.199829       1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds
I1012 15:04:41.199651       1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds
I1012 15:04:42.199147       1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds
I1012 15:04:43.196107       1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds
I1012 15:04:44.198271       1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds
I1012 15:04:45.199781       1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds
I1012 15:04:46.194389       1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds
I1012 15:04:47.196572       1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds
I1012 15:04:48.194757       1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds
I1012 15:04:51.022238       1 main.go:234] Using physical cluster at https://10.96.0.1:443
I1012 15:04:51.052488       1 main.go:265] Can connect to virtual cluster with version v1.22.1-rc1+k3s1
I1012 15:04:51.160674       1 leaderelection.go:243] attempting to acquire leader lease host-namespace-1/vcluster-vcluster-1-controller...
I1012 15:04:51.161313       1 plugins.go:158] Loaded 1 mutating admission controller(s) successfully in the following order: MutatingAdmissionWebhook.
I1012 15:04:51.161335       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
I1012 15:04:51.181490       1 leaderelection.go:253] successfully acquired lease host-namespace-1/vcluster-vcluster-1-controller
I1012 15:04:51.183136       1 leaderelection.go:68] Acquired leadership and run vcluster in leader mode
I1012 15:04:51.183868       1 leaderelection.go:31] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"host-namespace-1", Name:"vcluster-vcluster-1-controller", UID:"a7b18768-1b1c-4b89-b5b8-dfae2b79ad25", APIVersion:"v1", ResourceVersion:"1806", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' vcluster-1-0-external-vcluster-controller became leader
E1012 15:04:51.185050       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"vcluster-vcluster-1-controller.16ad50cf52b627f5", GenerateName:"", Namespace:"host-namespace-1", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"ConfigMap", Namespace:"host-namespace-1", Name:"vcluster-vcluster-1-controller", UID:"a7b18768-1b1c-4b89-b5b8-dfae2b79ad25", APIVersion:"v1", ResourceVersion:"1806", FieldPath:""}, Reason:"LeaderElection", Message:"vcluster-1-0-external-vcluster-controller became leader", Source:v1.EventSource{Component:"vcluster", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc05186e4cad129f5, ext:17031191098, loc:(*time.Location)(0x3221280)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc05186e4cad129f5, ext:17031191098, loc:(*time.Location)(0x3221280)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:serviceaccount:host-namespace-1:vc-vcluster-1" cannot create resource "events" in API group "" in the namespace "host-namespace-1"' (will not retry!)
I1012 15:04:51.231338       1 loghelper.go:53] Start priorityclasses sync controller
I1012 15:04:51.231596       1 loghelper.go:53] Start configmaps sync controller
I1012 15:04:51.232666       1 loghelper.go:53] Start secrets sync controller
I1012 15:04:51.232806       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-forward: Starting EventSource source &source.Kind{Type:(*v1.ConfigMap)(0xc000183b80), cache:(*cache.informerCache)(0xc00013afb0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.232873       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc00071dfc0), run:(*generic.forwardController)(0xc000595980), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.232888       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-forward: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc0002bd800), cache:(*cache.informerCache)(0xc00013afb0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.232893       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-forward: Starting Controller
I1012 15:04:51.232982       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-forward: Starting workers worker count 1
I1012 15:04:51.233236       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-backward: Starting EventSource source &source.Kind{Type:(*v1.ConfigMap)(0xc000183cc0), cache:(*cache.informerCache)(0xc000286bd0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.233283       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc00075e100), run:(*generic.backwardController)(0xc000595da0), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.233295       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-backward: Starting Controller
I1012 15:04:51.236150       1 loghelper.go:53] Start endpoints sync controller
I1012 15:04:51.239676       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-forward: Starting EventSource source &source.Kind{Type:(*v1.Secret)(0xc00062cb40), cache:(*cache.informerCache)(0xc00013afb0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.239730       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc0008a3730), run:(*generic.forwardController)(0xc000321ce0), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.239745       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-forward: Starting EventSource source &source.Kind{Type:(*v1.Ingress)(0xc000cc3c80), cache:(*cache.informerCache)(0xc00013afb0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.239750       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-backward: Starting EventSource source &source.Kind{Type:(*v1.Secret)(0xc00062d180), cache:(*cache.informerCache)(0xc000286bd0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.239760       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-forward: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc000580c00), cache:(*cache.informerCache)(0xc00013afb0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.239763       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-forward: Starting Controller
I1012 15:04:51.239773       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc0008a3840), run:(*generic.backwardController)(0xc000321ec0), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.239780       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-backward: Starting Controller
I1012 15:04:51.240058       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-forward: Starting workers worker count 1
I1012 15:04:51.240598       1 loghelper.go:53] Start pods sync controller
I1012 15:04:51.241486       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-forward: Starting EventSource source &source.Kind{Type:(*v1.Endpoints)(0xc00062d540), cache:(*cache.informerCache)(0xc00013afb0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.241573       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc0009d6ac0), run:(*generic.forwardController)(0xc0003bf980), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.241647       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-forward: Starting Controller
I1012 15:04:51.241779       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-forward: Starting workers worker count 1
I1012 15:04:51.242203       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: Starting EventSource source &source.Kind{Type:(*v1.Endpoints)(0xc00062d900), cache:(*cache.informerCache)(0xc000286bd0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.242285       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc0009d6b60), run:(*generic.backwardController)(0xc0003bfaa0), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.242334       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: Starting Controller
I1012 15:04:51.246408       1 loghelper.go:53] Start persistentvolumeclaims sync controller
I1012 15:04:51.246679       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-forward: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc000581000), cache:(*cache.informerCache)(0xc00013afb0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.246714       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc001187ed0), run:(*generic.forwardController)(0xc0011d3e60), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.246719       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-forward: Starting Controller
I1012 15:04:51.246836       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-forward: Starting workers worker count 1
I1012 15:04:51.247132       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-backward: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc000581400), cache:(*cache.informerCache)(0xc000286bd0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.247228       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc001187f70), run:(*generic.backwardController)(0xc0011d3f80), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.247319       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-backward: Starting Controller
I1012 15:04:51.248180       1 loghelper.go:53] Start storageclasses sync controller
I1012 15:04:51.248231       1 loghelper.go:53] Start services sync controller
I1012 15:04:51.248308       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting EventSource source &source.Kind{Type:(*v1.PersistentVolumeClaim)(0xc0003eefc0), cache:(*cache.informerCache)(0xc00013afb0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.248349       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc00105a350), run:(*generic.forwardController)(0xc001053080), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.248355       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting Controller
I1012 15:04:51.248430       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting workers worker count 1
I1012 15:04:51.248495       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting EventSource source &source.Kind{Type:(*v1.PersistentVolumeClaim)(0xc0003ef180), cache:(*cache.informerCache)(0xc000286bd0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.248520       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc00105a3f0), run:(*generic.backwardController)(0xc0010531a0), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.248524       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting Controller
I1012 15:04:51.251541       1 loghelper.go:53] Start events sync controller
I1012 15:04:51.251648       1 loghelper.go:53] Start ingresses sync controller
I1012 15:04:51.251850       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-forward: Starting EventSource source &source.Kind{Type:(*v1.Service)(0xc000570000), cache:(*cache.informerCache)(0xc00013afb0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.251946       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc001108190), run:(*generic.forwardController)(0xc00011db60), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.252019       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-forward: Starting Controller
I1012 15:04:51.252081       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-forward: Starting workers worker count 1
I1012 15:04:51.251954       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-backward: Starting EventSource source &source.Kind{Type:(*v1.Service)(0xc000570780), cache:(*cache.informerCache)(0xc000286bd0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.252166       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc0011083d0), run:(*generic.backwardController)(0xc00011dec0), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.252203       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-backward: Starting Controller
I1012 15:04:51.251899       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Event: controller: event-backward: Starting EventSource source &source.Kind{Type:(*v1.Event)(0xc000570c80), cache:(*cache.informerCache)(0xc000286bd0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.252301       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Event: controller: event-backward: Starting Controller
I1012 15:04:51.253307       1 loghelper.go:53] Start nodes sync controller
I1012 15:04:51.253344       1 loghelper.go:53] Start persistentvolumes sync controller
I1012 15:04:51.253551       1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting EventSource source &source.Kind{Type:(*v1.Ingress)(0xc000960a80), cache:(*cache.informerCache)(0xc00013afb0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.253644       1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000e13c70), run:(*generic.forwardController)(0xc00013c300), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.253717       1 controller.go:173] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting Controller
I1012 15:04:51.253883       1 controller.go:207] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting workers worker count 1
I1012 15:04:51.254023       1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting EventSource source &source.Kind{Type:(*v1.Ingress)(0xc0000ec000), cache:(*cache.informerCache)(0xc000286bd0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.254120       1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000e13d20), run:(*generic.backwardController)(0xc00013c420), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.254157       1 controller.go:173] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting Controller
I1012 15:04:51.254442       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Node: controller: fake-node-syncer: Starting EventSource source &source.Kind{Type:(*v1.Node)(0xc00015ac00), cache:(*cache.informerCache)(0xc00013afb0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.255550       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Node: controller: fake-node-syncer: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000e13df0), run:(*generic.fakeSyncer)(0xc000970cc0), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.255621       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Node: controller: fake-node-syncer: Starting Controller
I1012 15:04:51.254538       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting EventSource source &source.Kind{Type:(*v1.PersistentVolume)(0xc0000d6000), cache:(*cache.informerCache)(0xc00013afb0), started:(chan error)(nil), startCancel:(func())(nil)}
I1012 15:04:51.255840       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000e13eb0), run:(*generic.fakeSyncer)(0xc000970f30), stopChan:(<-chan struct {})(0xc000c0b080)}
I1012 15:04:51.255846       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting Controller
I1012 15:04:51.333883       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-backward: Starting workers worker count 1
I1012 15:04:51.340766       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-backward: Starting workers worker count 1
I1012 15:04:51.343574       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: Starting workers worker count 1
I1012 15:04:51.347634       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-backward: Starting workers worker count 1
I1012 15:04:51.349687       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting workers worker count 1
E1012 15:04:51.350027       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again
I1012 15:04:51.352533       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-backward: Starting workers worker count 1
I1012 15:04:51.352606       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Event: controller: event-backward: Starting workers worker count 1
I1012 15:04:51.355392       1 controller.go:207] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting workers worker count 1
I1012 15:04:51.356682       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting workers worker count 1
I1012 15:04:51.356709       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Node: controller: fake-node-syncer: Starting workers worker count 1
I1012 15:04:51.467364       1 server.go:172] Starting tls proxy server at 0.0.0.0:8443
I1012 15:04:51.469203       1 dynamic_cafile_content.go:167] Starting request-header::/data/server/tls/request-header-ca.crt
I1012 15:04:51.469277       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/data/server/tls/client-ca.crt
I1012 15:04:51.469341       1 syncer.go:170] Generating serving cert for service ips: [10.101.159.59]
I1012 15:04:51.470288       1 secure_serving.go:197] Serving securely on [::]:8443
I1012 15:04:51.470418       1 tlsconfig.go:240] Starting DynamicServingCertificateController
W1012 15:04:52.691342       1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: watch of *v1.Secret ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Unexpected watch close - watch lasted less than a second and no items received
W1012 15:04:52.691344       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ValidatingWebhookConfiguration ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
W1012 15:04:52.691446       1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: watch of *v1.Endpoints ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Unexpected watch close - watch lasted less than a second and no items received
W1012 15:04:52.691460       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
W1012 15:04:52.691610       1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: watch of *v1.Pod ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Unexpected watch close - watch lasted less than a second and no items received
W1012 15:04:52.691681       1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: watch of *v1.ConfigMap ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Unexpected watch close - watch lasted less than a second and no items received
E1012 15:04:53.102726       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:53.108754       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:53.119813       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:53.140599       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:53.181179       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:53.262277       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:53.425366       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:53.532468       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1:6444/api/v1/pods?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:53.662574       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://127.0.0.1:6444/api/v1/endpoints?resourceVersion=339": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:53.747028       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:53.752853       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1:6444/api/v1/services?resourceVersion=336": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:53.834186       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://127.0.0.1:6444/api/v1/secrets?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:53.873040       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://127.0.0.1:6444/api/v1/configmaps?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:53.996482       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:54.160220       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ValidatingWebhookConfiguration: failed to list *v1.ValidatingWebhookConfiguration: Get "https://127.0.0.1:6444/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:54.388696       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:55.289936       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1:6444/api/v1/pods?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:55.614293       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://127.0.0.1:6444/api/v1/configmaps?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:55.669442       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:55.992188       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:55.997830       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1:6444/api/v1/services?resourceVersion=336": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:56.117430       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://127.0.0.1:6444/api/v1/secrets?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:56.127775       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ValidatingWebhookConfiguration: failed to list *v1.ValidatingWebhookConfiguration: Get "https://127.0.0.1:6444/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:56.654294       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://127.0.0.1:6444/api/v1/endpoints?resourceVersion=339": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:57.992654       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:58.230981       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:59.846125       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ValidatingWebhookConfiguration: failed to list *v1.ValidatingWebhookConfiguration: Get "https://127.0.0.1:6444/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:59.956843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1:6444/api/v1/services?resourceVersion=336": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:04:59.993107       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:00.207718       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1:6444/api/v1/pods?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:00.260487       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://127.0.0.1:6444/api/v1/configmaps?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:01.992446       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:02.268604       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://127.0.0.1:6444/api/v1/secrets?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:02.850368       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://127.0.0.1:6444/api/v1/endpoints?resourceVersion=339": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:03.353710       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:03.992483       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:05.991124       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:07.991520       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:09.991984       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:10.579794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1:6444/api/v1/services?resourceVersion=336": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:11.326902       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ValidatingWebhookConfiguration: failed to list *v1.ValidatingWebhookConfiguration: Get "https://127.0.0.1:6444/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:11.672467       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://127.0.0.1:6444/api/v1/configmaps?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:11.991788       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:12.265669       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://127.0.0.1:6444/api/v1/secrets?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:12.588140       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1:6444/api/v1/pods?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:12.605975       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://127.0.0.1:6444/api/v1/endpoints?resourceVersion=339": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:13.595859       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:13.994471       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:15.992017       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:17.992640       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:19.991677       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:21.991152       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:23.991252       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:25.997767       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:26.734839       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://127.0.0.1:6444/api/v1/pods?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:27.991113       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:28.835004       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1:6444/api/v1/services?resourceVersion=336": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:29.536320       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://127.0.0.1:6444/api/v1/configmaps?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:29.809134       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://127.0.0.1:6444/api/v1/endpoints?resourceVersion=339": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:29.993975       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:30.901043       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ValidatingWebhookConfiguration: failed to list *v1.ValidatingWebhookConfiguration: Get "https://127.0.0.1:6444/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:31.991537       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:33.991569       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:34.077605       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:35.584723       1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://127.0.0.1:6444/api/v1/secrets?resourceVersion=329": dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:35.990862       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:35.990906       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:37.993084       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:37.993111       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:39.990875       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:39.990945       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:41.992767       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:41.992813       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:43.998323       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:43.999397       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:45.993585       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:45.993651       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:47.998311       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:47.998367       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:49.997122       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:49.998339       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:51.994414       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:51.995208       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:52.000536       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:53.990969       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:53.991004       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
E1012 15:05:53.992925       1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused
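
(Two things stand out in this log: a single RBAC failure, where the service account vc-vcluster-1 is denied creating events in host-namespace-1, and the wall of connection-refused errors against 127.0.0.1:6444, which per the k3s flags further down is the API server inside the neighbouring vcluster container. The syncer crash therefore looks like a symptom of the k3s container going down. The RBAC part can be verified independently, for example:)

kubectl auth can-i create events \
  --as=system:serviceaccount:host-namespace-1:vc-vcluster-1 \
  -n host-namespace-1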
FabianKramm commented 2 years ago

@anasalloush thanks for creating this issue! Could you also post the vcluster logs in here?

sam-sre commented 2 years ago

Hi @FabianKramm

k logs vcluster-1-0 -c vcluster -n host-namespace-1

time="2021-10-13T07:42:45Z" level=info msg="Starting k3s v1.22.1-rc1+k3s1 (58315fe1)"
time="2021-10-13T07:42:45Z" level=info msg="Cluster bootstrap already complete"
time="2021-10-13T07:42:45Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-10-13T07:42:45Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-10-13T07:42:45Z" level=info msg="Database tables and indexes are up to date"
time="2021-10-13T07:42:45Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-10-13T07:42:45Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/data/server/tls/temporary-certs --client-ca-file=/data/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/data/server/tls/server-ca.crt --kubelet-client-certificate=/data/server/tls/client-kube-apiserver.crt --kubelet-client-key=/data/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/data/server/tls/client-auth-proxy.crt --proxy-client-key-file=/data/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/data/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/server/tls/service.key --service-account-signing-key-file=/data/server/tls/service.key --service-cluster-ip-range=10.96.0.0/12 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/data/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/data/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I1013 07:42:45.087893       1 server.go:581] external host was not specified, using 172.17.0.5
I1013 07:42:45.088102       1 server.go:175] Version: v1.22.1-rc1+k3s1
I1013 07:42:45.092577       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1013 07:42:45.092606       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1013 07:42:45.092759       1 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I1013 07:42:45.094213       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1013 07:42:45.094258       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W1013 07:42:45.109414       1 genericapiserver.go:455] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I1013 07:42:45.110292       1 instance.go:278] Using reconciler: lease
I1013 07:42:45.149661       1 rest.go:130] the default service ipfamily for this cluster is: IPv4
W1013 07:42:45.503441       1 genericapiserver.go:455] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W1013 07:42:45.504979       1 genericapiserver.go:455] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
W1013 07:42:45.514261       1 genericapiserver.go:455] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W1013 07:42:45.515374       1 genericapiserver.go:455] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W1013 07:42:45.519266       1 genericapiserver.go:455] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W1013 07:42:45.521243       1 genericapiserver.go:455] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1013 07:42:45.525727       1 genericapiserver.go:455] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W1013 07:42:45.525751       1 genericapiserver.go:455] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1013 07:42:45.526779       1 genericapiserver.go:455] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W1013 07:42:45.526802       1 genericapiserver.go:455] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1013 07:42:45.529570       1 genericapiserver.go:455] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1013 07:42:45.531153       1 genericapiserver.go:455] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W1013 07:42:45.534919       1 genericapiserver.go:455] Skipping API apps/v1beta2 because it has no resources.
W1013 07:42:45.534943       1 genericapiserver.go:455] Skipping API apps/v1beta1 because it has no resources.
W1013 07:42:45.536624       1 genericapiserver.go:455] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
I1013 07:42:45.540135       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1013 07:42:45.540157       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W1013 07:42:45.546512       1 genericapiserver.go:455] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
time="2021-10-13T07:42:45Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/data/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/data/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/data/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/data/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/data/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/data/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/data/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/data/server/tls/client-ca.key --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle --kubeconfig=/data/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/data/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/data/server/tls/service.key --use-service-account-credentials=true"
time="2021-10-13T07:42:45Z" level=info msg="Waiting for API server to become available"
time="2021-10-13T07:42:45Z" level=info msg="Node token is available at /data/server/token"
time="2021-10-13T07:42:45Z" level=info msg="To join node to cluster: k3s agent -s https://172.17.0.5:6443 -t ${NODE_TOKEN}"
time="2021-10-13T07:42:45Z" level=info msg="Wrote kubeconfig /k3s-config/kube-config.yaml"
time="2021-10-13T07:42:45Z" level=info msg="Run: k3s kubectl"
I1013 07:42:46.715416       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/data/server/tls/request-header-ca.crt"
I1013 07:42:46.715456       1 secure_serving.go:266] Serving securely on 127.0.0.1:6444
I1013 07:42:46.715472       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/data/server/tls/client-ca.crt"
I1013 07:42:46.715486       1 dynamic_serving_content.go:129] "Starting controller" name="serving-cert::/data/server/tls/serving-kube-apiserver.crt::/data/server/tls/serving-kube-apiserver.key"
I1013 07:42:46.715507       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1013 07:42:46.715534       1 available_controller.go:491] Starting AvailableConditionController
I1013 07:42:46.715538       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1013 07:42:46.715953       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1013 07:42:46.715973       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1013 07:42:46.716024       1 autoregister_controller.go:141] Starting autoregister controller
I1013 07:42:46.716030       1 cache.go:32] Waiting for caches to sync for autoregister controller
I1013 07:42:46.716083       1 controller.go:83] Starting OpenAPI AggregationController
I1013 07:42:46.716177       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I1013 07:42:46.716209       1 controller.go:85] Starting OpenAPI controller
I1013 07:42:46.716219       1 naming_controller.go:291] Starting NamingConditionController
I1013 07:42:46.716226       1 establishing_controller.go:76] Starting EstablishingController
I1013 07:42:46.716233       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1013 07:42:46.716264       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1013 07:42:46.716404       1 crd_finalizer.go:266] Starting CRDFinalizer
I1013 07:42:46.717922       1 apf_controller.go:299] Starting API Priority and Fairness config controller
I1013 07:42:46.718265       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1013 07:42:46.718288       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I1013 07:42:46.718334       1 dynamic_serving_content.go:129] "Starting controller" name="aggregator-proxy-cert::/data/server/tls/client-auth-proxy.crt::/data/server/tls/client-auth-proxy.key"
I1013 07:42:46.719087       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1013 07:42:46.719118       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I1013 07:42:46.721584       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/data/server/tls/client-ca.crt"
I1013 07:42:46.721632       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/data/server/tls/request-header-ca.crt"
W1013 07:42:46.729978       1 controller.go:292] Resetting master service "kubernetes" to &v1.Service{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4673ebf3-0137-403c-9d39-3cf0b1221b15", ResourceVersion:"443", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769646606, loc:(*time.Location)(0x7fbe9e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"component":"apiserver", "provider":"kubernetes"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"vcluster", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00334f518), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00334f548), Subresource:""}}}, Spec:v1.ServiceSpec{Ports:[]v1.ServicePort{v1.ServicePort{Name:"https", Protocol:"TCP", AppProtocol:(*string)(nil), Port:443, TargetPort:intstr.IntOrString{Type:0, IntVal:6443, StrVal:""}, NodePort:0}}, Selector:map[string]string(nil), ClusterIP:"10.101.159.59", ClusterIPs:[]string{"10.101.159.59"}, Type:"ClusterIP", ExternalIPs:[]string(nil), SessionAffinity:"None", LoadBalancerIP:"", LoadBalancerSourceRanges:[]string(nil), ExternalName:"", ExternalTrafficPolicy:"", HealthCheckNodePort:0, PublishNotReadyAddresses:false, SessionAffinityConfig:(*v1.SessionAffinityConfig)(nil), IPFamilies:[]v1.IPFamily{"IPv4"}, IPFamilyPolicy:(*v1.IPFamilyPolicyType)(0xc00865be50), AllocateLoadBalancerNodePorts:(*bool)(nil), LoadBalancerClass:(*string)(nil), InternalTrafficPolicy:(*v1.ServiceInternalTrafficPolicyType)(0xc00865be90)}, Status:v1.ServiceStatus{LoadBalancer:v1.LoadBalancerStatus{Ingress:[]v1.LoadBalancerIngress(nil)}, Conditions:[]v1.Condition(nil)}}
W1013 07:42:46.747201       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [172.17.0.5]
I1013 07:42:46.748338       1 controller.go:611] quota admission added evaluator for: endpoints
I1013 07:42:46.752568       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
E1013 07:42:46.764289       1 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I1013 07:42:46.792920       1 shared_informer.go:247] Caches are synced for node_authorizer 
I1013 07:42:46.815977       1 cache.go:39] Caches are synced for AvailableConditionController controller
I1013 07:42:46.816135       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1013 07:42:46.816201       1 cache.go:39] Caches are synced for autoregister controller
I1013 07:42:46.818356       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
I1013 07:42:46.818465       1 apf_controller.go:304] Running API Priority and Fairness config worker
I1013 07:42:46.819223       1 shared_informer.go:247] Caches are synced for crd-autoregister 
I1013 07:42:47.715319       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1013 07:42:47.727627       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
W1013 07:42:47.993737       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [172.17.0.5]
I1013 07:42:48.153962       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
time="2021-10-13T07:42:48Z" level=info msg="Kube API server is now running"
time="2021-10-13T07:42:48Z" level=info msg="k3s is up and running"
time="2021-10-13T07:42:48Z" level=warning msg="Deploy controller node name is empty or too long, and will not be tracked via server side apply field management"
time="2021-10-13T07:42:48Z" level=info msg="Applying CRD addons.k3s.cattle.io"
time="2021-10-13T07:42:48Z" level=info msg="Applying CRD helmcharts.helm.cattle.io"
time="2021-10-13T07:42:48Z" level=info msg="Applying CRD helmchartconfigs.helm.cattle.io"
time="2021-10-13T07:42:48Z" level=info msg="Writing static file: /data/server/static/charts/traefik-10.3.0.tgz"
time="2021-10-13T07:42:48Z" level=info msg="Writing static file: /data/server/static/charts/traefik-crd-10.3.0.tgz"
time="2021-10-13T07:42:48Z" level=info msg="Writing manifest: /data/server/manifests/coredns.yaml"
time="2021-10-13T07:42:48Z" level=info msg="Writing manifest: /data/server/manifests/rolebindings.yaml"
time="2021-10-13T07:42:48Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
time="2021-10-13T07:42:48Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"32fe4dce-6389-4d44-963e-2b186e5ea0b5\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"225\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/data/server/manifests/coredns.yaml\""
time="2021-10-13T07:42:48Z" level=info msg="Cluster dns configmap already exists"
I1013 07:42:48.946001       1 controller.go:611] quota admission added evaluator for: deployments.apps
time="2021-10-13T07:42:48Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"32fe4dce-6389-4d44-963e-2b186e5ea0b5\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"225\", FieldPath:\"\"}): type: 'Warning' reason: 'ApplyManifestFailed' Applying manifest at \"/data/server/manifests/coredns.yaml\" failed: failed to update kube-system/kube-dns /v1, Kind=Service for  kube-system/coredns: Service \"kube-dns\" is invalid: spec.clusterIPs[0]: Invalid value: []string{\"10.96.0.10\"}: may not change once set"
time="2021-10-13T07:42:48Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rolebindings\", UID:\"2e1b7d72-bf81-48b6-a38c-adc37b735d33\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"235\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/data/server/manifests/rolebindings.yaml\""
time="2021-10-13T07:42:48Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rolebindings\", UID:\"2e1b7d72-bf81-48b6-a38c-adc37b735d33\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"235\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/data/server/manifests/rolebindings.yaml\""
I1013 07:42:48.974171       1 controller.go:611] quota admission added evaluator for: addons.k3s.cattle.io
time="2021-10-13T07:42:48Z" level=error msg="Failed to process config: failed to process /data/server/manifests/coredns.yaml: failed to update kube-system/kube-dns /v1, Kind=Service for  kube-system/coredns: Service \"kube-dns\" is invalid: spec.clusterIPs[0]: Invalid value: []string{\"10.96.0.10\"}: may not change once set"
time="2021-10-13T07:42:49Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
time="2021-10-13T07:42:49Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
time="2021-10-13T07:42:49Z" level=info msg="Starting apps/v1, Kind=DaemonSet controller"
time="2021-10-13T07:42:49Z" level=info msg="Starting apps/v1, Kind=Deployment controller"
time="2021-10-13T07:42:49Z" level=info msg="Starting rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding controller"
time="2021-10-13T07:42:49Z" level=info msg="Starting batch/v1, Kind=Job controller"
time="2021-10-13T07:42:49Z" level=info msg="Starting /v1, Kind=Node controller"
time="2021-10-13T07:42:49Z" level=info msg="Starting /v1, Kind=ConfigMap controller"
time="2021-10-13T07:42:49Z" level=info msg="Starting /v1, Kind=ServiceAccount controller"
time="2021-10-13T07:42:49Z" level=info msg="Starting /v1, Kind=Pod controller"
time="2021-10-13T07:42:49Z" level=info msg="Starting /v1, Kind=Service controller"
time="2021-10-13T07:42:49Z" level=info msg="Starting /v1, Kind=Endpoints controller"
I1013 07:42:49.175145       1 serving.go:354] Generated self-signed cert in-memory
time="2021-10-13T07:42:49Z" level=info msg="Starting /v1, Kind=Secret controller"
W1013 07:42:49.363726       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:host-namespace-1:vc-vcluster-1" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
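
Two observations on these logs, offered as guesses rather than a confirmed root cause. First, the coredns apply keeps failing with Service "kube-dns" is invalid ... may not change once set; one commonly discussed cause for this symptom is the vcluster's --service-cidr (10.96.0.0/12 above) not matching the host cluster's actual service CIDR, so that seems worth verifying. Second, the final warning prints its own suggested fix: a rolebinding on the host cluster that lets the vc-vcluster-1 service account read the extension-apiserver-authentication configmap in kube-system. A minimal sketch of both, assuming a kubeadm/minikube-style host where the kube-apiserver runs as a static pod labeled component=kube-apiserver (the rolebinding name vcluster-1-auth-reader is made up):

# 1) Discover the host cluster's service CIDR from the kube-apiserver flags
#    (assumes a kubeadm/minikube-style static pod with this label)
kubectl -n kube-system get pod -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep service-cluster-ip-range

# 2) Grant the syncer's host service account the access the warning above asks for
#    (rolebinding name is hypothetical; service account name is from this log)
kubectl create rolebinding vcluster-1-auth-reader -n kube-system \
  --role=extension-apiserver-authentication-reader \
  --serviceaccount=host-namespace-1:vc-vcluster-1

If the CIDR reported in step 1 differs from 10.96.0.0/12, recreating the vcluster with a matching --service-cidr would be the first thing to try.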