loft-sh / vcluster

vCluster - Create fully functional virtual Kubernetes clusters - Each vcluster runs inside a namespace of the underlying k8s cluster. It's cheaper than creating separate full-blown clusters and it offers better multi-tenancy and isolation than regular namespaces.
https://www.vcluster.com
Apache License 2.0

vCluster Pods are in Pending / PVs are not created #145

Open sam-sre opened 3 years ago

sam-sre commented 3 years ago

My Environment:

I followed the quick guide steps to create a vCluster as described here, but the Pods were hanging in the Pending state because no PVs were created. As I understood from the docs, the --fake-persistent-volumes option is true by default, meaning there is no need to create PVs/PVCs manually, right?
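
For reference, what I ran was roughly the following (paraphrased from the quick start, so the exact flags may differ slightly):

vcluster create vcluster-1 -n host-namespace-1
kubectl get pods -n host-namespace-1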

Here are the describe commands for the pods and their PVCs:

kubectl describe pod/vcluster-1-0 -n host-namespace-1

Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  9s (x7 over 5m14s)  default-scheduler  0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.

kubectl describe pvc data-vcluster-1-0 -n host-namespace-1

Name:          data-vcluster-1-0
Namespace:     host-namespace-1
StorageClass:
Status:        Pending
Volume:
Labels:        app=vcluster
               release=vcluster-1
Annotations:
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       vcluster-1-0
Events:
  Type    Reason         Age                   From                         Message
  ----    ------         ----                  ----                         -------
  Normal  FailedBinding  14s (x26 over 6m16s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

FabianKramm commented 3 years ago

@anasalloush thanks for creating this issue! Seems like your cluster doesn't have a default storage class or is unable to provision persistent volumes. What kind of kubernetes cluster are you using? You can disable persistent storage by using the following values.yaml:

storage:
  persistence: false

Alternatively you can specify your custom storage class with:

storage:
  className: my-storage-class

and then create the vcluster with:

vcluster create test -n test -f values.yaml
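
You can check which storage classes the cluster has (and whether one of them is marked as default) with:

kubectl get storageclass
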
sam-sre commented 3 years ago

@FabianKramm Many thanks for the quick reply. I deployed my cluster with Kubeadm, and Calico is the CNI.

Yes, I already had an NFS provisioner and a StorageClass, but it wasn't the default one. After setting it as the default StorageClass, the PVCs are now bound.
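
For reference, this is roughly how I made the existing class the default (standard is-default-class annotation; nfs-client is the class name in my setup):

kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'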

It would be helpful if you added a note about this in the docs. Thanks ^^

Now there seems to be another problem with the readiness probe of the syncer container. The Pod is in a continuous CrashLoopBackOff state.

The error shown is:

Readiness probe failed: Get "https://192.168.2.71:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

192.168.2.71 is the vcluster-1-0 Pod IP
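
As a rough way to poke the readiness endpoint directly from my workstation (just a sketch, assuming curl is installed; -k because the serving cert is self-signed):

kubectl port-forward pod/vcluster-1-0 8443:8443 -n host-namespace-1
curl -k https://localhost:8443/readyz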

Is there a specific CIDR that should be used for Pods?

Here is the complete describe output:

kd po vcluster-1-0 -n host-namespace-1
Name:         vcluster-1-0
Namespace:    host-namespace-1
Priority:     0
Node:         kube2/172.16.16.101
Start Time:   Mon, 11 Oct 2021 01:43:40 -0700
Labels:       app=vcluster
              controller-revision-hash=vcluster-1-8c4df47d5
              release=vcluster-1
              statefulset.kubernetes.io/pod-name=vcluster-1-0
Annotations:  cni.projectcalico.org/containerID: 094fefc85eb78c0cfc75de701907f07d8b3e65be47cbac6ddf315b99d6fd0735
              cni.projectcalico.org/podIP: 192.168.2.71/32
              cni.projectcalico.org/podIPs: 192.168.2.71/32
Status:       Running
IP:           192.168.2.71
IPs:
  IP:           192.168.2.71
Controlled By:  StatefulSet/vcluster-1
Containers:
  vcluster:
    Container ID:  containerd://4c5d35881f47a3554efe7dcf27b80bcbcca293c4955f00992e51fc0f968e2e3c
    Image:         rancher/k3s:v1.20.9-k3s1
    Image ID:      docker.io/rancher/k3s@sha256:a238676be24f6f353eee4396d0312dc78c67678d7e540fe73a2db435a68361c6
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/k3s
    Args:
      server
      --write-kubeconfig=/k3s-config/kube-config.yaml
      --data-dir=/data
      --disable=traefik,servicelb,metrics-server,local-storage
      --disable-network-policy
      --disable-agent
      --disable-scheduler
      --disable-cloud-controller
      --flannel-backend=none
      --kube-controller-manager-arg=controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle
      --service-cidr=10.96.0.0/12
    State:          Running
      Started:      Mon, 11 Oct 2021 02:37:04 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Mon, 11 Oct 2021 02:28:59 -0700
      Finished:     Mon, 11 Oct 2021 02:32:03 -0700
    Ready:          True
    Restart Count:  9
    Limits:
      memory:  2Gi
    Requests:
      cpu:        200m
      memory:     256Mi
    Environment:  <none>
    Mounts:
      /data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from vc-vcluster-1-token-9nqkl (ro)
  syncer:
    Container ID:  containerd://c91d412602f1b2f25360077aaa19880109d8501c57011fdd0896174283c01952
    Image:         loftsh/vcluster:0.4.1
    Image ID:      docker.io/loftsh/vcluster@sha256:a0cfe246a6de94e0ee67d4cbad4fbd6a4136cdfe789571d46bb09853d476705d
    Port:          <none>
    Host Port:     <none>
    Args:
      --service-name=vcluster-1
      --suffix=vcluster-1
      --owning-statefulset=vcluster-1
      --out-kube-config-secret=vc-vcluster-1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Mon, 11 Oct 2021 04:12:41 -0700
      Finished:     Mon, 11 Oct 2021 04:14:01 -0700
    Ready:          False
    Restart Count:  24
    Limits:
      memory:  1Gi
    Requests:
      cpu:        100m
      memory:     128Mi
    Liveness:     http-get https://:8443/healthz delay=60s timeout=1s period=2s #success=1 #failure=10
    Readiness:    http-get https://:8443/readyz delay=0s timeout=1s period=2s #success=1 #failure=30
    Environment:  <none>
    Mounts:
      /data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from vc-vcluster-1-token-9nqkl (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-vcluster-1-0
    ReadOnly:   false
  vc-vcluster-1-token-9nqkl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  vc-vcluster-1-token-9nqkl
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From     Message
  ----     ------     ----                  ----     -------
  Warning  Unhealthy  135m (x71 over 155m)  kubelet  Readiness probe failed: Get "https://192.168.2.71:8443/readyz": dial tcp 192.168.2.71:8443: connect: connection refused
  Normal   Pulled     110m (x8 over 153m)   kubelet  Container image "rancher/k3s:v1.20.9-k3s1" already present on machine
  Warning  Unhealthy  85m (x293 over 143m)  kubelet  Readiness probe failed: HTTP probe failed with statuscode: 403
  Normal   Pulled     80m (x19 over 155m)   kubelet  Container image "loftsh/vcluster:0.4.1" already present on machine
  Warning  Unhealthy  70m (x111 over 79m)   kubelet  Readiness probe failed: Get "https://192.168.2.71:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  6m36s (x44 over 79m)  kubelet  Liveness probe failed: Get "https://192.168.2.71:8443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  BackOff    91s (x261 over 155m)  kubelet  Back-off restarting failed container

Adding the syncer logs:

k logs vcluster-1-0 -c syncer -n host-namespace-1
I1011 12:01:20.171753       1 main.go:234] Using physical cluster at https://10.96.0.1:443
I1011 12:01:20.232147       1 main.go:265] Can connect to virtual cluster with version v1.20.9+k3s1
I1011 12:01:20.350555       1 leaderelection.go:243] attempting to acquire leader lease host-namespace-1/vcluster-vcluster-1-controller...
I1011 12:01:20.363651       1 leaderelection.go:253] successfully acquired lease host-namespace-1/vcluster-vcluster-1-controller
I1011 12:01:20.363791       1 plugins.go:158] Loaded 1 mutating admission controller(s) successfully in the following order: MutatingAdmissionWebhook.
I1011 12:01:20.363819       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
I1011 12:01:20.363842       1 leaderelection.go:68] Acquired leadership and run vcluster in leader mode
I1011 12:01:20.364490       1 leaderelection.go:31] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"host-namespace-1", Name:"vcluster-vcluster-1-controller", UID:"cfe6badd-ec90-4b9c-8718-3bd2f9233a70", APIVersion:"v1", ResourceVersion:"16559", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' vcluster-1-0-external-vcluster-controller became leader
E1011 12:01:20.369369       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"vcluster-vcluster-1-controller.16acf837198460db", GenerateName:"", Namespace:"host-namespace-1", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"ConfigMap", Namespace:"host-namespace-1", Name:"vcluster-vcluster-1-controller", UID:"cfe6badd-ec90-4b9c-8718-3bd2f9233a70", APIVersion:"v1", ResourceVersion:"16559", FieldPath:""}, Reason:"LeaderElection", Message:"vcluster-1-0-external-vcluster-controller became leader", Source:v1.EventSource{Component:"vcluster", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc05127c415acc0db, ext:2275066653, loc:(*time.Location)(0x3221280)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc05127c415acc0db, ext:2275066653, loc:(*time.Location)(0x3221280)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:serviceaccount:host-namespace-1:vc-vcluster-1" cannot create resource "events" in API group "" in the namespace "host-namespace-1"' (will not retry!)
I1011 12:01:20.734289       1 server.go:172] Starting tls proxy server at 0.0.0.0:8443
I1011 12:01:20.735256       1 dynamic_cafile_content.go:167] Starting request-header::/data/server/tls/request-header-ca.crt
I1011 12:01:20.735280       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/data/server/tls/client-ca.crt
I1011 12:01:20.735602       1 syncer.go:170] Generating serving cert for service ips: [10.104.108.28 172.16.16.11]
I1011 12:01:20.738490       1 secure_serving.go:197] Serving securely on [::]:8443
I1011 12:01:20.738525       1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1011 12:01:22.762642       1 handler.go:48] Error while proxying request: context canceled
E1011 12:01:24.762547       1 handler.go:48] Error while proxying request: context canceled
E1011 12:01:26.762484       1 handler.go:48] Error while proxying request: context canceled
E1011 12:01:28.762397       1 handler.go:48] Error while proxying request: context canceled
E1011 12:01:30.975189       1 handler.go:48] Error while proxying request: context canceled
E1011 12:01:32.762559       1 handler.go:48] Error while proxying request: context canceled
E1011 12:01:34.767130       1 handler.go:48] Error while proxying request: context canceled
E1011 12:01:38.763014       1 handler.go:48] Error while proxying request: context canceled
E1011 12:01:40.762788       1 handler.go:48] Error while proxying request: context canceled
I1011 12:01:41.560061       1 loghelper.go:53] Start storageclasses sync controller
I1011 12:01:41.560153       1 loghelper.go:53] Start priorityclasses sync controller
I1011 12:01:41.560173       1 loghelper.go:53] Start endpoints sync controller
I1011 12:01:41.622506       1 loghelper.go:53] Start ingresses sync controller
I1011 12:01:41.622585       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-forward: Starting EventSource source &source.Kind{Type:(*v1.Endpoints)(0xc0000ec140), cache:(*cache.informerCache)(0xc000436188), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.622725       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc00059d710), run:(*generic.forwardController)(0xc00101a3c0), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.622734       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-forward: Starting Controller
I1011 12:01:41.623181       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: Starting EventSource source &source.Kind{Type:(*v1.Endpoints)(0xc0000ec280), cache:(*cache.informerCache)(0xc000436080), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.623505       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-forward: Starting workers worker count 1
I1011 12:01:41.623949       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc00059d7c0), run:(*generic.backwardController)(0xc00101a4e0), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.624039       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: Starting Controller
I1011 12:01:41.648205       1 loghelper.go:53] Start nodes sync controller
I1011 12:01:41.648265       1 loghelper.go:53] Start persistentvolumes sync controller
I1011 12:01:41.648279       1 loghelper.go:53] Start events sync controller
I1011 12:01:41.648289       1 loghelper.go:53] Start persistentvolumeclaims sync controller
I1011 12:01:41.648816       1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting EventSource source &source.Kind{Type:(*v1.Ingress)(0xc00084d800), cache:(*cache.informerCache)(0xc000436080), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.649018       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting EventSource source &source.Kind{Type:(*v1.PersistentVolume)(0xc00004f900), cache:(*cache.informerCache)(0xc000436188), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.649046       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000c91800), run:(*generic.fakeSyncer)(0xc0009b3ec0), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.649293       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting Controller
I1011 12:01:41.648849       1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000c91620), run:(*generic.backwardController)(0xc00103de60), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.649570       1 controller.go:173] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting Controller
I1011 12:01:41.649165       1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting EventSource source &source.Kind{Type:(*v1.Ingress)(0xc00084d680), cache:(*cache.informerCache)(0xc000436188), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.650224       1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000c91570), run:(*generic.forwardController)(0xc00103dd40), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.650244       1 controller.go:173] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting Controller
I1011 12:01:41.649226       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Node: controller: fake-node-syncer: Starting EventSource source &source.Kind{Type:(*v1.Node)(0xc000e04c00), cache:(*cache.informerCache)(0xc000436188), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.650288       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Node: controller: fake-node-syncer: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000c916f0), run:(*generic.fakeSyncer)(0xc0009b3860), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.650299       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Node: controller: fake-node-syncer: Starting Controller
I1011 12:01:41.649260       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Event: controller: event-backward: Starting EventSource source &source.Kind{Type:(*v1.Event)(0xc0001ca280), cache:(*cache.informerCache)(0xc000436080), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.650346       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Event: controller: event-backward: Starting Controller
I1011 12:01:41.669804       1 loghelper.go:53] Start services sync controller
I1011 12:01:41.670706       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting EventSource source &source.Kind{Type:(*v1.PersistentVolumeClaim)(0xc000355180), cache:(*cache.informerCache)(0xc000436080), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.670781       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000a3aa50), run:(*generic.backwardController)(0xc00118ed80), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.670789       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting Controller
I1011 12:01:41.670707       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting EventSource source &source.Kind{Type:(*v1.PersistentVolumeClaim)(0xc000354fc0), cache:(*cache.informerCache)(0xc000436188), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.670903       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000a3a9b0), run:(*generic.forwardController)(0xc00118ec60), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.670917       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting Controller
I1011 12:01:41.671547       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting workers worker count 1
I1011 12:01:41.685988       1 loghelper.go:53] Start configmaps sync controller
I1011 12:01:41.686753       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-forward: Starting EventSource source &source.Kind{Type:(*v1.Service)(0xc000430780), cache:(*cache.informerCache)(0xc000436188), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.686963       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000a10420), run:(*generic.forwardController)(0xc0002c8ea0), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.686986       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-forward: Starting Controller
I1011 12:01:41.686786       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-backward: Starting EventSource source &source.Kind{Type:(*v1.Service)(0xc000431400), cache:(*cache.informerCache)(0xc000436080), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.687258       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000a104c0), run:(*generic.backwardController)(0xc0002c91a0), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.687264       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-backward: Starting Controller
I1011 12:01:41.687581       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-backward: Starting workers worker count 1
I1011 12:01:41.687787       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service-forward: Starting workers worker count 1
I1011 12:01:41.692959       1 loghelper.go:53] Start secrets sync controller
I1011 12:01:41.693161       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-forward: Starting EventSource source &source.Kind{Type:(*v1.ConfigMap)(0xc0006e4140), cache:(*cache.informerCache)(0xc000436188), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.693479       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000a11b30), run:(*generic.forwardController)(0xc000d611a0), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.693505       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-forward: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc00051f800), cache:(*cache.informerCache)(0xc000436188), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.693521       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-forward: Starting Controller
I1011 12:01:41.693855       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-forward: Starting workers worker count 1
I1011 12:01:41.694203       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-backward: Starting EventSource source &source.Kind{Type:(*v1.ConfigMap)(0xc0006e4280), cache:(*cache.informerCache)(0xc000436080), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.694615       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000a11c00), run:(*generic.backwardController)(0xc000d61320), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.694629       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-backward: Starting Controller
I1011 12:01:41.724976       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints-backward: Starting workers worker count 1
I1011 12:01:41.731858       1 loghelper.go:53] Start pods sync controller
I1011 12:01:41.732117       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-backward: Starting EventSource source &source.Kind{Type:(*v1.Secret)(0xc000464dc0), cache:(*cache.informerCache)(0xc000436080), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.732129       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc0009adf50), run:(*generic.backwardController)(0xc0002269c0), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.732134       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-backward: Starting Controller
I1011 12:01:41.731939       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-forward: Starting EventSource source &source.Kind{Type:(*v1.Secret)(0xc000464a00), cache:(*cache.informerCache)(0xc000436188), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.732228       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc0009ade50), run:(*generic.forwardController)(0xc000226720), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.732239       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-forward: Starting EventSource source &source.Kind{Type:(*v1.Ingress)(0xc000a45380), cache:(*cache.informerCache)(0xc000436188), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.732387       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-forward: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc000101800), cache:(*cache.informerCache)(0xc000436188), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.732398       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-forward: Starting Controller
I1011 12:01:41.732441       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-forward: Starting workers worker count 1
I1011 12:01:41.749714       1 controller.go:207] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting workers worker count 1
I1011 12:01:41.751257       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting workers worker count 1
I1011 12:01:41.751417       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Event: controller: event-backward: Starting workers worker count 1
I1011 12:01:41.753392       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Node: controller: fake-node-syncer: Starting workers worker count 1
I1011 12:01:41.753443       1 controller.go:207] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting workers worker count 1
I1011 12:01:41.771297       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting workers worker count 1
I1011 12:01:41.795178       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap-backward: Starting workers worker count 1
I1011 12:01:41.823202       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-forward: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc000b48000), cache:(*cache.informerCache)(0xc000436188), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.824186       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000350230), run:(*generic.forwardController)(0xc000ba20c0), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.824667       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-forward: Starting Controller
I1011 12:01:41.824751       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-forward: Starting workers worker count 1
I1011 12:01:41.824152       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-backward: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc000b48400), cache:(*cache.informerCache)(0xc000436080), started:(chan error)(nil), startCancel:(func())(nil)}
I1011 12:01:41.826062       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(*loghelper.logger)(0xc000350440), run:(*generic.backwardController)(0xc000ba2240), stopChan:(<-chan struct {})(0xc00014fe00)}
I1011 12:01:41.826602       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-backward: Starting Controller
I1011 12:01:41.833803       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret-backward: Starting workers worker count 1
I1011 12:01:41.925777       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod-backward: Starting workers worker count 1
E1011 12:01:42.763104       1 handler.go:48] Error while proxying request: context canceled
E1011 12:01:44.764737       1 handler.go:48] Error while proxying request: context canceled
E1011 12:01:48.764026       1 handler.go:48] Error while proxying request: context canceled
E1011 12:01:50.762662       1 handler.go:48] Error while proxying request: context canceled
E1011 12:01:52.763794       1 handler.go:48] Error while proxying request: context canceled
E1011 12:01:54.762427       1 handler.go:48] Error while proxying request: context canceled
E1011 12:01:56.762950       1 handler.go:48] Error while proxying request: context canceled
E1011 12:01:58.762725       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:00.766063       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:02.763138       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:04.764365       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:06.763898       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:08.764393       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:10.764881       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:12.763135       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:14.764577       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:16.932473       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:18.763986       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:19.652684       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:20.762902       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:21.645204       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:22.762606       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:23.645385       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:24.763769       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:25.645593       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:26.991434       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:27.647614       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:28.762893       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:29.645497       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:30.763195       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:31.647459       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:32.762145       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:33.651975       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:34.763660       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:35.656507       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:36.763570       1 handler.go:48] Error while proxying request: context canceled
E1011 12:02:37.651951       1 handler.go:48] Error while proxying request: context canceled

BR Anas

FabianKramm commented 3 years ago

@anasalloush thanks for the info! Would you mind adding the vcluster container logs as well? It seems like there might be a problem with the persistent storage, or it might be slow, which would cause problems with k3s.
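
Something along these lines should fetch them (same pod, just the vcluster container):

kubectl logs vcluster-1-0 -n host-namespace-1 -c vcluster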

sam-sre commented 3 years ago

@FabianKramm Thanks for the quick response.

Here is some extra information alongside the vcluster container logs.

kg nodes -o wide
NAME    STATUS   ROLES                  AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION    CONTAINER-RUNTIME
kube1   Ready    control-plane,master   11m   v1.20.11   172.16.16.100   <none>        Debian GNU/Linux 10 (buster)   4.19.0-16-amd64   containerd://1.4.11
kube2   Ready    <none>                 10m   v1.20.11   172.16.16.101   <none>        Debian GNU/Linux 10 (buster)   4.19.0-16-amd64   containerd://1.4.11

The PVC is bound:

NAMESPACE          NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
host-namespace-1   data-vcluster-1-0   Bound    pvc-1ae791aa-a22c-4a91-8c2a-59c3826a5d9e   5Gi        RWO            nfs-client     20s

The StorageClass is defined and set as the default:

NAME                   PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client (default)   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   5m36s

There is only one Pod running within the host-namespace-1 namespace; should there be more?

 kg po -n host-namespace-1
NAME           READY   STATUS             RESTARTS   AGE
vcluster-1-0   1/2     CrashLoopBackOff   5          3m23s

Logs from the vcluster container:

k logs vcluster-1-0 -c vcluster -n host-namespace-1
time="2021-10-11T17:55:46.996009376Z" level=info msg="Starting k3s v1.20.9+k3s1 (e365bc71)"
time="2021-10-11T17:55:46.998308864Z" level=info msg="Cluster bootstrap already complete"
time="2021-10-11T17:55:47.194104591Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-10-11T17:55:47.194225246Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-10-11T17:55:47.224771631Z" level=info msg="Database tables and indexes are up to date"
time="2021-10-11T17:55:47.280826825Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-10-11T17:55:47.298088612Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/data/server/tls/temporary-certs --client-ca-file=/data/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --feature-gates=ServiceAccountIssuerDiscovery=false --insecure-port=0 --kubelet-certificate-authority=/data/server/tls/server-ca.crt --kubelet-client-certificate=/data/server/tls/client-kube-apiserver.crt --kubelet-client-key=/data/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/data/server/tls/client-auth-proxy.crt --proxy-client-key-file=/data/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/data/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/server/tls/service.key --service-account-signing-key-file=/data/server/tls/service.key --service-cluster-ip-range=10.96.0.0/12 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/data/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/data/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I1011 17:55:47.300815       1 server.go:659] external host was not specified, using 192.168.2.67
I1011 17:55:47.306175       1 server.go:196] Version: v1.20.9+k3s1
I1011 17:55:47.317041       1 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I1011 17:55:47.318819       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1011 17:55:47.318877       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1011 17:55:47.322537       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1011 17:55:47.322595       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1011 17:55:47.364894       1 instance.go:289] Using reconciler: lease
I1011 17:55:47.460354       1 rest.go:131] the default service ipfamily for this cluster is: IPv4
I1011 17:55:47.914177       1 trace.go:205] Trace[1015611991]: "List etcd3" key:/persistentvolumes,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.385) (total time: 528ms):
Trace[1015611991]: [528.661763ms] [528.661763ms] END
W1011 17:55:47.964539       1 genericapiserver.go:425] Skipping API batch/v2alpha1 because it has no resources.
W1011 17:55:47.991467       1 genericapiserver.go:425] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W1011 17:55:48.004228       1 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1011 17:55:48.012402       1 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1011 17:55:48.017313       1 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1011 17:55:48.024214       1 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1011 17:55:48.027034       1 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
I1011 17:55:48.029281       1 trace.go:205] Trace[645338608]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.427) (total time: 601ms):
Trace[645338608]: [601.37129ms] [601.37129ms] END
W1011 17:55:48.033449       1 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W1011 17:55:48.033509       1 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
I1011 17:55:48.042505       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1011 17:55:48.042595       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1011 17:55:48.084475       1 trace.go:205] Trace[1181508496]: "List etcd3" key:/serviceaccounts,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.446) (total time: 638ms):
Trace[1181508496]: [638.04976ms] [638.04976ms] END
I1011 17:55:48.085144       1 trace.go:205] Trace[1980747642]: "List etcd3" key:/controllers,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.458) (total time: 626ms):
Trace[1980747642]: [626.540901ms] [626.540901ms] END
I1011 17:55:48.107796       1 trace.go:205] Trace[1347401832]: "List etcd3" key:/ingressclasses,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.584) (total time: 523ms):
Trace[1347401832]: [523.05078ms] [523.05078ms] END
I1011 17:55:48.145946       1 trace.go:205] Trace[1716606333]: "List etcd3" key:/namespaces,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.395) (total time: 750ms):
Trace[1716606333]: [750.653663ms] [750.653663ms] END
I1011 17:55:48.213479       1 trace.go:205] Trace[698428896]: "List etcd3" key:/limitranges,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.377) (total time: 835ms):
Trace[698428896]: [835.978721ms] [835.978721ms] END
time="2021-10-11T17:55:48.215798885Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/data/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/data/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/data/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/data/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/data/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/data/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/data/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/data/server/tls/client-ca.key --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle --kubeconfig=/data/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/data/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/data/server/tls/service.key --use-service-account-credentials=true"
time="2021-10-11T17:55:48.216432240Z" level=info msg="Waiting for API server to become available"
I1011 17:55:48.231007       1 trace.go:205] Trace[997915952]: "List etcd3" key:/persistentvolumeclaims,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.391) (total time: 839ms):
Trace[997915952]: [839.042299ms] [839.042299ms] END
time="2021-10-11T17:55:48.247706876Z" level=info msg="Node token is available at /data/server/token"
I1011 17:55:48.248603       1 trace.go:205] Trace[2117563623]: "List etcd3" key:/poddisruptionbudgets,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.620) (total time: 628ms):
Trace[2117563623]: [628.34578ms] [628.34578ms] END
time="2021-10-11T17:55:48.249033058Z" level=info msg="To join node to cluster: k3s agent -s https://192.168.2.67:6443 -t ${NODE_TOKEN}"
time="2021-10-11T17:55:48.251081596Z" level=info msg="Wrote kubeconfig /k3s-config/kube-config.yaml"
time="2021-10-11T17:55:48.251747044Z" level=info msg="Run: k3s kubectl"
I1011 17:55:48.282388       1 trace.go:205] Trace[1402538094]: "List etcd3" key:/cronjobs,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.549) (total time: 732ms):
Trace[1402538094]: [732.965788ms] [732.965788ms] END
I1011 17:55:48.299406       1 trace.go:205] Trace[1104010957]: "List etcd3" key:/endpointslices,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.574) (total time: 724ms):
Trace[1104010957]: [724.326703ms] [724.326703ms] END
I1011 17:55:48.309062       1 trace.go:205] Trace[381194779]: "List etcd3" key:/ingress,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.574) (total time: 734ms):
Trace[381194779]: [734.357967ms] [734.357967ms] END
I1011 17:55:48.318565       1 trace.go:205] Trace[997685294]: "List etcd3" key:/certificatesigningrequests,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.551) (total time: 767ms):
Trace[997685294]: [767.489308ms] [767.489308ms] END
I1011 17:55:48.324309       1 trace.go:205] Trace[996031407]: "List etcd3" key:/jobs,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.548) (total time: 776ms):
Trace[996031407]: [776.009864ms] [776.009864ms] END
I1011 17:55:48.329067       1 trace.go:205] Trace[1659174298]: "List etcd3" key:/roles,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.675) (total time: 653ms):
Trace[1659174298]: [653.488627ms] [653.488627ms] END
I1011 17:55:48.344642       1 trace.go:205] Trace[2042143742]: "List etcd3" key:/ingressclasses,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.591) (total time: 752ms):
Trace[2042143742]: [752.608032ms] [752.608032ms] END
I1011 17:55:48.357942       1 trace.go:205] Trace[1408686181]: "List etcd3" key:/leases,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.573) (total time: 784ms):
Trace[1408686181]: [784.303255ms] [784.303255ms] END
I1011 17:55:48.406503       1 trace.go:205] Trace[803456065]: "List etcd3" key:/runtimeclasses,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.600) (total time: 805ms):
Trace[803456065]: [805.633145ms] [805.633145ms] END
I1011 17:55:48.407088       1 trace.go:205] Trace[1532094446]: "List etcd3" key:/horizontalpodautoscalers,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.546) (total time: 860ms):
Trace[1532094446]: [860.689004ms] [860.689004ms] END
I1011 17:55:48.408611       1 trace.go:205] Trace[1669396518]: "List etcd3" key:/certificatesigningrequests,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.552) (total time: 855ms):
Trace[1669396518]: [855.986663ms] [855.986663ms] END
I1011 17:55:48.440468       1 trace.go:205] Trace[419419259]: "List etcd3" key:/rolebindings,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.657) (total time: 782ms):
Trace[419419259]: [782.693834ms] [782.693834ms] END
I1011 17:55:48.440934       1 trace.go:205] Trace[374781989]: "List etcd3" key:/clusterroles,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.684) (total time: 756ms):
Trace[374781989]: [756.057136ms] [756.057136ms] END
I1011 17:55:48.441606       1 trace.go:205] Trace[1654234916]: "List etcd3" key:/rolebindings,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.682) (total time: 759ms):
Trace[1654234916]: [759.179262ms] [759.179262ms] END
I1011 17:55:48.441924       1 trace.go:205] Trace[675602123]: "List etcd3" key:/clusterrolebindings,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.686) (total time: 755ms):
Trace[675602123]: [755.265014ms] [755.265014ms] END
I1011 17:55:48.442274       1 trace.go:205] Trace[18353780]: "List etcd3" key:/csinodes,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.710) (total time: 731ms):
Trace[18353780]: [731.432655ms] [731.432655ms] END
I1011 17:55:48.443768       1 trace.go:205] Trace[238382825]: "List etcd3" key:/csinodes,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.744) (total time: 699ms):
Trace[238382825]: [699.304301ms] [699.304301ms] END
I1011 17:55:48.448640       1 trace.go:205] Trace[1240317858]: "List etcd3" key:/roles,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.651) (total time: 796ms):
Trace[1240317858]: [796.894614ms] [796.894614ms] END
I1011 17:55:48.448956       1 trace.go:205] Trace[1472278885]: "List etcd3" key:/csidrivers,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.753) (total time: 695ms):
Trace[1472278885]: [695.596109ms] [695.596109ms] END
I1011 17:55:48.449248       1 trace.go:205] Trace[1771885004]: "List etcd3" key:/priorityclasses,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.688) (total time: 761ms):
Trace[1771885004]: [761.209342ms] [761.209342ms] END
I1011 17:55:48.458434       1 trace.go:205] Trace[1676703816]: "List etcd3" key:/volumeattachments,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.698) (total time: 760ms):
Trace[1676703816]: [760.205351ms] [760.205351ms] END
I1011 17:55:48.478448       1 trace.go:205] Trace[164159887]: "List etcd3" key:/mutatingwebhookconfigurations,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.822) (total time: 656ms):
Trace[164159887]: [656.350958ms] [656.350958ms] END
I1011 17:55:48.478580       1 trace.go:205] Trace[493186083]: "List etcd3" key:/ingress,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.589) (total time: 889ms):
Trace[493186083]: [889.403069ms] [889.403069ms] END
I1011 17:55:48.514798       1 trace.go:205] Trace[667458553]: "List etcd3" key:/leases,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.554) (total time: 960ms):
Trace[667458553]: [960.071179ms] [960.071179ms] END
I1011 17:55:48.525595       1 trace.go:205] Trace[776690714]: "List etcd3" key:/clusterroles,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.661) (total time: 864ms):
Trace[776690714]: [864.429961ms] [864.429961ms] END
I1011 17:55:48.538297       1 trace.go:205] Trace[863489009]: "List etcd3" key:/storageclasses,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.739) (total time: 799ms):
Trace[863489009]: [799.016114ms] [799.016114ms] END
I1011 17:55:48.556717       1 trace.go:205] Trace[1380906257]: "List etcd3" key:/statefulsets,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.786) (total time: 769ms):
Trace[1380906257]: [769.930887ms] [769.930887ms] END
I1011 17:55:48.576931       1 trace.go:205] Trace[1913168004]: "List etcd3" key:/volumeattachments,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.742) (total time: 834ms):
Trace[1913168004]: [834.69007ms] [834.69007ms] END
I1011 17:55:48.601484       1 trace.go:205] Trace[357813636]: "List etcd3" key:/storageclasses,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (11-Oct-2021 17:55:47.691) (total time: 910ms):
Trace[357813636]: [910.175711ms] [910.175711ms] END
I1011 17:55:50.080586       1 dynamic_cafile_content.go:167] Starting request-header::/data/server/tls/request-header-ca.crt
I1011 17:55:50.080679       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/data/server/tls/client-ca.crt
I1011 17:55:50.082485       1 secure_serving.go:197] Serving securely on 127.0.0.1:6444
I1011 17:55:50.082930       1 dynamic_serving_content.go:130] Starting serving-cert::/data/server/tls/serving-kube-apiserver.crt::/data/server/tls/serving-kube-apiserver.key
I1011 17:55:50.082947       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1011 17:55:50.083177       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/data/server/tls/client-auth-proxy.crt::/data/server/tls/client-auth-proxy.key
I1011 17:55:50.083770       1 available_controller.go:475] Starting AvailableConditionController
I1011 17:55:50.083776       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1011 17:55:50.083787       1 controller.go:83] Starting OpenAPI AggregationController
I1011 17:55:50.084070       1 apf_controller.go:261] Starting API Priority and Fairness config controller
I1011 17:55:50.084094       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I1011 17:55:50.084143       1 autoregister_controller.go:141] Starting autoregister controller
I1011 17:55:50.084150       1 cache.go:32] Waiting for caches to sync for autoregister controller
I1011 17:55:50.090978       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1011 17:55:50.091029       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1011 17:55:50.092417       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1011 17:55:50.092423       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I1011 17:55:50.097903       1 controller.go:86] Starting OpenAPI controller
I1011 17:55:50.097939       1 naming_controller.go:291] Starting NamingConditionController
I1011 17:55:50.097949       1 establishing_controller.go:76] Starting EstablishingController
I1011 17:55:50.098042       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1011 17:55:50.098056       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1011 17:55:50.098081       1 crd_finalizer.go:266] Starting CRDFinalizer
I1011 17:55:50.098112       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1011 17:55:50.098115       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I1011 17:55:50.100566       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/data/server/tls/client-ca.crt
I1011 17:55:50.100755       1 dynamic_cafile_content.go:167] Starting request-header::/data/server/tls/request-header-ca.crt
E1011 17:55:50.183154       1 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.96.0.1"}: failed to allocated ip:10.96.0.1 with error:cannot allocate resources of type serviceipallocations at this time
I1011 17:55:50.184341       1 cache.go:39] Caches are synced for autoregister controller
I1011 17:55:50.184575       1 apf_controller.go:266] Running API Priority and Fairness config worker
I1011 17:55:50.184656       1 cache.go:39] Caches are synced for AvailableConditionController controller
I1011 17:55:50.191584       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1011 17:55:50.192698       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
I1011 17:55:50.198320       1 shared_informer.go:247] Caches are synced for crd-autoregister 
E1011 17:55:50.217018       1 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.2.67, ResourceVersion: 0, AdditionalErrorMsg: 
I1011 17:55:50.217113       1 shared_informer.go:247] Caches are synced for node_authorizer 
I1011 17:55:51.080269       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1011 17:55:51.080360       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1011 17:56:18.014748       1 trace.go:205] Trace[572805147]: "Create" url:/apis/apiregistration.k8s.io/v1/apiservices,user-agent:k3s/v1.20.9+k3s1 (linux/amd64) kubernetes/e365bc7,client:127.0.0.1 (11-Oct-2021 17:55:50.186) (total time: 27828ms):
Trace[572805147]: ---"Object stored in database" 27828ms (17:56:00.014)
Trace[572805147]: [27.828632858s] [27.828632858s] END
I1011 17:56:18.061024       1 trace.go:205] Trace[706281812]: "GuaranteedUpdate etcd3" type:*flowcontrol.FlowSchema (11-Oct-2021 17:55:50.186) (total time: 27874ms):
Trace[706281812]: ---"Transaction committed" 27873ms (17:56:00.060)
Trace[706281812]: [27.87436178s] [27.87436178s] END
I1011 17:56:18.061171       1 trace.go:205] Trace[1939991012]: "Patch" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/probes/status,user-agent:k3s/v1.20.9+k3s1 (linux/amd64) kubernetes/e365bc7,client:127.0.0.1 (11-Oct-2021 17:55:50.185) (total time: 27875ms):
Trace[1939991012]: ---"Object stored in database" 27873ms (17:56:00.061)
Trace[1939991012]: [27.875218488s] [27.875218488s] END
time="2021-10-11T17:56:22.162929591Z" level=info msg="Waiting for API server to become available"
I1011 17:56:24.186370       1 trace.go:205] Trace[780291534]: "Create" url:/apis/apiregistration.k8s.io/v1/apiservices,user-agent:k3s/v1.20.9+k3s1 (linux/amd64) kubernetes/e365bc7,client:127.0.0.1 (11-Oct-2021 17:55:50.185) (total time: 34000ms):
Trace[780291534]: [34.000539251s] [34.000539251s] END
E1011 17:56:24.186387       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}
I1011 17:56:24.186550       1 trace.go:205] Trace[1063228152]: "Create" url:/apis/apiregistration.k8s.io/v1/apiservices,user-agent:k3s/v1.20.9+k3s1 (linux/amd64) kubernetes/e365bc7,client:127.0.0.1 (11-Oct-2021 17:55:50.186) (total time: 34000ms):
Trace[1063228152]: [34.000395711s] [34.000395711s] END
I1011 17:56:24.186597       1 trace.go:205] Trace[1709045540]: "Create" url:/apis/apiregistration.k8s.io/v1/apiservices,user-agent:k3s/v1.20.9+k3s1 (linux/amd64) kubernetes/e365bc7,client:127.0.0.1 (11-Oct-2021 17:55:50.186) (total time: 34000ms):
Trace[1709045540]: [34.000391771s] [34.000391771s] END
E1011 17:56:24.186599       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}
E1011 17:56:24.186752       1 autoregister_controller.go:195] v1.apps failed with : Timeout: request did not complete within requested timeout 34s
E1011 17:56:24.186752       1 autoregister_controller.go:195] v1beta1.authentication.k8s.io failed with : context deadline exceeded
I1011 17:56:24.187666       1 trace.go:205] Trace[1452171808]: "Create" url:/apis/apiregistration.k8s.io/v1/apiservices,user-agent:k3s/v1.20.9+k3s1 (linux/amd64) kubernetes/e365bc7,client:127.0.0.1 (11-Oct-2021 17:55:50.185) (total time: 34001ms):
Trace[1452171808]: [34.001815443s] [34.001815443s] END
E1011 17:56:24.188326       1 autoregister_controller.go:195] v1beta1.admissionregistration.k8s.io failed with : context deadline exceeded
E1011 17:56:24.189117       1 autoregister_controller.go:195] v1.admissionregistration.k8s.io failed with : Timeout: request did not complete within requested timeout 34s
I1011 17:56:24.253234       1 trace.go:205] Trace[682273448]: "Create" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas,user-agent:k3s/v1.20.9+k3s1 (linux/amd64) kubernetes/e365bc7,client:127.0.0.1 (11-Oct-2021 17:55:50.250) (total time: 34002ms):
Trace[682273448]: [34.002599069s] [34.002599069s] END
E1011 17:56:24.253891       1 storage_flowcontrol.go:119] failed ensuring suggested settings, will retry later: cannot create suggested FlowSchema workload-leader-election due to Timeout: request did not complete within requested timeout 34s
FabianKramm commented 3 years ago

@anasalloush thanks for the info! Seems like some requests are exceptionally slow in your setup. Does increasing the probe timeouts help?
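
For example (just a sketch to experiment with — the timeout values are arbitrary, and the StatefulSet/namespace names are taken from this thread), the syncer probes can be bumped with a strategic merge patch, which will roll the pod with the new settings:

kubectl patch statefulset vcluster-1 -n host-namespace-1 --type strategic \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"syncer","livenessProbe":{"timeoutSeconds":10},"readinessProbe":{"timeoutSeconds":10}}]}}}}'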

sam-sre commented 3 years ago

@FabianKramm Thanks for following up. I increased the probe timeouts and thresholds in the StatefulSet as shown below, but the Pod is still in CrashLoopBackOff.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    meta.helm.sh/release-name: vcluster-1
    meta.helm.sh/release-namespace: host-namespace-1
  creationTimestamp: "2021-10-12T10:26:59Z"
  generation: 1
  labels:
    app: vcluster
    app.kubernetes.io/managed-by: Helm
    chart: vcluster-0.4.1
    heritage: Helm
    release: vcluster-1
  name: vcluster-1
  namespace: host-namespace-1
  resourceVersion: "2971"
  uid: ac9f8639-6fe4-4ad4-bfc9-33e1abd588ba
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: vcluster
      release: vcluster-1
  serviceName: vcluster-1-headless
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: vcluster
        release: vcluster-1
    spec:
      affinity: {}
      containers:
      - args:
        - server
        - --write-kubeconfig=/k3s-config/kube-config.yaml
        - --data-dir=/data
        - --disable=traefik,servicelb,metrics-server,local-storage
        - --disable-network-policy
        - --disable-agent
        - --disable-scheduler
        - --disable-cloud-controller
        - --flannel-backend=none
        - --kube-controller-manager-arg=controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle
        - --service-cidr=10.96.0.0/12
        command:
        - /bin/k3s
        image: rancher/k3s:v1.20.9-k3s1
        imagePullPolicy: IfNotPresent
        name: vcluster
        resources:
          limits:
            memory: 2Gi
          requests:
            cpu: 200m
            memory: 256Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: data
      - args:
        - --service-name=vcluster-1
        - --suffix=vcluster-1
        - --owning-statefulset=vcluster-1
        - --out-kube-config-secret=vc-vcluster-1
        image: loftsh/vcluster:0.4.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 100
          httpGet:
            path: /healthz
            port: 8443
            scheme: HTTPS
          initialDelaySeconds: 200
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: syncer
        readinessProbe:
          failureThreshold: 100
          httpGet:
            path: /readyz
            port: 8443
            scheme: HTTPS
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          limits:
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 128Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: vc-vcluster-1
      serviceAccountName: vc-vcluster-1
      terminationGracePeriodSeconds: 10
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      volumeMode: Filesystem

Here are the logs from the syncer container.

k logs vcluster-1-0 -c syncer -n host-namespace-1
I1012 10:35:28.699793       1 main.go:234] Using physical cluster at https://10.96.0.1:443
I1012 10:35:28.862074       1 main.go:265] Can connect to virtual cluster with version v1.20.9+k3s1
**F1012 10:35:28.878488       1 main.go:143] register controllers: register ingresses indices: no matches for kind "Ingress" in version "networking.k8s.io/v1"**

The failure here doesn't make sense, since K8s v1.20 supports the Ingress API version "networking.k8s.io/v1".
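
As a quick sanity check (assuming the kubeconfig currently points at the virtual cluster), it is worth listing which API versions that cluster actually serves for the networking group:

kubectl api-versions | grep networking.k8s.io
kubectl api-resources --api-group=networking.k8s.io

If networking.k8s.io/v1 does not appear for Ingress, the syncer's index registration would fail with exactly this error.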

BR Anas

wizpresso-steve-cy-fan commented 1 year ago

I have another problem instead. I did set a default SC in the host cluster and created a new PVC in the virtual cluster. The PV for that PVC is created in the host and bound there, but the binding is not synchronized back to the virtual cluster. I'm using Ceph in the host cluster, so I'm not sure if that's the problem. My workaround right now is to create a new SC inside the virtual cluster with a config identical to the host SC, and that turned out to work. I'm not sure if I should just create a default empty provisioner instead of doing this.
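
For reference, the workaround boils down to something like the following inside the virtual cluster (a sketch — the class name, provisioner, and parameters below are placeholders and must be copied from the host cluster's Ceph StorageClass):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-block                                        # placeholder, match the host SC name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # make it the default inside the vcluster
provisioner: rbd.csi.ceph.com                             # placeholder, copy from the host SC
parameters: {}                                            # copy from the host SC
reclaimPolicy: Delete
volumeBindingMode: Immediate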

dvaldivia commented 1 year ago

What @wizpresso-steve-cy-fan points out works. It seems like the StorageClasses from the host cluster are not getting synced; if syncing is not expected and we are supposed to re-create the host SC inside the vcluster, perhaps this should be documented.

matskiv commented 1 year ago

@wizpresso-steve-cy-fan Sorry for the late reply, I didn't notice these comments earlier. In your case, the PVC status was not synced to the virtual cluster correctly, or did you mean something else?

By default, StorageClasses are not synced between the virtual and host cluster, and a PVC can only use an SC defined in the host cluster. If the virtual cluster defines a default SC, it takes precedence over the host's default, but if it is not synced (again, the default behavior) the host cluster must have an SC with the same name. There are two options to enable SC sync: virtual to host via Values.sync.storageclasses.enabled=true, or host to virtual via Values.sync.hoststorageclasses.enabled=true. Relevant docs page: https://www.vcluster.com/docs/architecture/synced-resources. @dvaldivia yes, I agree, this should be documented.
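
For example, syncing StorageClasses from the host into the virtual cluster would look roughly like this in values.yaml (option names taken from the values above; check the linked docs page for the exact syntax of your chart version):

sync:
  hoststorageclasses:
    enabled: true

followed by creating the vcluster with that values file, or upgrading an existing release with the same file.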