rancher / k3os

Purpose-built OS for Kubernetes, fully managed by Kubernetes.
https://k3os.io
Apache License 2.0

Unable to install k3os within an airgap environment #431

Closed mak3r closed 4 years ago

mak3r commented 4 years ago

Version (k3OS / kernel)

$ k3os --version
k3os version v0.9.1
$ uname --kernel-release --kernel-version
4.19.108-v8+ #1298  SMP PREEMPT Fri Mar 6 18:15:51 GMT 2020

Architecture

$ uname --machine
aarch64

Describe the bug
I am trying to set up a k3os device with no external networking available (air-gap) beyond what the onboard ethernet adapter provides.

Upon reboot, after a little time, it appears that k3s has started:

kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
mak3r-k3os   Ready    master   4h50m   v1.17.2+k3s1

But containers remain in the ContainerCreating state indefinitely.

To Reproduce

  1. Lay down the k3os rootfs

    • sudo /bin/sh -c ' curl -sfL https://github.com/rancher/k3os/releases/download/v0.9.1/k3os-rootfs-arm.tar.gz | tar zxvf - --strip-components=1 -C /'
  2. Add k3s air-gap images

    • sudo mkdir -p /k3os/data/var/lib/rancher/k3s/server/images
    • sudo /bin/sh -c 'curl -sfL https://github.com/rancher/k3s/releases/download/v1.17.2%2Bk3s1/k3s-airgap-images-arm64.tar | tar -xvf - -C /k3os/data/var/lib/rancher/k3s/server/images'
  3. Create config.yaml in /k3os/system/config.yaml

    ssh_authorized_keys:
    - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHVZtKeG7C1CmMYh07Rgm6JkbIZ9...
    write_files:
    - encoding: ""
      content: |-
        #!/bin/bash
        echo hi
      owner: root
      path: /etc/rc.local
      permissions: '0755'
    - content: |-
        auto eth0
        iface eth0 inet static
          address 10.0.0.1
          netmask 255.255.255.0
          network 10.0.0.0
          broadcast 10.0.0.255
          gateway 10.0.0.1
          dns-nameservers 10.0.0.1 
    
        auto wlan0
        iface wlan0 inet static
          address 192.168.8.33
          netmask 255.255.255.0
          network 192.168.8.0
          broadcast 192.168.8.255
          gateway 192.168.86.1
          dns-nameservers 192.168.86.1 8.8.8.8 1.1.1.1
      owner: root
      path: /etc/network/interfaces
      permissions: '0644'
    - content: |-
        connmanctl enable wifi
        SSID="MY-SSID"
        connmanctl scan wifi
        svc=$(connmanctl services | grep $SSID | awk '{print $2}')
        connmanctl agent on
        connmanctl connect "$svc"
      owner: root
      path: /usr/local/bin/start-wifi.sh
      permissions: '0644'
    hostname: mak3r-k3os
    run_cmd:
    - "ifdown eth0 && ifup eth0"
    boot_cmd:
    - "ln -sf /etc/init.d/swclock /etc/runlevels/boot/swclock"
    init_cmd:
    - "echo not starting:/usr/local/bin/start-wifi.sh"
    
    k3os:
      dns_nameservers:
      - 10.0.0.1
      wifi:
      - name: none
        password: abc123
      password: rancher
      k3s_args:
        - server
        - "--advertise-address=10.0.0.1"
      environment:
        INSTALL_K3S_SKIP_DOWNLOAD: true
  4. Finalize the overlay

    sync
    reboot -f
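Given the arm/arm64 pitfall discussed in the replies, a quick pre-flight check before step 1 can save a reinstall. This is a hedged sketch: the `arch_suffix` helper and the assumption that release filenames carry an arm/arm64/amd64 suffix (as in the URLs above) are mine, not part of k3os.

```shell
# Sketch: map `uname -m` to the artifact suffix used in the k3os/k3s release
# filenames (arm vs arm64 is the easy one to get wrong on a 64-bit Pi).
arch_suffix() {
  case "$1" in
    aarch64) echo arm64 ;;
    armv7l)  echo arm ;;
    x86_64)  echo amd64 ;;
    *)       echo unknown ;;
  esac
}

ROOTFS=k3os-rootfs-arm.tar.gz   # the file downloaded in step 1
case "$ROOTFS" in
  *-"$(arch_suffix "$(uname -m)")".tar.gz) echo "rootfs matches $(uname -m)" ;;
  *) echo "WARNING: $ROOTFS does not match $(uname -m)" ;;
esac
```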

Expected behavior
k3s is running and new resources can be deployed without connecting to the network.

Actual behavior
The pre-defined k3s containers get stuck in the ContainerCreating state:

$ kubectl get pods -A

NAMESPACE     NAME                                         READY   STATUS              RESTARTS   AGE
kube-system   coredns-d798c9dd-2jxd6                       0/1     ContainerCreating   0          4h44m
kube-system   local-path-provisioner-58fb86bdfd-kdv7n      0/1     ContainerCreating   0          4h44m
kube-system   metrics-server-6d684c7b5-snlv9               0/1     ContainerCreating   0          4h44m
k3os-system   system-upgrade-controller-84b4b86fd7-rbl9j   0/1     ContainerCreating   0          4h44m
kube-system   helm-install-traefik-m7lrd                   0/1     ContainerCreating   0          4h44m

Example of coredns pod state:

Name:           coredns-d798c9dd-2jxd6
Namespace:      kube-system
Priority:       0
Node:           mak3r-k3os/192.168.8.228
Start Time:     Fri, 17 Jan 2020 08:10:20 +0000
Labels:         k8s-app=kube-dns
                pod-template-hash=d798c9dd
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/coredns-d798c9dd
Containers:
  coredns:
    Container ID:  
    Image:         coredns/coredns:1.6.3
    Image ID:      
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=10s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-l878j (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-l878j:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-l878j
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                    From                 Message
  ----     ------                  ----                   ----                 -------
  Warning  FailedCreatePodSandBox  92s (x969 over 4h44m)  kubelet, mak3r-k3os  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to copy: httpReaderSeeker: failed open: failed to do request: Get https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/e1/e11a8cbeda8688fdc3a68dc246d6c945aa272b29f8dd94d0ea370d30c2674042/data?verify=1586555507-kcN05vYgCkmlX7YtaCwtE7dLLdQ%3D: x509: certificate has expired or is not yet valid

Additional context

The pod event above suggests that the pause container is not on the device. When I verify the content of the images directory, the repositories.json and the manifest indicate inclusion of the pause container, and it appears correctly in the images directory.
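This kind of verification can be scripted against the staged metadata files mentioned above. A sketch with a hypothetical `has_image` helper (not part of k3s); it greps whichever of the manifest/repositories files is present:

```shell
# Hypothetical helper: grep the staged image metadata for an image name.
# Checks manifest.json plus either repositories spelling; 2>/dev/null hides
# whichever file happens to be absent.
has_image() {  # has_image <images-dir> <image-name>
  grep -q "$2" "$1"/manifest.json "$1"/repositories "$1"/repositories.json 2>/dev/null
}

# e.g. has_image /k3os/data/var/lib/rancher/k3s/server/images rancher/pause
```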

crictl, however, sees no images available:

$ sudo crictl images
IMAGE               TAG                 IMAGE ID            SIZE

I decided to try installing the pause container manually, so I exported it from a running k3s installation:

ctr image export pause.tar docker.io/rancher/pause:3.1

I then opened up ssh, copied the tar file over, and tried importing it manually:

sudo ctr -n=k8s.io images import pause.tar

That approach fails due to a manifest issue: only amd64 is listed, so the image cannot be loaded - https://github.com/rancher/k3s/issues/1094

$ sudo crictl inspecti rancher/pause:3.1 | grep architecture
      "architecture": "amd64",
dweomer commented 4 years ago

@mak3r use the arm64 rootfs, not the arm rootfs as you have listed. This is why k3s is attempting to go out to the network to pull the needed images: the k3s runtime is arm, so the bundled arm64 air-gap images do not match it.


P.S. That error message about a bad certificate is because your RPi4 doesn't have a hardware clock, and pool.ntp.org likely doesn't resolve to a time server on the network it is local to. RPis on an air-gapped network need a local time server to sync from on boot; otherwise they come up with the time that was recorded the last time they booted. This means that, with a cluster of RPis, rebooting a node can cause it to fail to connect to the other nodes, because its certs and its peers' certs will likely be invalid (they are in the future as far as the rebooted node knows).
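Following up on the clock caveat: the k3os config supports an ntp_servers list under the k3os key, so an air-gapped box can be pointed at whatever local time source exists. A sketch only; the 10.0.0.1 address is a stand-in reusing the gateway from the config above:

```yaml
k3os:
  ntp_servers:
  - 10.0.0.1   # hypothetical LAN time source; pool.ntp.org is unreachable air-gapped
```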

mak3r commented 4 years ago

Yes, I see I made a mistake and was installing the incorrect version of k3os. Ack!

Since you identified this, I wiped everything and started over; however, I am still getting the error about attempts to download the pause container.

My process is to move the k3os archive, the air-gap images, and the config.yaml over to the host and then run a script like this:

#!/bin/sh

# lay down the k3os rootfs (arm64 this time)
tar zxvf k3os-rootfs-arm64.tar.gz --strip-components=1 -C /
# stage the k3s air-gap images in both the server and agent image directories
mkdir -p /k3os/data/var/lib/rancher/k3s/server/images
tar -xvf k3s-airgap-images-arm64.tar -C /k3os/data/var/lib/rancher/k3s/server/images
mkdir -p /k3os/data/var/lib/rancher/k3s/agent/images
tar -xvf k3s-airgap-images-arm64.tar -C /k3os/data/var/lib/rancher/k3s/agent/images
# install the config and reboot into the new system
cp config.yaml /k3os/system/config.yaml
sync
reboot -f

I think this is trivial but worth mentioning just in case: you can see that I extracted the air-gap images to both the k3s server and agent locations. I believe they should only be needed in the agent location, but at some point I had an issue that was resolved by installing them into the server location. That was prior to you identifying that I had the wrong architecture for k3os.

So, on reboot, k3s is started and I can run kubectl commands, but the default k3s containers never get past the ContainerCreating state, as before.

Since I made the mistake with the k3os architecture, I decided to verify the k3s binary's file type and architecture. Because the file command is not in k3os, I copied the binary over to my workstation:

$ file k3s
k3s: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, stripped
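As an aside, the same check can be done on the node itself without file by dumping the ELF header; this is a generic sketch, not a k3os tool. Offset 4 is the ELF class (01 = 32-bit, 02 = 64-bit) and offsets 18-19 are the machine field, little-endian (b7 00 = aarch64, 28 00 = 32-bit ARM):

```shell
# Print the first 20 bytes of an ELF binary as hex: enough to read the
# class byte (offset 4) and the e_machine field (offsets 18-19).
elf_head() {
  head -c 20 "$1" | od -An -tx1
}

elf_head /bin/sh   # on the node, point it at the k3s binary instead
```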

Regarding the time server: this is definitely something to keep in mind for those exploring k3os on Raspberry Pi with multiple nodes. In my case it will always be a single node, which needs additional configuration to get the right network setup after the first boot. I am working on building a turnkey container that a local user will access from a web browser to finalize the system configuration - including possibly an external network with a time server.

Below are the k3s-service.log, containerd.log and config.yaml

k3s-service.log

time="2020-01-17T08:09:15Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/81d6cc7e694228aa5c7660807dc235d0023eba04b407935a202018e9b905f655"
time="2020-01-17T08:09:25.703859106Z" level=info msg="Starting k3s v1.17.2+k3s1 (cdab19b0)"
time="2020-01-17T08:09:25.741664625Z" level=info msg="Kine listening on unix://kine.sock"
time="2020-01-17T08:09:26.941568365Z" level=info msg="Active TLS secret  (ver=) (count 7): map[listener.cattle.io/cn-10.0.0.1:10.0.0.1 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:ffe29f1b021dff66934669f178d522ced65ed564f8c65e3d20bab9e3f4217d43]"
time="2020-01-17T08:09:27.038190031Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments.
I0117 08:09:27.041042    1798 server.go:622] external host was not specified, using 10.0.0.1
I0117 08:09:27.042562    1798 server.go:163] Version: v1.17.2+k3s1
I0117 08:09:32.766236    1798 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0117 08:09:32.766405    1798 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0117 08:09:32.773098    1798 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0117 08:09:32.773246    1798 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0117 08:09:32.920898    1798 master.go:267] Using reconciler: lease
I0117 08:09:33.122193    1798 rest.go:115] the default service ipfamily for this cluster is: IPv4
W0117 08:09:35.323746    1798 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
W0117 08:09:35.392227    1798 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0117 08:09:35.461958    1798 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0117 08:09:35.598651    1798 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0117 08:09:35.625473    1798 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0117 08:09:35.728155    1798 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0117 08:09:35.879776    1798 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0117 08:09:35.879937    1798 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0117 08:09:35.954592    1798 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0117 08:09:35.954759    1798 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0117 08:09:49.632285    1798 dynamic_cafile_content.go:166] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I0117 08:09:49.632333    1798 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0117 08:09:49.634203    1798 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
I0117 08:09:49.637116    1798 tlsconfig.go:219] Starting DynamicServingCertificateController
I0117 08:09:49.637030    1798 secure_serving.go:178] Serving securely on 127.0.0.1:6444
2020-01-17 08:09:49.647386 I | http: TLS handshake error from 127.0.0.1:54700: EOF
I0117 08:09:49.650004    1798 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0117 08:09:49.650180    1798 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0117 08:09:49.834488    1798 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0117 08:09:49.834629    1798 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I0117 08:09:49.834832    1798 available_controller.go:386] Starting AvailableConditionController
I0117 08:09:49.834887    1798 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0117 08:09:49.834945    1798 cache.go:39] Caches are synced for AvailableConditionController controller
I0117 08:09:49.835913    1798 controller.go:81] Starting OpenAPI AggregationController
I0117 08:09:49.836088    1798 autoregister_controller.go:140] Starting autoregister controller
I0117 08:09:49.836140    1798 cache.go:32] Waiting for caches to sync for autoregister controller
I0117 08:09:49.836192    1798 cache.go:39] Caches are synced for autoregister controller
I0117 08:09:49.837897    1798 crd_finalizer.go:263] Starting CRDFinalizer
I0117 08:09:49.840122    1798 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0117 08:09:49.840382    1798 dynamic_cafile_content.go:166] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I0117 08:09:49.842015    1798 crdregistration_controller.go:111] Starting crd-autoregister controller
I0117 08:09:49.842161    1798 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I0117 08:09:49.846478    1798 controller.go:85] Starting OpenAPI controller
I0117 08:09:49.846831    1798 customresource_discovery_controller.go:208] Starting DiscoveryController
I0117 08:09:49.846957    1798 naming_controller.go:288] Starting NamingConditionController
I0117 08:09:49.847095    1798 establishing_controller.go:73] Starting EstablishingController
I0117 08:09:49.847215    1798 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0117 08:09:49.847333    1798 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0117 08:09:49.861839    1798 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0117 08:09:49.956296    1798 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
I0117 08:09:49.956612    1798 shared_informer.go:204] Caches are synced for crd-autoregister 
E0117 08:09:50.325976    1798 controller.go:150] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.43.0.1": cannot allocate resources of type serviceipallocations at this time
E0117 08:09:50.351171    1798 controller.go:155] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/10.0.0.1, ResourceVersion: 0, AdditionalErrorMsg: 
I0117 08:09:50.632142    1798 controller.go:107] OpenAPI AggregationController: Processing item 
I0117 08:09:50.632423    1798 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0117 08:09:50.632550    1798 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0117 08:09:50.657431    1798 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I0117 08:09:50.674907    1798 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I0117 08:09:50.675067    1798 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I0117 08:09:52.381538    1798 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0117 08:09:52.570978    1798 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0117 08:09:52.878259    1798 lease.go:224] Resetting endpoints for master service "kubernetes" to [10.0.0.1]
I0117 08:09:52.881940    1798 controller.go:606] quota admission added evaluator for: endpoints
time="2020-01-17T08:09:53.025620664Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"
time="2020-01-17T08:09:53.026288108Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2020-01-17T08:09:53.070384923Z" level=info msg="Creating CRD addons.k3s.cattle.io"
I0117 08:09:53.075057    1798 controllermanager.go:161] Version: v1.17.2+k3s1
I0117 08:09:53.077281    1798 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
time="2020-01-17T08:09:53.077545293Z" level=info msg="Waiting for cloudcontroller rbac role to be created"
time="2020-01-17T08:09:53.115879275Z" level=info msg="Creating CRD helmcharts.helm.cattle.io"
W0117 08:09:53.132821    1798 authorization.go:47] Authorization is disabled
W0117 08:09:53.134785    1798 authentication.go:92] Authentication is disabled
I0117 08:09:53.134927    1798 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
time="2020-01-17T08:09:53.243830608Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
time="2020-01-17T08:09:53.768527941Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
time="2020-01-17T08:09:53.947841886Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
I0117 08:09:53.955556    1798 trace.go:116] Trace[1781827927]: "GuaranteedUpdate etcd3" type:*apiextensions.CustomResourceDefinition (started: 2020-01-17 08:09:53.260518553 +0000 UTC m=+28.846484003) (total time: 694.882944ms):
Trace[1781827927]: [694.481759ms] [693.22163ms] Transaction committed
I0117 08:09:53.956274    1798 trace.go:116] Trace[14715946]: "Update" url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/helmcharts.helm.cattle.io/status,user-agent:k3s/v1.17.2+k3s1 (linux/arm64) kubernetes/cdab19b,client:127.0.0.1 (started: 2020-01-17 08:09:53.259609071 +0000 UTC m=+28.845574558) (total time: 696.502556ms):
Trace[14715946]: [696.1995ms] [695.477833ms] Object stored in database
time="2020-01-17T08:09:54.105598182Z" level=info msg="Waiting for cloudcontroller rbac role to be created"
time="2020-01-17T08:09:54.460791941Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
time="2020-01-17T08:09:54.521716978Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz"
time="2020-01-17T08:09:54.523385644Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
time="2020-01-17T08:09:54.523864533Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
time="2020-01-17T08:09:54.524369626Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
time="2020-01-17T08:09:54.524856459Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
time="2020-01-17T08:09:54.525271274Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
time="2020-01-17T08:09:54.525694904Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
time="2020-01-17T08:09:54.526416978Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
time="2020-01-17T08:09:54.527147830Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
time="2020-01-17T08:09:54.527736459Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
time="2020-01-17T08:09:54.528172163Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
time="2020-01-17T08:09:54.528588848Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
time="2020-01-17T08:09:54.528998756Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
time="2020-01-17T08:09:54.732643644Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2020-01-17T08:09:54.732990144Z" level=info msg="To join node to cluster: k3s agent -s https://10.0.0.1:6443 -t ${NODE_TOKEN}"
time="2020-01-17T08:09:54.734248293Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
time="2020-01-17T08:09:54.739992052Z" level=info msg="Waiting for master node  startup: resource name may not be empty"
2020-01-17 08:09:55.003763 I | http: TLS handshake error from 127.0.0.1:35050: remote error: tls: bad certificate
time="2020-01-17T08:09:55.111317552Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
time="2020-01-17T08:09:55.112332996Z" level=info msg="Starting batch/v1, Kind=Job controller"
time="2020-01-17T08:09:55.186956329Z" level=info msg="Waiting for cloudcontroller rbac role to be created"
time="2020-01-17T08:09:55.233316089Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
time="2020-01-17T08:09:55.245144607Z" level=info msg="Run: k3s kubectl"
time="2020-01-17T08:09:55.245988329Z" level=info msg="k3s is up and running"
time="2020-01-17T08:09:55.280358200Z" level=info msg="Starting /v1, Kind=Pod controller"
time="2020-01-17T08:09:55.281186163Z" level=info msg="Starting /v1, Kind=Endpoints controller"
time="2020-01-17T08:09:55.282649366Z" level=info msg="Starting /v1, Kind=Secret controller"
time="2020-01-17T08:09:55.284055681Z" level=info msg="Starting /v1, Kind=Node controller"
time="2020-01-17T08:09:55.285186663Z" level=info msg="Starting /v1, Kind=Service controller"
I0117 08:09:55.377282    1798 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
time="2020-01-17T08:09:55.418021625Z" level=info msg="Active TLS secret k3s-serving (ver=157) (count 7): map[listener.cattle.io/cn-10.0.0.1:10.0.0.1 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:ffe29f1b021dff66934669f178d522ced65ed564f8c65e3d20bab9e3f4217d43]"
I0117 08:09:55.464579    1798 plugins.go:100] No cloud provider specified.
I0117 08:09:55.535168    1798 shared_informer.go:197] Waiting for caches to sync for tokens
I0117 08:09:55.605982    1798 controller.go:606] quota admission added evaluator for: serviceaccounts
I0117 08:09:55.639929    1798 shared_informer.go:204] Caches are synced for tokens 
I0117 08:09:55.655763    1798 controllermanager.go:533] Started "daemonset"
I0117 08:09:55.656699    1798 daemon_controller.go:255] Starting daemon sets controller
I0117 08:09:55.656788    1798 shared_informer.go:197] Waiting for caches to sync for daemon sets
2020-01-17 08:09:55.676695 I | http: TLS handshake error from 127.0.0.1:35058: remote error: tls: bad certificate
time="2020-01-17T08:09:55.750487662Z" level=info msg="Waiting for master node  startup: resource name may not be empty"
I0117 08:09:55.766546    1798 controllermanager.go:533] Started "ttl"
W0117 08:09:55.766735    1798 controllermanager.go:525] Skipping "ttl-after-finished"
I0117 08:09:55.767452    1798 ttl_controller.go:116] Starting TTL controller
I0117 08:09:55.767530    1798 shared_informer.go:197] Waiting for caches to sync for TTL
2020-01-17 08:09:55.783423 I | http: TLS handshake error from 127.0.0.1:35064: remote error: tls: bad certificate
I0117 08:09:56.080273    1798 controller.go:606] quota admission added evaluator for: deployments.apps
I0117 08:09:56.102271    1798 controllermanager.go:533] Started "namespace"
I0117 08:09:56.102402    1798 namespace_controller.go:200] Starting namespace controller
I0117 08:09:56.102742    1798 shared_informer.go:197] Waiting for caches to sync for namespace
time="2020-01-17T08:09:56.479458940Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2020-01-17T08:09:56.479854180Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2020-01-17T08:09:56.503171051Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""
I0117 08:09:56.770571    1798 trace.go:116] Trace[1089225367]: "GuaranteedUpdate etcd3" type:*core.ServiceAccount (started: 2020-01-17 08:09:56.026236977 +0000 UTC m=+31.612202409) (total time: 744.087055ms):
Trace[1089225367]: [743.780647ms] [742.801944ms] Transaction committed
I0117 08:09:56.771384    1798 trace.go:116] Trace[1791600201]: "Update" url:/api/v1/namespaces/kube-system/serviceaccounts/namespace-controller,user-agent:k3s/v1.17.2+k3s1 (linux/arm64) kubernetes/cdab19b/tokens-controller,client:127.0.0.1 (started: 2020-01-17 08:09:56.025607662 +0000 UTC m=+31.611573149) (total time: 745.602926ms):
Trace[1791600201]: [745.254778ms] [744.941796ms] Object stored in database
I0117 08:09:56.778150    1798 trace.go:116] Trace[1177637014]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/generic-garbage-collector,user-agent:k3s/v1.17.2+k3s1 (linux/arm64) kubernetes/cdab19b/kube-controller-manager,client:127.0.0.1 (started: 2020-01-17 08:09:56.110355292 +0000 UTC m=+31.696320723) (total time: 667.357537ms):
Trace[1177637014]: [667.357537ms] [667.303185ms] END
time="2020-01-17T08:09:56.807283180Z" level=info msg="Waiting for master node mak3r-k3os startup: nodes \"mak3r-k3os\" not found"
I0117 08:09:56.814762    1798 trace.go:116] Trace[1777549981]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cloud-controller-manager,user-agent:k3s/v1.17.2+k3s1 (linux/arm64) kubernetes/cdab19b,client:127.0.0.1 (started: 2020-01-17 08:09:56.207043736 +0000 UTC m=+31.793009186) (total time: 607.534796ms):
Trace[1777549981]: [606.977241ms] [606.936333ms] About to write a response
time="2020-01-17T08:09:56.822937051Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m --secure-port=0"
Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
I0117 08:09:56.836946    1798 trace.go:116] Trace[1714682216]: "Create" url:/apis/apps/v1/namespaces/kube-system/deployments,user-agent:k3s/v1.17.2+k3s1 (linux/arm64) kubernetes/cdab19b,client:127.0.0.1 (started: 2020-01-17 08:09:56.038488088 +0000 UTC m=+31.624453575) (total time: 798.236426ms):
Trace[1714682216]: [797.42187ms] [757.452296ms] Object stored in database
I0117 08:09:56.940302    1798 controllermanager.go:120] Version: v1.17.2+k3s1
W0117 08:09:56.940502    1798 controllermanager.go:132] detected a cluster without a ClusterID.  A ClusterID will be required in the future.  Please tag your cluster to avoid any future issues
I0117 08:09:56.984033    1798 node_controller.go:110] Sending events to api server.
I0117 08:09:56.984481    1798 controllermanager.go:247] Started "cloud-node"
I0117 08:09:57.009010    1798 node_lifecycle_controller.go:77] Sending events to api server
I0117 08:09:57.009435    1798 controllermanager.go:247] Started "cloud-node-lifecycle"
E0117 08:09:57.039740    1798 core.go:90] Failed to start service controller: the cloud provider does not support external load balancers
W0117 08:09:57.053555    1798 controllermanager.go:244] Skipping "service"
W0117 08:09:57.053730    1798 core.go:108] configure-cloud-routes is set, but cloud provider does not support routes. Will not configure cloud provider routes.
W0117 08:09:57.053786    1798 controllermanager.go:244] Skipping "route"
time="2020-01-17T08:09:57.505287958Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""
I0117 08:09:57.744423    1798 garbagecollector.go:129] Starting garbage collector controller
I0117 08:09:57.755890    1798 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0117 08:09:57.746424    1798 controllermanager.go:533] Started "garbagecollector"
I0117 08:09:57.770344    1798 graph_builder.go:282] GraphBuilder running
time="2020-01-17T08:09:57.836344883Z" level=info msg="Waiting for master node mak3r-k3os startup: nodes \"mak3r-k3os\" not found"
I0117 08:09:57.895172    1798 controllermanager.go:533] Started "csrsigning"
I0117 08:09:57.896016    1798 certificate_controller.go:118] Starting certificate controller "csrsigning"
I0117 08:09:57.896218    1798 shared_informer.go:197] Waiting for caches to sync for certificate-csrsigning
I0117 08:09:58.002218    1798 node_ipam_controller.go:94] Sending events to api server.
time="2020-01-17T08:09:58.507496809Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""
time="2020-01-17T08:09:58.857916123Z" level=info msg="Waiting for master node mak3r-k3os startup: nodes \"mak3r-k3os\" not found"
I0117 08:09:59.098478    1798 controller.go:606] quota admission added evaluator for: helmcharts.helm.cattle.io
time="2020-01-17T08:09:59.131138734Z" level=error msg="failed to process config: failed to process /var/lib/rancher/k3s/server/manifests/system-upgrade-plans/k3os-latest.yaml: the server could not find the requested resource"
I0117 08:09:59.225040    1798 controller.go:606] quota admission added evaluator for: jobs.batch
time="2020-01-17T08:09:59.554307179Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/1cdb7e2bd5e25744fb1c898242b764f88bd9f017726d6f17a4060c22bdc4a23e.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.571500956Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/55c9b4792dc0b65671d9e4a935118dfed75c338ddaca3155a8daed9a23cce723.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.585480882Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/6cf7c80fe4444767f63116e6855bf8c90bddde8ef63d3a2dc9b86c74989a4eb5.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.608143716Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/8ae59e1bfb26013a6d1576053fc7458c00285b7f64ab5692ee663a0dc949124b.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.621686253Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/9be4f056f04b712392ba99b1ed941caf2bd6e8bd9da5a007f7270a1c38bf127d.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.637418160Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/caea073758499623969565911911ecc0297f1f38a9257bafd8bc08170aeeaec9.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.651605308Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/f9499facb1e8c3c907d82f50441b8da45115c4314a9c175c8b3797e489c1cf1e.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.667988827Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/manifest.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.681686993Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/repositories: archive/tar: invalid tar header"
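
> Aside: the `Unable to import … invalid tar header` errors above are consistent with step 2 of the repro, where the air-gap images tarball was extracted (`tar -xvf`) into the images directory. k3s treats each file found under `…/server/images` as a tar archive to import, so the loose `.json`, `manifest.json`, and `repositories` files each fail with `invalid tar header`, and the images never load — which would leave pods stuck in `ContainerCreating`. A likely fix, assuming the standard k3s air-gap flow, is to place the tarball there whole, without extracting it:
>
> ```shell
> # Hypothetical corrected version of repro step 2: copy the air-gap images
> # tarball into the images directory *intact*; k3s imports it on startup.
> sudo mkdir -p /k3os/data/var/lib/rancher/k3s/server/images
> sudo curl -sfL \
>   -o /k3os/data/var/lib/rancher/k3s/server/images/k3s-airgap-images-arm64.tar \
>   "https://github.com/rancher/k3s/releases/download/v1.17.2%2Bk3s1/k3s-airgap-images-arm64.tar"
> ```
>
> (Untested in this environment; paths mirror the repro steps above.)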
time="2020-01-17T08:09:59.735292141Z" level=info msg="Connecting to proxy" url="wss://10.0.0.1:6443/v1-k3s/connect"
time="2020-01-17T08:09:59.758118678Z" level=info msg="Handling backend connection request [mak3r-k3os]"
time="2020-01-17T08:09:59.760854049Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2020-01-17T08:09:59.764819938Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/81d6cc7e694228aa5c7660807dc235d0023eba04b407935a202018e9b905f655/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=/run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=mak3r-k3os --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd --node-labels=k3os.io/mode=local,k3os.io/version=v0.9.1 --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/systemd --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
time="2020-01-17T08:09:59.765587456Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=mak3r-k3os --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
W0117 08:09:59.766048    1798 server.go:213] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0117 08:09:59.794314    1798 server.go:412] Version: v1.17.2+k3s1
time="2020-01-17T08:09:59.910048382Z" level=info msg="Waiting for master node mak3r-k3os startup: nodes \"mak3r-k3os\" not found"
E0117 08:09:59.917595    1798 machine.go:331] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache: no such file or directory
time="2020-01-17T08:09:59.924432641Z" level=info msg="waiting for node mak3r-k3os: nodes \"mak3r-k3os\" not found"
I0117 08:09:59.930771    1798 server.go:639] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0117 08:09:59.933355    1798 container_manager_linux.go:271] container manager verified user specified cgroup-root exists: []
I0117 08:09:59.933593    1798 container_manager_linux.go:276] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd SystemCgroupsName: KubeletCgroupsName:/systemd ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I0117 08:09:59.934516    1798 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
I0117 08:09:59.934583    1798 container_manager_linux.go:311] Creating device plugin manager: true
I0117 08:09:59.935359    1798 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{kubelet.sock /var/lib/rancher/k3s/agent/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x26e7630 0x6702130 0x26e7e00 map[] map[] map[] map[] map[] 0x4011a3e330 [] 0x6702130}
I0117 08:09:59.935597    1798 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0117 08:09:59.952261    1798 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{{0 0} 0x6702130 10000000000 0x400ff74540 <nil> <nil> <nil> <nil> map[] 0x6702130}
I0117 08:09:59.952947    1798 kubelet.go:311] Watching apiserver
W0117 08:10:00.127019    1798 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock".
W0117 08:10:00.127703    1798 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock".
I0117 08:10:00.132806    1798 kuberuntime_manager.go:211] Container runtime containerd initialized, version: v1.3.3-k3s1, apiVersion: v1alpha2
W0117 08:10:00.134243    1798 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
E0117 08:10:00.134512    1798 plugins.go:599] Error initializing dynamic plugin prober: Error (re-)creating driver directory: mkdir /usr/libexec/kubernetes: read-only file system
I0117 08:10:00.135425    1798 server.go:1111] Started kubelet
I0117 08:10:00.137579    1798 server.go:143] Starting to listen on 0.0.0.0:10250
I0117 08:10:00.181888    1798 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
I0117 08:10:00.182459    1798 server.go:354] Adding debug handlers to kubelet server.
E0117 08:10:00.216862    1798 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
E0117 08:10:00.217298    1798 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
I0117 08:10:00.239987    1798 volume_manager.go:265] Starting Kubelet Volume Manager
I0117 08:10:00.255545    1798 desired_state_of_world_populator.go:138] Desired state populator starts to run
I0117 08:10:00.358832    1798 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
E0117 08:10:00.365044    1798 kubelet.go:2263] node "mak3r-k3os" not found
I0117 08:10:00.379130    1798 kubelet_node_status.go:70] Attempting to register node mak3r-k3os
E0117 08:10:00.515096    1798 kubelet.go:2263] node "mak3r-k3os" not found
I0117 08:10:00.579310    1798 kubelet_node_status.go:73] Successfully registered node mak3r-k3os
I0117 08:10:00.631157    1798 cpu_manager.go:173] [cpumanager] starting with none policy
I0117 08:10:00.631370    1798 cpu_manager.go:174] [cpumanager] reconciling every 10s
I0117 08:10:00.631475    1798 policy_none.go:43] [cpumanager] none policy: Start
time="2020-01-17T08:10:00.668555604Z" level=info msg="couldn't find node internal ip label on node mak3r-k3os"
time="2020-01-17T08:10:00.668772548Z" level=info msg="couldn't find node hostname label on node mak3r-k3os"
I0117 08:10:00.716827    1798 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
time="2020-01-17T08:10:00.828211363Z" level=info msg="Updated coredns node hosts entry [10.0.0.1 mak3r-k3os]"
time="2020-01-17T08:10:00.843726122Z" level=info msg="couldn't find node internal ip label on node mak3r-k3os"
time="2020-01-17T08:10:00.844012789Z" level=info msg="couldn't find node hostname label on node mak3r-k3os"
I0117 08:10:00.844212    1798 node_controller.go:419] Successfully initialized node mak3r-k3os with cloud provider
I0117 08:10:00.892379    1798 node.go:135] Successfully retrieved node IP: 10.0.0.1
I0117 08:10:00.892580    1798 server_others.go:146] Using iptables Proxier.
I0117 08:10:00.899590    1798 server.go:571] Version: v1.17.2+k3s1
I0117 08:10:00.934817    1798 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0117 08:10:00.967581    1798 conntrack.go:52] Setting nf_conntrack_max to 131072
I0117 08:10:00.994760    1798 conntrack.go:83] Setting conntrack hashsize to 32768
I0117 08:10:01.027623    1798 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0117 08:10:01.028249    1798 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0117 08:10:01.030882    1798 config.go:313] Starting service config controller
I0117 08:10:01.031025    1798 shared_informer.go:197] Waiting for caches to sync for service config
I0117 08:10:01.031171    1798 config.go:131] Starting endpoints config controller
I0117 08:10:01.031221    1798 shared_informer.go:197] Waiting for caches to sync for endpoints config
W0117 08:10:01.053550    1798 manager.go:577] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
I0117 08:10:01.061784    1798 plugin_manager.go:114] Starting Kubelet Plugin Manager
I0117 08:10:01.365554    1798 shared_informer.go:204] Caches are synced for service config 
I0117 08:10:01.366185    1798 shared_informer.go:204] Caches are synced for endpoints config 
time="2020-01-17T08:10:01.383737048Z" level=info msg="master role label has been set succesfully on node: mak3r-k3os"
time="2020-01-17T08:10:01.951486103Z" level=info msg="waiting for node mak3r-k3os CIDR not assigned yet"
I0117 08:10:02.363882    1798 status_manager.go:157] Starting to sync pod status with apiserver
I0117 08:10:02.364249    1798 kubelet.go:1820] Starting kubelet main sync loop.
E0117 08:10:02.364577    1798 kubelet.go:1844] skipping pod synchronization - PLEG is not healthy: pleg has yet to be successful
I0117 08:10:02.659409    1798 reconciler.go:156] Reconciler: start to sync state
time="2020-01-17T08:10:03.967629509Z" level=info msg="waiting for node mak3r-k3os CIDR not assigned yet"
time="2020-01-17T08:10:05.991021082Z" level=info msg="waiting for node mak3r-k3os CIDR not assigned yet"
time="2020-01-17T08:10:08.006552970Z" level=info msg="waiting for node mak3r-k3os CIDR not assigned yet"
I0117 08:10:08.066863    1798 range_allocator.go:82] Sending events to api server.
I0117 08:10:08.067947    1798 range_allocator.go:110] No Service CIDR provided. Skipping filtering out service addresses.
I0117 08:10:08.068041    1798 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
I0117 08:10:08.068219    1798 controllermanager.go:533] Started "nodeipam"
I0117 08:10:08.068821    1798 node_ipam_controller.go:162] Starting ipam controller
I0117 08:10:08.068916    1798 shared_informer.go:197] Waiting for caches to sync for node
W0117 08:10:08.121285    1798 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
E0117 08:10:08.121561    1798 plugins.go:599] Error initializing dynamic plugin prober: Error (re-)creating driver directory: mkdir /usr/libexec/kubernetes: read-only file system
I0117 08:10:08.122636    1798 controllermanager.go:533] Started "attachdetach"
I0117 08:10:08.123455    1798 attach_detach_controller.go:342] Starting attach detach controller
I0117 08:10:08.129320    1798 shared_informer.go:197] Waiting for caches to sync for attach detach
E0117 08:10:08.615034    1798 resource_quota_controller.go:160] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0117 08:10:08.615391    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I0117 08:10:08.615579    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0117 08:10:08.615738    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
I0117 08:10:08.615915    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I0117 08:10:08.616259    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0117 08:10:08.616403    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0117 08:10:08.617049    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
I0117 08:10:08.617228    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I0117 08:10:08.617381    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I0117 08:10:08.617519    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I0117 08:10:08.617670    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I0117 08:10:08.617833    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0117 08:10:08.619275    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
I0117 08:10:08.619562    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
I0117 08:10:08.619739    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
I0117 08:10:08.619897    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0117 08:10:08.620091    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
I0117 08:10:08.620905    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
I0117 08:10:08.621071    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0117 08:10:08.621271    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0117 08:10:08.621480    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0117 08:10:08.621658    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I0117 08:10:08.621828    1798 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I0117 08:10:08.621909    1798 controllermanager.go:533] Started "resourcequota"
I0117 08:10:08.626805    1798 resource_quota_controller.go:271] Starting resource quota controller
I0117 08:10:08.626954    1798 shared_informer.go:197] Waiting for caches to sync for resource quota
I0117 08:10:08.627074    1798 resource_quota_monitor.go:303] QuotaMonitor running
I0117 08:10:08.680063    1798 controllermanager.go:533] Started "deployment"
W0117 08:10:08.680422    1798 controllermanager.go:512] "bootstrapsigner" is disabled
W0117 08:10:08.680526    1798 core.go:246] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
W0117 08:10:08.680590    1798 controllermanager.go:525] Skipping "route"
I0117 08:10:08.681374    1798 deployment_controller.go:152] Starting deployment controller
I0117 08:10:08.681489    1798 shared_informer.go:197] Waiting for caches to sync for deployment
I0117 08:10:08.738276    1798 controllermanager.go:533] Started "replicationcontroller"
I0117 08:10:08.739253    1798 replica_set.go:180] Starting replicationcontroller controller
I0117 08:10:08.739346    1798 shared_informer.go:197] Waiting for caches to sync for ReplicationController
I0117 08:10:08.813009    1798 controllermanager.go:533] Started "serviceaccount"
I0117 08:10:08.813870    1798 serviceaccounts_controller.go:116] Starting service account controller
I0117 08:10:08.813983    1798 shared_informer.go:197] Waiting for caches to sync for service account
I0117 08:10:08.897204    1798 controllermanager.go:533] Started "job"
I0117 08:10:08.905831    1798 job_controller.go:143] Starting job controller
I0117 08:10:08.909503    1798 shared_informer.go:197] Waiting for caches to sync for job
I0117 08:10:08.976006    1798 controllermanager.go:533] Started "replicaset"
W0117 08:10:08.982904    1798 controllermanager.go:512] "tokencleaner" is disabled
I0117 08:10:08.977268    1798 replica_set.go:180] Starting replicaset controller
W0117 08:10:08.989131    1798 controllermanager.go:525] Skipping "endpointslice"
I0117 08:10:08.989308    1798 shared_informer.go:197] Waiting for caches to sync for ReplicaSet
I0117 08:10:09.060509    1798 controllermanager.go:533] Started "disruption"
I0117 08:10:09.063012    1798 disruption.go:330] Starting disruption controller
I0117 08:10:09.063146    1798 shared_informer.go:197] Waiting for caches to sync for disruption
I0117 08:10:09.093167    1798 controllermanager.go:533] Started "csrcleaner"
I0117 08:10:09.094237    1798 cleaner.go:81] Starting CSR cleaner controller
E0117 08:10:09.165790    1798 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0117 08:10:09.169423    1798 controllermanager.go:525] Skipping "service"
I0117 08:10:09.223369    1798 controllermanager.go:533] Started "pvc-protection"
W0117 08:10:09.223560    1798 controllermanager.go:525] Skipping "root-ca-cert-publisher"
I0117 08:10:09.224447    1798 pvc_protection_controller.go:100] Starting PVC protection controller
I0117 08:10:09.224556    1798 shared_informer.go:197] Waiting for caches to sync for PVC protection
I0117 08:10:09.293217    1798 controllermanager.go:533] Started "podgc"
I0117 08:10:09.295757    1798 gc_controller.go:88] Starting GC controller
I0117 08:10:09.295916    1798 shared_informer.go:197] Waiting for caches to sync for GC
I0117 08:10:09.372537    1798 controllermanager.go:533] Started "persistentvolume-binder"
I0117 08:10:09.373467    1798 pv_controller_base.go:294] Starting persistent volume controller
I0117 08:10:09.373558    1798 shared_informer.go:197] Waiting for caches to sync for persistent volume
I0117 08:10:09.430553    1798 controllermanager.go:533] Started "persistentvolume-expander"
I0117 08:10:09.431585    1798 expand_controller.go:319] Starting expand controller
I0117 08:10:09.431678    1798 shared_informer.go:197] Waiting for caches to sync for expand
I0117 08:10:09.491610    1798 controllermanager.go:533] Started "clusterrole-aggregation"
I0117 08:10:09.492595    1798 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
I0117 08:10:09.492695    1798 shared_informer.go:197] Waiting for caches to sync for ClusterRoleAggregator
I0117 08:10:09.551981    1798 controllermanager.go:533] Started "statefulset"
I0117 08:10:09.558432    1798 stateful_set.go:145] Starting stateful set controller
I0117 08:10:09.564429    1798 shared_informer.go:197] Waiting for caches to sync for stateful set
I0117 08:10:09.590013    1798 controllermanager.go:533] Started "csrapproving"
I0117 08:10:09.590372    1798 certificate_controller.go:118] Starting certificate controller "csrapproving"
I0117 08:10:09.593307    1798 shared_informer.go:197] Waiting for caches to sync for certificate-csrapproving
I0117 08:10:09.672535    1798 node_lifecycle_controller.go:77] Sending events to api server
E0117 08:10:09.675468    1798 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided
W0117 08:10:09.675676    1798 controllermanager.go:525] Skipping "cloud-node-lifecycle"
I0117 08:10:09.883220    1798 controllermanager.go:533] Started "horizontalpodautoscaling"
I0117 08:10:09.890832    1798 horizontal.go:156] Starting HPA controller
I0117 08:10:09.898849    1798 shared_informer.go:197] Waiting for caches to sync for HPA
I0117 08:10:09.969161    1798 controllermanager.go:533] Started "cronjob"
I0117 08:10:09.969800    1798 cronjob_controller.go:97] Starting CronJob Manager
I0117 08:10:10.021308    1798 node_lifecycle_controller.go:388] Sending events to api server.
I0117 08:10:10.022583    1798 node_lifecycle_controller.go:423] Controller is using taint based evictions.
I0117 08:10:10.026569    1798 taint_manager.go:162] Sending events to api server.
I0117 08:10:10.029174    1798 node_lifecycle_controller.go:520] Controller will reconcile labels.
I0117 08:10:10.032453    1798 controllermanager.go:533] Started "nodelifecycle"
I0117 08:10:10.033440    1798 node_lifecycle_controller.go:554] Starting node controller
I0117 08:10:10.033557    1798 shared_informer.go:197] Waiting for caches to sync for taint
time="2020-01-17T08:10:10.061415654Z" level=info msg="waiting for node mak3r-k3os CIDR not assigned yet"
I0117 08:10:10.188484    1798 controllermanager.go:533] Started "pv-protection"
I0117 08:10:10.189763    1798 pv_protection_controller.go:81] Starting PV protection controller
I0117 08:10:10.189889    1798 shared_informer.go:197] Waiting for caches to sync for PV protection
I0117 08:10:10.265125    1798 controllermanager.go:533] Started "endpoint"
I0117 08:10:10.269340    1798 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0117 08:10:10.270320    1798 endpoints_controller.go:181] Starting endpoint controller
I0117 08:10:10.270448    1798 shared_informer.go:197] Waiting for caches to sync for endpoint
I0117 08:10:10.407245    1798 shared_informer.go:204] Caches are synced for PV protection 
I0117 08:10:10.414126    1798 shared_informer.go:204] Caches are synced for namespace 
I0117 08:10:10.429662    1798 shared_informer.go:204] Caches are synced for service account 
W0117 08:10:10.497324    1798 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="mak3r-k3os" does not exist
I0117 08:10:10.526341    1798 shared_informer.go:204] Caches are synced for PVC protection 
I0117 08:10:10.527428    1798 shared_informer.go:204] Caches are synced for job 
I0117 08:10:10.548075    1798 shared_informer.go:204] Caches are synced for attach detach 
I0117 08:10:10.550849    1798 shared_informer.go:204] Caches are synced for ReplicationController 
I0117 08:10:10.551174    1798 shared_informer.go:204] Caches are synced for taint 
I0117 08:10:10.551439    1798 shared_informer.go:204] Caches are synced for expand 
I0117 08:10:10.551499    1798 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
W0117 08:10:10.551678    1798 node_lifecycle_controller.go:1058] Missing timestamp for Node mak3r-k3os. Assuming now as a timestamp.
I0117 08:10:10.551862    1798 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0117 08:10:10.566354    1798 taint_manager.go:186] Starting NoExecuteTaintManager
I0117 08:10:10.567242    1798 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"mak3r-k3os", UID:"35df266a-484a-4606-91ba-cc9866bb8697", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node mak3r-k3os event: Registered Node mak3r-k3os in Controller
I0117 08:10:10.568366    1798 shared_informer.go:204] Caches are synced for daemon sets 
I0117 08:10:10.584867    1798 shared_informer.go:204] Caches are synced for TTL 
I0117 08:10:10.588901    1798 shared_informer.go:204] Caches are synced for persistent volume 
I0117 08:10:10.589993    1798 shared_informer.go:204] Caches are synced for stateful set 
I0117 08:10:10.591881    1798 shared_informer.go:204] Caches are synced for node 
I0117 08:10:10.592078    1798 range_allocator.go:172] Starting range CIDR allocator
I0117 08:10:10.592129    1798 shared_informer.go:197] Waiting for caches to sync for cidrallocator
I0117 08:10:10.592179    1798 shared_informer.go:204] Caches are synced for cidrallocator 
E0117 08:10:10.596256    1798 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0117 08:10:10.596919    1798 shared_informer.go:197] Waiting for caches to sync for resource quota
I0117 08:10:10.597942    1798 shared_informer.go:204] Caches are synced for certificate-csrsigning 
I0117 08:10:10.598269    1798 shared_informer.go:204] Caches are synced for GC 
I0117 08:10:10.607111    1798 shared_informer.go:204] Caches are synced for HPA 
I0117 08:10:10.607563    1798 shared_informer.go:204] Caches are synced for certificate-csrapproving 
I0117 08:10:10.675267    1798 range_allocator.go:373] Set node mak3r-k3os PodCIDR to [10.42.0.0/24]
I0117 08:10:10.682906    1798 shared_informer.go:204] Caches are synced for endpoint 
I0117 08:10:10.704912    1798 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
I0117 08:10:10.713533    1798 kuberuntime_manager.go:981] updating runtime config through cri with podcidr 10.42.0.0/24
I0117 08:10:10.718971    1798 kubelet_network.go:77] Setting Pod CIDR:  -> 10.42.0.0/24
I0117 08:10:10.864062    1798 shared_informer.go:204] Caches are synced for garbage collector 
I0117 08:10:10.864238    1798 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0117 08:10:10.873012    1798 shared_informer.go:204] Caches are synced for disruption 
I0117 08:10:10.873162    1798 disruption.go:338] Sending events to api server.
I0117 08:10:10.873948    1798 shared_informer.go:204] Caches are synced for garbage collector 
I0117 08:10:10.896370    1798 shared_informer.go:204] Caches are synced for ReplicaSet 
I0117 08:10:10.904464    1798 shared_informer.go:204] Caches are synced for deployment 
I0117 08:10:10.911614    1798 shared_informer.go:204] Caches are synced for resource quota 
I0117 08:10:10.938719    1798 shared_informer.go:204] Caches are synced for resource quota 
I0117 08:10:11.242464    1798 event.go:281] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"helm-install-traefik", UID:"b1cb4897-8ad5-4e4c-bfe4-6729ee8c8346", APIVersion:"batch/v1", ResourceVersion:"236", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: helm-install-traefik-qphtv
I0117 08:10:11.589507    1798 controller.go:606] quota admission added evaluator for: replicasets.apps
I0117 08:10:11.590642    1798 controller.go:606] quota admission added evaluator for: events.events.k8s.io
I0117 08:10:11.598816    1798 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "helm-traefik-token-hhc5g" (UniqueName: "kubernetes.io/secret/3b66f5a0-bf7f-4007-bb24-e62938946328-helm-traefik-token-hhc5g") pod "helm-install-traefik-qphtv" (UID: "3b66f5a0-bf7f-4007-bb24-e62938946328") 
E0117 08:10:11.665065    1798 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I0117 08:10:11.887250    1798 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"k3os-system", Name:"system-upgrade-controller", UID:"db6571e4-5308-4f9c-90c2-1c6eafa25ae5", APIVersion:"apps/v1", ResourceVersion:"228", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set system-upgrade-controller-84b4b86fd7 to 1
I0117 08:10:11.906460    1798 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"local-path-provisioner", UID:"8049577b-08b9-484e-9239-d42ea3e40ca8", APIVersion:"apps/v1", ResourceVersion:"187", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set local-path-provisioner-58fb86bdfd to 1
I0117 08:10:11.908819    1798 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"42fd68d0-0d3f-41ae-ae69-31d1f67cad78", APIVersion:"apps/v1", ResourceVersion:"175", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-d798c9dd to 1
I0117 08:10:11.909076    1798 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"metrics-server", UID:"d6d119ae-6a5d-499c-9d99-a9e4c0dd7e25", APIVersion:"apps/v1", ResourceVersion:"203", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set metrics-server-6d684c7b5 to 1
E0117 08:10:12.105323    1798 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0117 08:10:12.321771    1798 flannel.go:92] Determining IP address of default interface
I0117 08:10:12.355482    1798 flannel.go:105] Using interface with name eth0 and address 10.0.0.1
I0117 08:10:12.444231    1798 kube.go:117] Waiting 10m0s for node controller to sync
I0117 08:10:12.445231    1798 kube.go:300] Starting kube subnet manager
I0117 08:10:12.611831    1798 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-d798c9dd", UID:"21193415-cd92-4884-9f5e-520e0b860051", APIVersion:"apps/v1", ResourceVersion:"342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-d798c9dd-27hq6
I0117 08:10:12.646408    1798 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"local-path-provisioner-58fb86bdfd", UID:"9fa4f5e7-8088-42b1-8eae-3253e648b44b", APIVersion:"apps/v1", ResourceVersion:"344", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: local-path-provisioner-58fb86bdfd-kpmdt
I0117 08:10:12.767937    1798 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-6d684c7b5", UID:"4367b5bc-c000-4ac1-ba65-e79088143244", APIVersion:"apps/v1", ResourceVersion:"343", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-6d684c7b5-slpr2
I0117 08:10:12.802516    1798 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"k3os-system", Name:"system-upgrade-controller-84b4b86fd7", UID:"0e4df4b2-15fa-455b-a33a-256111c69202", APIVersion:"apps/v1", ResourceVersion:"345", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: system-upgrade-controller-84b4b86fd7-zwh2c
time="2020-01-17T08:10:12.818275263Z" level=info msg="labels have been set successfully on node: mak3r-k3os"
I0117 08:10:12.946629    1798 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ffa4c5b8-4d34-4c47-8972-dceb2d43eab0-config-volume") pod "local-path-provisioner-58fb86bdfd-kpmdt" (UID: "ffa4c5b8-4d34-4c47-8972-dceb2d43eab0") 
I0117 08:10:12.947006    1798 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "local-path-provisioner-service-account-token-s54w8" (UniqueName: "kubernetes.io/secret/ffa4c5b8-4d34-4c47-8972-dceb2d43eab0-local-path-provisioner-service-account-token-s54w8") pod "local-path-provisioner-58fb86bdfd-kpmdt" (UID: "ffa4c5b8-4d34-4c47-8972-dceb2d43eab0") 
I0117 08:10:13.132023    1798 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "metrics-server-token-6jb2j" (UniqueName: "kubernetes.io/secret/e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd-metrics-server-token-6jb2j") pod "metrics-server-6d684c7b5-slpr2" (UID: "e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd") 
I0117 08:10:13.164995    1798 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1d88747c-2351-49a7-8716-d0342a022f46-config-volume") pod "coredns-d798c9dd-27hq6" (UID: "1d88747c-2351-49a7-8716-d0342a022f46") 
I0117 08:10:13.165310    1798 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-cb8gz" (UniqueName: "kubernetes.io/secret/1d88747c-2351-49a7-8716-d0342a022f46-coredns-token-cb8gz") pod "coredns-d798c9dd-27hq6" (UID: "1d88747c-2351-49a7-8716-d0342a022f46") 
I0117 08:10:13.165527    1798 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-dir" (UniqueName: "kubernetes.io/empty-dir/e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd-tmp-dir") pod "metrics-server-6d684c7b5-slpr2" (UID: "e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd") 
I0117 08:10:13.514310    1798 kube.go:124] Node controller sync successful
I0117 08:10:13.515115    1798 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I0117 08:10:13.564175    1798 trace.go:116] Trace[1937566763]: "Create" url:/apis/events.k8s.io/v1beta1/namespaces/kube-system/events,user-agent:k3s/v1.17.2+k3s1 (linux/arm64) kubernetes/cdab19b/scheduler,client:127.0.0.1 (started: 2020-01-17 08:10:13.028593096 +0000 UTC m=+48.614558584) (total time: 535.365259ms):
Trace[1937566763]: [535.053703ms] [526.967092ms] Object stored in database
I0117 08:10:13.575644    1798 trace.go:116] Trace[1526483420]: "Update" url:/apis/apps/v1/namespaces/kube-system/replicasets/local-path-provisioner-58fb86bdfd/status,user-agent:k3s/v1.17.2+k3s1 (linux/arm64) kubernetes/cdab19b/system:serviceaccount:kube-system:replicaset-controller,client:127.0.0.1 (started: 2020-01-17 08:10:13.039421726 +0000 UTC m=+48.625387195) (total time: 536.040296ms):
Trace[1526483420]: [535.629036ms] [470.984407ms] Object stored in database
I0117 08:10:13.707294    1798 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/empty-dir/50581e1a-24db-4fbb-8a20-b83aeaf2e91a-tmp") pod "system-upgrade-controller-84b4b86fd7-zwh2c" (UID: "50581e1a-24db-4fbb-8a20-b83aeaf2e91a") 
I0117 08:10:13.740059    1798 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ssl" (UniqueName: "kubernetes.io/host-path/50581e1a-24db-4fbb-8a20-b83aeaf2e91a-etc-ssl") pod "system-upgrade-controller-84b4b86fd7-zwh2c" (UID: "50581e1a-24db-4fbb-8a20-b83aeaf2e91a") 
I0117 08:10:13.740488    1798 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k3os-upgrade-token-xmlq8" (UniqueName: "kubernetes.io/secret/50581e1a-24db-4fbb-8a20-b83aeaf2e91a-k3os-upgrade-token-xmlq8") pod "system-upgrade-controller-84b4b86fd7-zwh2c" (UID: "50581e1a-24db-4fbb-8a20-b83aeaf2e91a") 
I0117 08:10:14.249879    1798 trace.go:116] Trace[1000503490]: "GuaranteedUpdate etcd3" type:*apps.Deployment (started: 2020-01-17 08:10:13.730567374 +0000 UTC m=+49.316533009) (total time: 518.875407ms):
Trace[1000503490]: [517.799852ms] [515.055815ms] Transaction committed
I0117 08:10:14.251399    1798 trace.go:116] Trace[1291254693]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/coredns/status,user-agent:k3s/v1.17.2+k3s1 (linux/arm64) kubernetes/cdab19b/system:serviceaccount:kube-system:deployment-controller,client:127.0.0.1 (started: 2020-01-17 08:10:13.715549337 +0000 UTC m=+49.301514824) (total time: 535.583259ms):
Trace[1291254693]: [534.912963ms] [533.867093ms] Object stored in database
I0117 08:10:14.288766    1798 trace.go:116] Trace[724243005]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-01-17 08:10:13.497093744 +0000 UTC m=+49.083059306) (total time: 791.429795ms):
Trace[724243005]: [192.564536ms] [192.564536ms] initial value restored
Trace[724243005]: [395.863351ms] [203.298815ms] Transaction prepared
Trace[724243005]: [791.317536ms] [395.454185ms] Transaction committed
I0117 08:10:14.413419    1798 network_policy_controller.go:146] Starting network policy controller
I0117 08:10:14.575749    1798 flannel.go:78] Wrote subnet file to /run/flannel/subnet.env
I0117 08:10:14.607573    1798 flannel.go:82] Running backend.
I0117 08:10:14.607874    1798 vxlan_network.go:60] watching for new subnet leases
I0117 08:10:15.123013    1798 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0117 08:10:15.123271    1798 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0117 08:10:15.171760    1798 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0117 08:10:15.179164    1798 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0117 08:10:15.330897    1798 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
E0117 08:10:15.372021    1798 memcache.go:199] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0117 08:10:15.440594    1798 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
E0117 08:10:15.457444    1798 memcache.go:111] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0117 08:10:15.557502    1798 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0117 08:10:15.586224    1798 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0117 08:10:15.616073    1798 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN
I0117 08:10:15.773539    1798 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
I0117 08:10:15.979171    1798 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0117 08:10:16.013363    1798 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0117 08:10:16.285909    1798 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0117 08:10:16.561993    1798 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN
I0117 08:10:16.741711    1798 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
W0117 08:10:16.751249    1798 handler_proxy.go:97] no RequestInfo found in the context
E0117 08:10:16.751967    1798 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0117 08:10:16.752216    1798 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0117 08:10:16.785966    1798 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
E0117 08:10:17.127753    1798 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:17.128502    1798 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "helm-install-traefik-qphtv_kube-system(3b66f5a0-bf7f-4007-bb24-e62938946328)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:17.128624    1798 kuberuntime_manager.go:729] createPodSandbox for pod "helm-install-traefik-qphtv_kube-system(3b66f5a0-bf7f-4007-bb24-e62938946328)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:17.129293    1798 pod_workers.go:191] Error syncing pod 3b66f5a0-bf7f-4007-bb24-e62938946328 ("helm-install-traefik-qphtv_kube-system(3b66f5a0-bf7f-4007-bb24-e62938946328)"), skipping: failed to "CreatePodSandbox" for "helm-install-traefik-qphtv_kube-system(3b66f5a0-bf7f-4007-bb24-e62938946328)" with CreatePodSandboxError: "CreatePodSandbox for pod \"helm-install-traefik-qphtv_kube-system(3b66f5a0-bf7f-4007-bb24-e62938946328)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
E0117 08:10:17.132178    1798 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:17.132468    1798 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-d798c9dd-27hq6_kube-system(1d88747c-2351-49a7-8716-d0342a022f46)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:17.132674    1798 kuberuntime_manager.go:729] createPodSandbox for pod "coredns-d798c9dd-27hq6_kube-system(1d88747c-2351-49a7-8716-d0342a022f46)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:17.133038    1798 pod_workers.go:191] Error syncing pod 1d88747c-2351-49a7-8716-d0342a022f46 ("coredns-d798c9dd-27hq6_kube-system(1d88747c-2351-49a7-8716-d0342a022f46)"), skipping: failed to "CreatePodSandbox" for "coredns-d798c9dd-27hq6_kube-system(1d88747c-2351-49a7-8716-d0342a022f46)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-d798c9dd-27hq6_kube-system(1d88747c-2351-49a7-8716-d0342a022f46)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
E0117 08:10:17.138423    1798 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:17.138991    1798 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-6d684c7b5-slpr2_kube-system(e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:17.139144    1798 kuberuntime_manager.go:729] createPodSandbox for pod "metrics-server-6d684c7b5-slpr2_kube-system(e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:17.139476    1798 pod_workers.go:191] Error syncing pod e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd ("metrics-server-6d684c7b5-slpr2_kube-system(e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd)"), skipping: failed to "CreatePodSandbox" for "metrics-server-6d684c7b5-slpr2_kube-system(e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-6d684c7b5-slpr2_kube-system(e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
E0117 08:10:17.147615    1798 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:17.147956    1798 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-58fb86bdfd-kpmdt_kube-system(ffa4c5b8-4d34-4c47-8972-dceb2d43eab0)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:17.148061    1798 kuberuntime_manager.go:729] createPodSandbox for pod "local-path-provisioner-58fb86bdfd-kpmdt_kube-system(ffa4c5b8-4d34-4c47-8972-dceb2d43eab0)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:17.148401    1798 pod_workers.go:191] Error syncing pod ffa4c5b8-4d34-4c47-8972-dceb2d43eab0 ("local-path-provisioner-58fb86bdfd-kpmdt_kube-system(ffa4c5b8-4d34-4c47-8972-dceb2d43eab0)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-58fb86bdfd-kpmdt_kube-system(ffa4c5b8-4d34-4c47-8972-dceb2d43eab0)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-58fb86bdfd-kpmdt_kube-system(ffa4c5b8-4d34-4c47-8972-dceb2d43eab0)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
E0117 08:10:17.152861    1798 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:17.153173    1798 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "system-upgrade-controller-84b4b86fd7-zwh2c_k3os-system(50581e1a-24db-4fbb-8a20-b83aeaf2e91a)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:17.153348    1798 kuberuntime_manager.go:729] createPodSandbox for pod "system-upgrade-controller-84b4b86fd7-zwh2c_k3os-system(50581e1a-24db-4fbb-8a20-b83aeaf2e91a)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:17.153855    1798 pod_workers.go:191] Error syncing pod 50581e1a-24db-4fbb-8a20-b83aeaf2e91a ("system-upgrade-controller-84b4b86fd7-zwh2c_k3os-system(50581e1a-24db-4fbb-8a20-b83aeaf2e91a)"), skipping: failed to "CreatePodSandbox" for "system-upgrade-controller-84b4b86fd7-zwh2c_k3os-system(50581e1a-24db-4fbb-8a20-b83aeaf2e91a)" with CreatePodSandboxError: "CreatePodSandbox for pod \"system-upgrade-controller-84b4b86fd7-zwh2c_k3os-system(50581e1a-24db-4fbb-8a20-b83aeaf2e91a)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:17.641647168Z" level=error msg="failed to process config: failed to process /var/lib/rancher/k3s/server/manifests/system-upgrade-plans/k3os-latest.yaml: the server could not find the requested resource"
time="2020-01-17T08:10:33.728107362Z" level=error msg="failed to process config: failed to process /var/lib/rancher/k3s/server/manifests/system-upgrade-plans/k3os-latest.yaml: the server could not find the requested resource"
E0117 08:10:34.391431    1798 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:34.392724    1798 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "system-upgrade-controller-84b4b86fd7-zwh2c_k3os-system(50581e1a-24db-4fbb-8a20-b83aeaf2e91a)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:34.392881    1798 kuberuntime_manager.go:729] createPodSandbox for pod "system-upgrade-controller-84b4b86fd7-zwh2c_k3os-system(50581e1a-24db-4fbb-8a20-b83aeaf2e91a)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:34.393575    1798 pod_workers.go:191] Error syncing pod 50581e1a-24db-4fbb-8a20-b83aeaf2e91a ("system-upgrade-controller-84b4b86fd7-zwh2c_k3os-system(50581e1a-24db-4fbb-8a20-b83aeaf2e91a)"), skipping: failed to "CreatePodSandbox" for "system-upgrade-controller-84b4b86fd7-zwh2c_k3os-system(50581e1a-24db-4fbb-8a20-b83aeaf2e91a)" with CreatePodSandboxError: "CreatePodSandbox for pod \"system-upgrade-controller-84b4b86fd7-zwh2c_k3os-system(50581e1a-24db-4fbb-8a20-b83aeaf2e91a)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
E0117 08:10:34.400470    1798 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:34.400694    1798 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-6d684c7b5-slpr2_kube-system(e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:34.400789    1798 kuberuntime_manager.go:729] createPodSandbox for pod "metrics-server-6d684c7b5-slpr2_kube-system(e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:34.401100    1798 pod_workers.go:191] Error syncing pod e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd ("metrics-server-6d684c7b5-slpr2_kube-system(e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd)"), skipping: failed to "CreatePodSandbox" for "metrics-server-6d684c7b5-slpr2_kube-system(e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-6d684c7b5-slpr2_kube-system(e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
E0117 08:10:34.406045    1798 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:34.406749    1798 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-d798c9dd-27hq6_kube-system(1d88747c-2351-49a7-8716-d0342a022f46)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:34.406868    1798 kuberuntime_manager.go:729] createPodSandbox for pod "coredns-d798c9dd-27hq6_kube-system(1d88747c-2351-49a7-8716-d0342a022f46)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:34.407629    1798 pod_workers.go:191] Error syncing pod 1d88747c-2351-49a7-8716-d0342a022f46 ("coredns-d798c9dd-27hq6_kube-system(1d88747c-2351-49a7-8716-d0342a022f46)"), skipping: failed to "CreatePodSandbox" for "coredns-d798c9dd-27hq6_kube-system(1d88747c-2351-49a7-8716-d0342a022f46)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-d798c9dd-27hq6_kube-system(1d88747c-2351-49a7-8716-d0342a022f46)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
E0117 08:10:34.412739    1798 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:34.413127    1798 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "helm-install-traefik-qphtv_kube-system(3b66f5a0-bf7f-4007-bb24-e62938946328)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:34.413282    1798 kuberuntime_manager.go:729] createPodSandbox for pod "helm-install-traefik-qphtv_kube-system(3b66f5a0-bf7f-4007-bb24-e62938946328)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:34.413595    1798 pod_workers.go:191] Error syncing pod 3b66f5a0-bf7f-4007-bb24-e62938946328 ("helm-install-traefik-qphtv_kube-system(3b66f5a0-bf7f-4007-bb24-e62938946328)"), skipping: failed to "CreatePodSandbox" for "helm-install-traefik-qphtv_kube-system(3b66f5a0-bf7f-4007-bb24-e62938946328)" with CreatePodSandboxError: "CreatePodSandbox for pod \"helm-install-traefik-qphtv_kube-system(3b66f5a0-bf7f-4007-bb24-e62938946328)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
E0117 08:10:34.417646    1798 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:34.417946    1798 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-58fb86bdfd-kpmdt_kube-system(ffa4c5b8-4d34-4c47-8972-dceb2d43eab0)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:34.418033    1798 kuberuntime_manager.go:729] createPodSandbox for pod "local-path-provisioner-58fb86bdfd-kpmdt_kube-system(ffa4c5b8-4d34-4c47-8972-dceb2d43eab0)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:34.418384    1798 pod_workers.go:191] Error syncing pod ffa4c5b8-4d34-4c47-8972-dceb2d43eab0 ("local-path-provisioner-58fb86bdfd-kpmdt_kube-system(ffa4c5b8-4d34-4c47-8972-dceb2d43eab0)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-58fb86bdfd-kpmdt_kube-system(ffa4c5b8-4d34-4c47-8972-dceb2d43eab0)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-58fb86bdfd-kpmdt_kube-system(ffa4c5b8-4d34-4c47-8972-dceb2d43eab0)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
E0117 08:10:41.318353    1798 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0117 08:10:43.179405    1798 garbagecollector.go:639] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
time="2020-01-17T08:10:49.781520093Z" level=error msg="failed to process config: failed to process /var/lib/rancher/k3s/server/manifests/system-upgrade-plans/k3os-latest.yaml: the server could not find the requested resource"
E0117 08:10:51.396035    1798 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:51.396517    1798 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-d798c9dd-27hq6_kube-system(1d88747c-2351-49a7-8716-d0342a022f46)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:51.396654    1798 kuberuntime_manager.go:729] createPodSandbox for pod "coredns-d798c9dd-27hq6_kube-system(1d88747c-2351-49a7-8716-d0342a022f46)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:51.397332    1798 pod_workers.go:191] Error syncing pod 1d88747c-2351-49a7-8716-d0342a022f46 ("coredns-d798c9dd-27hq6_kube-system(1d88747c-2351-49a7-8716-d0342a022f46)"), skipping: failed to "CreatePodSandbox" for "coredns-d798c9dd-27hq6_kube-system(1d88747c-2351-49a7-8716-d0342a022f46)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-d798c9dd-27hq6_kube-system(1d88747c-2351-49a7-8716-d0342a022f46)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
E0117 08:10:51.402208    1798 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:51.402446    1798 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "system-upgrade-controller-84b4b86fd7-zwh2c_k3os-system(50581e1a-24db-4fbb-8a20-b83aeaf2e91a)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:51.402538    1798 kuberuntime_manager.go:729] createPodSandbox for pod "system-upgrade-controller-84b4b86fd7-zwh2c_k3os-system(50581e1a-24db-4fbb-8a20-b83aeaf2e91a)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:51.402897    1798 pod_workers.go:191] Error syncing pod 50581e1a-24db-4fbb-8a20-b83aeaf2e91a ("system-upgrade-controller-84b4b86fd7-zwh2c_k3os-system(50581e1a-24db-4fbb-8a20-b83aeaf2e91a)"), skipping: failed to "CreatePodSandbox" for "system-upgrade-controller-84b4b86fd7-zwh2c_k3os-system(50581e1a-24db-4fbb-8a20-b83aeaf2e91a)" with CreatePodSandboxError: "CreatePodSandbox for pod \"system-upgrade-controller-84b4b86fd7-zwh2c_k3os-system(50581e1a-24db-4fbb-8a20-b83aeaf2e91a)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
E0117 08:10:51.411612    1798 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:51.411942    1798 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "helm-install-traefik-qphtv_kube-system(3b66f5a0-bf7f-4007-bb24-e62938946328)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:51.412037    1798 kuberuntime_manager.go:729] createPodSandbox for pod "helm-install-traefik-qphtv_kube-system(3b66f5a0-bf7f-4007-bb24-e62938946328)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:51.412437    1798 pod_workers.go:191] Error syncing pod 3b66f5a0-bf7f-4007-bb24-e62938946328 ("helm-install-traefik-qphtv_kube-system(3b66f5a0-bf7f-4007-bb24-e62938946328)"), skipping: failed to "CreatePodSandbox" for "helm-install-traefik-qphtv_kube-system(3b66f5a0-bf7f-4007-bb24-e62938946328)" with CreatePodSandboxError: "CreatePodSandbox for pod \"helm-install-traefik-qphtv_kube-system(3b66f5a0-bf7f-4007-bb24-e62938946328)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
E0117 08:10:51.417615    1798 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:51.417850    1798 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "local-path-provisioner-58fb86bdfd-kpmdt_kube-system(ffa4c5b8-4d34-4c47-8972-dceb2d43eab0)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:51.417936    1798 kuberuntime_manager.go:729] createPodSandbox for pod "local-path-provisioner-58fb86bdfd-kpmdt_kube-system(ffa4c5b8-4d34-4c47-8972-dceb2d43eab0)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:51.418345    1798 pod_workers.go:191] Error syncing pod ffa4c5b8-4d34-4c47-8972-dceb2d43eab0 ("local-path-provisioner-58fb86bdfd-kpmdt_kube-system(ffa4c5b8-4d34-4c47-8972-dceb2d43eab0)"), skipping: failed to "CreatePodSandbox" for "local-path-provisioner-58fb86bdfd-kpmdt_kube-system(ffa4c5b8-4d34-4c47-8972-dceb2d43eab0)" with CreatePodSandboxError: "CreatePodSandbox for pod \"local-path-provisioner-58fb86bdfd-kpmdt_kube-system(ffa4c5b8-4d34-4c47-8972-dceb2d43eab0)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
E0117 08:10:51.423192    1798 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:51.423468    1798 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "metrics-server-6d684c7b5-slpr2_kube-system(e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:51.423559    1798 kuberuntime_manager.go:729] createPodSandbox for pod "metrics-server-6d684c7b5-slpr2_kube-system(e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd)" failed: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
E0117 08:10:51.423899    1798 pod_workers.go:191] Error syncing pod e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd ("metrics-server-6d684c7b5-slpr2_kube-system(e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd)"), skipping: failed to "CreatePodSandbox" for "metrics-server-6d684c7b5-slpr2_kube-system(e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd)" with CreatePodSandboxError: "CreatePodSandbox for pod \"metrics-server-6d684c7b5-slpr2_kube-system(e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd)\" failed: rpc error: code = Unknown desc = failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"

containerd.log

time="2020-01-17T08:09:58.538480290Z" level=info msg="starting containerd" revision= version=v1.3.3-k3s1
time="2020-01-17T08:09:58.849500883Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2020-01-17T08:09:58.850437309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2020-01-17T08:09:58.851032623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2020-01-17T08:09:58.852228772Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2020-01-17T08:09:58.852635846Z" level=info msg="metadata content store policy set" policy=shared
time="2020-01-17T08:09:58.877199660Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2020-01-17T08:09:58.877460253Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2020-01-17T08:09:58.877808716Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2020-01-17T08:09:58.877955253Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2020-01-17T08:09:58.878118512Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
time="2020-01-17T08:09:58.878255586Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
time="2020-01-17T08:09:58.878393586Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
time="2020-01-17T08:09:58.878541475Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
time="2020-01-17T08:09:58.878654179Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
time="2020-01-17T08:09:58.878763327Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
time="2020-01-17T08:09:58.879677716Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
time="2020-01-17T08:09:58.880519068Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
time="2020-01-17T08:09:58.883733734Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
time="2020-01-17T08:09:58.883971253Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
time="2020-01-17T08:09:58.884301272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
time="2020-01-17T08:09:58.884431457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
time="2020-01-17T08:09:58.884538309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
time="2020-01-17T08:09:58.884638957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
time="2020-01-17T08:09:58.884737012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
time="2020-01-17T08:09:58.884841179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
time="2020-01-17T08:09:58.884939512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
time="2020-01-17T08:09:58.885037512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
time="2020-01-17T08:09:58.885138142Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
time="2020-01-17T08:09:58.885934920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
time="2020-01-17T08:09:58.886196272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
time="2020-01-17T08:09:58.886317994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
time="2020-01-17T08:09:58.886416475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
time="2020-01-17T08:09:58.887215197Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Engine: PodAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false} Runtimes:map[runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false}] NoPivot:false} CniConfig:{NetworkPluginBinDir:/var/lib/rancher/k3s/data/81d6cc7e694228aa5c7660807dc235d0023eba04b407935a202018e9b905f655/bin NetworkPluginConfDir:/var/lib/rancher/k3s/agent/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[]} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SandboxImage:docker.io/rancher/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false} ContainerdRootDir:/var/lib/rancher/k3s/agent/containerd ContainerdEndpoint:/run/k3s/containerd/containerd.sock RootDir:/var/lib/rancher/k3s/agent/containerd/io.containerd.grpc.v1.cri StateDir:/run/k3s/containerd/io.containerd.grpc.v1.cri}"
time="2020-01-17T08:09:58.887589790Z" level=info msg="Connect containerd service"
time="2020-01-17T08:09:58.888835031Z" level=info msg="Get image filesystem path \"/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs\""
time="2020-01-17T08:09:58.895515938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
time="2020-01-17T08:09:58.895853642Z" level=info msg="Start subscribing containerd event"
time="2020-01-17T08:09:58.896602957Z" level=info msg="Start recovering state"
time="2020-01-17T08:09:58.897034679Z" level=info msg=serving... address=/run/k3s/containerd/containerd.sock.ttrpc
time="2020-01-17T08:09:58.897392920Z" level=info msg=serving... address=/run/k3s/containerd/containerd.sock
time="2020-01-17T08:09:58.897527327Z" level=info msg="Start event monitor"
time="2020-01-17T08:09:58.897644438Z" level=info msg="Start snapshots syncer"
time="2020-01-17T08:09:58.897715975Z" level=info msg="Start streaming server"
time="2020-01-17T08:09:58.897790086Z" level=info msg="containerd successfully booted in 0.364291s"
time="2020-01-17T08:10:10.714966709Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
time="2020-01-17T08:10:12.099234690Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:helm-install-traefik-qphtv,Uid:3b66f5a0-bf7f-4007-bb24-e62938946328,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:10:13.697638633Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:local-path-provisioner-58fb86bdfd-kpmdt,Uid:ffa4c5b8-4d34-4c47-8972-dceb2d43eab0,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:10:13.741603022Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-d798c9dd-27hq6,Uid:1d88747c-2351-49a7-8716-d0342a022f46,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:10:13.974670077Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:metrics-server-6d684c7b5-slpr2,Uid:e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:10:14.541056836Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:system-upgrade-controller-84b4b86fd7-zwh2c,Uid:50581e1a-24db-4fbb-8a20-b83aeaf2e91a,Namespace:k3os-system,Attempt:0,}"
time="2020-01-17T08:10:17.125083187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:helm-install-traefik-qphtv,Uid:3b66f5a0-bf7f-4007-bb24-e62938946328,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:17.131229224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-d798c9dd-27hq6,Uid:1d88747c-2351-49a7-8716-d0342a022f46,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:17.137084501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-6d684c7b5-slpr2,Uid:e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:17.146048261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:local-path-provisioner-58fb86bdfd-kpmdt,Uid:ffa4c5b8-4d34-4c47-8972-dceb2d43eab0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:17.151934742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:system-upgrade-controller-84b4b86fd7-zwh2c,Uid:50581e1a-24db-4fbb-8a20-b83aeaf2e91a,Namespace:k3os-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:29.367312827Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-d798c9dd-27hq6,Uid:1d88747c-2351-49a7-8716-d0342a022f46,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:10:29.367713698Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:metrics-server-6d684c7b5-slpr2,Uid:e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:10:30.367562419Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:helm-install-traefik-qphtv,Uid:3b66f5a0-bf7f-4007-bb24-e62938946328,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:10:30.367594734Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:local-path-provisioner-58fb86bdfd-kpmdt,Uid:ffa4c5b8-4d34-4c47-8972-dceb2d43eab0,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:10:32.367708918Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:system-upgrade-controller-84b4b86fd7-zwh2c,Uid:50581e1a-24db-4fbb-8a20-b83aeaf2e91a,Namespace:k3os-system,Attempt:0,}"
time="2020-01-17T08:10:34.390628361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:system-upgrade-controller-84b4b86fd7-zwh2c,Uid:50581e1a-24db-4fbb-8a20-b83aeaf2e91a,Namespace:k3os-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:34.399279695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-6d684c7b5-slpr2,Uid:e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:34.404942769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-d798c9dd-27hq6,Uid:1d88747c-2351-49a7-8716-d0342a022f46,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:34.411127250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:helm-install-traefik-qphtv,Uid:3b66f5a0-bf7f-4007-bb24-e62938946328,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:34.416872065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:local-path-provisioner-58fb86bdfd-kpmdt,Uid:ffa4c5b8-4d34-4c47-8972-dceb2d43eab0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:46.367556132Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-d798c9dd-27hq6,Uid:1d88747c-2351-49a7-8716-d0342a022f46,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:10:48.367603594Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:metrics-server-6d684c7b5-slpr2,Uid:e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:10:48.367603835Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:helm-install-traefik-qphtv,Uid:3b66f5a0-bf7f-4007-bb24-e62938946328,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:10:49.367696834Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:local-path-provisioner-58fb86bdfd-kpmdt,Uid:ffa4c5b8-4d34-4c47-8972-dceb2d43eab0,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:10:49.367696741Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:system-upgrade-controller-84b4b86fd7-zwh2c,Uid:50581e1a-24db-4fbb-8a20-b83aeaf2e91a,Namespace:k3os-system,Attempt:0,}"
time="2020-01-17T08:10:51.394860685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-d798c9dd-27hq6,Uid:1d88747c-2351-49a7-8716-d0342a022f46,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:51.401227333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:system-upgrade-controller-84b4b86fd7-zwh2c,Uid:50581e1a-24db-4fbb-8a20-b83aeaf2e91a,Namespace:k3os-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:51.410390018Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:helm-install-traefik-qphtv,Uid:3b66f5a0-bf7f-4007-bb24-e62938946328,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:51.416679629Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:local-path-provisioner-58fb86bdfd-kpmdt,Uid:ffa4c5b8-4d34-4c47-8972-dceb2d43eab0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:51.422287277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-6d684c7b5-slpr2,Uid:e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:11:04.367708844Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-d798c9dd-27hq6,Uid:1d88747c-2351-49a7-8716-d0342a022f46,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:11:04.369837844Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:metrics-server-6d684c7b5-slpr2,Uid:e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:11:05.367872639Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:system-upgrade-controller-84b4b86fd7-zwh2c,Uid:50581e1a-24db-4fbb-8a20-b83aeaf2e91a,Namespace:k3os-system,Attempt:0,}"
time="2020-01-17T08:11:05.371922769Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:helm-install-traefik-qphtv,Uid:3b66f5a0-bf7f-4007-bb24-e62938946328,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:11:06.390197750Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:local-path-provisioner-58fb86bdfd-kpmdt,Uid:ffa4c5b8-4d34-4c47-8972-dceb2d43eab0,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:11:09.396022378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:local-path-provisioner-58fb86bdfd-kpmdt,Uid:ffa4c5b8-4d34-4c47-8972-dceb2d43eab0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:11:09.401184507Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:system-upgrade-controller-84b4b86fd7-zwh2c,Uid:50581e1a-24db-4fbb-8a20-b83aeaf2e91a,Namespace:k3os-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:11:09.409012915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-d798c9dd-27hq6,Uid:1d88747c-2351-49a7-8716-d0342a022f46,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:11:09.414464433Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:helm-install-traefik-qphtv,Uid:3b66f5a0-bf7f-4007-bb24-e62938946328,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:11:09.418856729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-6d684c7b5-slpr2,Uid:e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:11:20.367466464Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-d798c9dd-27hq6,Uid:1d88747c-2351-49a7-8716-d0342a022f46,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:11:21.367593963Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:local-path-provisioner-58fb86bdfd-kpmdt,Uid:ffa4c5b8-4d34-4c47-8972-dceb2d43eab0,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:11:21.367593852Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:metrics-server-6d684c7b5-slpr2,Uid:e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:11:23.367582406Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:helm-install-traefik-qphtv,Uid:3b66f5a0-bf7f-4007-bb24-e62938946328,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:11:23.369035851Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:system-upgrade-controller-84b4b86fd7-zwh2c,Uid:50581e1a-24db-4fbb-8a20-b83aeaf2e91a,Namespace:k3os-system,Attempt:0,}"
time="2020-01-17T08:11:25.390834442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:local-path-provisioner-58fb86bdfd-kpmdt,Uid:ffa4c5b8-4d34-4c47-8972-dceb2d43eab0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:11:25.397158405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:system-upgrade-controller-84b4b86fd7-zwh2c,Uid:50581e1a-24db-4fbb-8a20-b83aeaf2e91a,Namespace:k3os-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:11:25.405671905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-6d684c7b5-slpr2,Uid:e74f5bb7-a4c5-4ad5-93cc-7189a891c8bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:11:25.412666220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:helm-install-traefik-qphtv,Uid:3b66f5a0-bf7f-4007-bb24-e62938946328,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:11:25.418489035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-d798c9dd-27hq6,Uid:1d88747c-2351-49a7-8716-d0342a022f46,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"docker.io/rancher/pause:3.1\": failed to pull image \"docker.io/rancher/pause:3.1\": failed to pull and unpack image \"docker.io/rancher/pause:3.1\": failed to resolve reference \"docker.io/rancher/pause:3.1\": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:11:37.367725194Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-d798c9dd-27hq6,Uid:1d88747c-2351-49a7-8716-d0342a022f46,Namespace:kube-system,Attempt:0,}"
time="2020-01-17T08:11:37.367742805Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:system-upgrade-controller-84b4b86fd7-zwh2c,Uid:50581e1a-24db-4fbb-8a20-b83aeaf2e91a,Namespace:k3os-system,Attempt:0,}"
time="2020-01-17T08:11:37.369958861Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:local-path-provisioner-58fb86bdfd-kpmdt,Uid:ffa4c5b8-4d34-4c47-8972-dceb2d43eab0,Namespace:kube-system,Attempt:0,}"
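Every error in the excerpt above reduces to the same root cause: containerd cannot resolve `registry-1.docker.io`, which means the `pause:3.1` sandbox image was never preloaded and each sandbox creation falls back to a network pull that the air gap blocks. A quick way to triage a log like this is to count the distinct failing image pulls. The sketch below runs against a two-line stand-in excerpt so it is self-contained; on a node you would point `LOG` at the actual k3s service log (path varies by install, so it is an assumption here):

```shell
#!/bin/sh
# Sketch: summarize repeating containerd pull failures by counting distinct image refs.
# A two-line stand-in excerpt is used here; on a real node set LOG to the k3s service log.
LOG="$(mktemp)"
cat > "$LOG" <<'EOF'
time="2020-01-17T08:10:17Z" level=error msg="RunPodSandbox failed" error="failed to pull image \"docker.io/rancher/pause:3.1\": dial tcp: lookup registry-1.docker.io: Try again"
time="2020-01-17T08:10:34Z" level=error msg="RunPodSandbox failed" error="failed to pull image \"docker.io/rancher/pause:3.1\": dial tcp: lookup registry-1.docker.io: Try again"
EOF
# Image refs appear with escaped quotes (\"...\") inside the structured log lines,
# so the pattern matches a literal backslash-quote on each side of the ref.
summary="$(grep -oE 'failed to pull image \\"[^\\]+\\"' "$LOG" | sort | uniq -c | sort -rn)"
echo "$summary"
rm -f "$LOG"
```

A single repeated ref with a high count, as here, points at a missing preloaded image rather than a per-image registry problem.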

config.yaml

ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHVZtKeG7C1Cm ...
write_files:
- encoding: ""
  content: |-
    #!/bin/bash
    echo hi
  owner: root
  path: /etc/rc.local
  permissions: '0755'
- content: |-
    auto eth0
    iface eth0 inet static
      address 10.0.0.1
      netmask 255.255.255.0
      network 10.0.0.0
      broadcast 10.0.0.255
      gateway 10.0.0.1
      dns-nameservers 10.0.0.1 

    auto wlan0
    iface wlan0 inet static
      address 192.168.8.33
      netmask 255.255.255.0
      network 192.168.8.0
      broadcast 192.168.8.255
      gateway 192.168.86.1
      dns-nameservers 192.168.86.1 8.8.8.8 1.1.1.1
  owner: root
  path: /etc/network/interfaces
  permissions: '0644'
- content: |-
    connmanctl enable wifi
    SSID="MYSSID"
    connmanctl scan wifi
    svc=$(connmanctl services | grep $SSID | awk '{print $2}')
    connmanctl agent on
    connmanctl connect "$svc"
  owner: root
  path: /usr/share/bin/start-wifi.sh
  permissions: '0644'
hostname: mak3r-k3os
run_cmd:
- "ifdown eth0 && ifup eth0"
boot_cmd:
- "ln -sf /etc/init.d/swclock /etc/runlevels/boot/swclock"
init_cmd:
- "echo not starting:/usr/local/bin/start-wifi.sh"

k3os:
  dns_nameservers:
  - 10.0.0.1
  wifi:
  - name: none
    password: abc123
  password: rancher
  k3s_args:
    - server
    - "--advertise-address=10.0.0.1"
  environment:
    INSTALL_K3S_SKIP_DOWNLOAD: true
dweomer commented 4 years ago

@mak3r no need to unpack the tarballs; just drop them as-is under /var/lib/rancher/k3s/agent/images. Extracting them is what produces these import errors:

time="2020-01-17T08:09:59.554307179Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/1cdb7e2bd5e25744fb1c898242b764f88bd9f017726d6f17a4060c22bdc4a23e.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.571500956Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/55c9b4792dc0b65671d9e4a935118dfed75c338ddaca3155a8daed9a23cce723.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.585480882Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/6cf7c80fe4444767f63116e6855bf8c90bddde8ef63d3a2dc9b86c74989a4eb5.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.608143716Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/8ae59e1bfb26013a6d1576053fc7458c00285b7f64ab5692ee663a0dc949124b.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.621686253Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/9be4f056f04b712392ba99b1ed941caf2bd6e8bd9da5a007f7270a1c38bf127d.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.637418160Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/caea073758499623969565911911ecc0297f1f38a9257bafd8bc08170aeeaec9.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.651605308Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/f9499facb1e8c3c907d82f50441b8da45115c4314a9c175c8b3797e489c1cf1e.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.667988827Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/manifest.json: archive/tar: invalid tar header"
time="2020-01-17T08:09:59.681686993Z" level=error msg="Unable to import /var/lib/rancher/k3s/agent/images/repositories: archive/tar: invalid tar header"

I don't believe that /var/lib/rancher/k3s/server/images is ever scanned by k3s.
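Based on the "invalid tar header" errors above, anything sitting in the preload directory that is not a tarball (for example, loose `.json` layer files left over from an extracted archive) will fail to import. A minimal pre-flight check along those lines, demonstrated against a throwaway directory with fixture files (on a node, you would set `IMAGES_DIR=/var/lib/rancher/k3s/agent/images` and drop the fixture lines):

```shell
#!/bin/sh
# Sketch: flag files in the k3s image-preload directory that are not tarballs.
# A temp dir with fixture files stands in for /var/lib/rancher/k3s/agent/images.
IMAGES_DIR="$(mktemp -d)"
: > "$IMAGES_DIR/k3s-airgap-images-arm64.tar"   # correct: the tarball, left unextracted
: > "$IMAGES_DIR/manifest.json"                 # wrong: loose file from an extracted tarball
bad=0
for f in "$IMAGES_DIR"/*; do
  case "$f" in
    *.tar|*.tar.gz) echo "ok:   $(basename "$f")" ;;
    *)              echo "warn: $(basename "$f") is not a tarball and will fail to import"
                    bad=$((bad + 1)) ;;
  esac
done
echo "non-tarball files: $bad"
rm -rf "$IMAGES_DIR"
```

If the count is nonzero, remove the loose files and place the original, unextracted tarball in the directory instead.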

mak3r commented 4 years ago

"I don't believe that /var/lib/rancher/k3s/server/images is ever scanned by k3s."

When I use the k3s airgap images on a single node, I find that the node only comes up when I drop the airgap tarball into both server/images and agent/images.

In any case, I was able to reproduce the issue: there are amd64 manifests in the arm64 airgap tarball shipped with k3s. It is essentially https://github.com/rancher/k3s/issues/1285, which the team is looking into.

I think we should close this, as it appears to be a k3s issue, not a k3os one.
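For anyone hitting the same wall, the architecture mismatch described above can be spotted by inspecting the `architecture` field of the image config blobs inside the airgap tarball. The sketch below greps synthetic config blobs so it is self-contained; against a real tarball you would first extract it (for example `tar xf k3s-airgap-images-arm64.tar -C "$work"`) and grep the extracted `.json` files the same way:

```shell
#!/bin/sh
# Sketch: list the architectures recorded in image config blobs.
# Synthetic blobs stand in here; with a real airgap tarball, extract it first
# (tar xf k3s-airgap-images-arm64.tar -C "$work") and grep the .json files.
work="$(mktemp -d)"
printf '{"architecture":"amd64","os":"linux"}' > "$work/cfg1.json"
printf '{"architecture":"arm64","os":"linux"}' > "$work/cfg2.json"
archs="$(grep -ho '"architecture":"[^"]*"' "$work"/*.json | sort | uniq -c)"
echo "$archs"
rm -rf "$work"
```

Any architecture other than the node's own (here, anything besides arm64 on an aarch64 host) indicates the mismatched manifests mentioned in rancher/k3s#1285.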