kubernetes-sigs / kind

Kubernetes IN Docker - local clusters for testing Kubernetes
https://kind.sigs.k8s.io/
Apache License 2.0

using kind with gitlab kubernetes executor fails #379

Closed swachter closed 5 years ago

swachter commented 5 years ago

I am trying to create a kind cluster for integration tests in a gitlab-ci job. Cluster creation fails:

Creating cluster "kind" ...
 • Ensuring node image (kindest/node:v1.13.3) 🖼  ...
 ✓ Ensuring node image (kindest/node:v1.13.3) 🖼
 • Preparing nodes 📦  ...
 ✓ Preparing nodes 📦
 • Creating kubeadm config 📜  ...
 ✓ Creating kubeadm config 📜
 • Starting control-plane 🕹️  ...
 ✗ Starting control-plane 🕹️
Error: failed to create cluster: failed to init node with kubeadm: exit status 1
cluster creation failed

In the dumped logs there are two entries that may give a clue about the cause.

In docker.log:

Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.790818335Z" level=warning msg="Could not register builder git source: failed to find git binary: exec: \"git\": executable file not found in $PATH"

In kubelet.log:

Mar 13 13:44:45 kind-control-plane kubelet[464]: F0313 13:44:45.457729     464 server.go:189] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
swachter commented 5 years ago

This is the complete content of the dumped logs:

##### ldir/docker-info.txt
Containers: 17
 Running: 17
 Paused: 0
 Stopped: 0
Images: 50
Server Version: 17.03.2-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: 
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 595e75c212d19a81d2b808a518fe1afc1391dad5 (expected: 4ab9917febca54791c5f071a9d1f404867857fcc)
runc version: 54296cf (expected: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe)
init version: v0.13.0 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.14.65+
Operating System: Container-Optimized OS from Google
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 12.72GiB
Name: gke-usu-manage-saas-cont-runner-pool2-647369da-0qlw
ID: 2JB2:5PPU:SECK:RFL7:HW7Y:5R62:LYEQ:LN7V:AQTH:4ROS:TPAJ:B55Z
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 10.0.0.0/8
 127.0.0.0/8
Registry Mirrors:
 https://mirror.gcr.io/
 https://mirror.gcr.io/
Live Restore Enabled: true

##### ldir/kind-control-plane/journal.log
-- Logs begin at Wed 2019-03-13 13:44:34 UTC, end at Wed 2019-03-13 13:44:45 UTC. --
Mar 13 13:44:34 kind-control-plane systemd-journald[60]: Journal started
Mar 13 13:44:34 kind-control-plane systemd-journald[60]: Runtime journal (/run/log/journal/d94d286725d7441984a405e3141dbb25) is 8.0M, max 651.1M, 643.1M free.
Mar 13 13:44:34 kind-control-plane systemd[1]: Starting Flush Journal to Persistent Storage...
Mar 13 13:44:34 kind-control-plane systemd[1]: Started Update UTMP about System Boot/Shutdown.
Mar 13 13:44:34 kind-control-plane systemd[1]: Started Create Static Device Nodes in /dev.
Mar 13 13:44:34 kind-control-plane systemd-sysctl[71]: Couldn't write 'fq_codel' to 'net/core/default_qdisc', ignoring: No such file or directory
Mar 13 13:44:34 kind-control-plane systemd[1]: Started Apply Kernel Variables.
Mar 13 13:44:34 kind-control-plane systemd[1]: Reached target System Initialization.
Mar 13 13:44:34 kind-control-plane systemd[1]: Starting Docker Socket for the API.
Mar 13 13:44:34 kind-control-plane systemd[1]: Started Daily Cleanup of Temporary Directories.
Mar 13 13:44:34 kind-control-plane systemd[1]: Reached target Timers.
Mar 13 13:44:34 kind-control-plane systemd[1]: Listening on Docker Socket for the API.
Mar 13 13:44:34 kind-control-plane systemd[1]: Reached target Sockets.
Mar 13 13:44:34 kind-control-plane systemd[1]: Reached target Basic System.
Mar 13 13:44:34 kind-control-plane systemd[1]: Starting Docker Application Container Engine...
Mar 13 13:44:34 kind-control-plane systemd[1]: Started kubelet: The Kubernetes Node Agent.
Mar 13 13:44:35 kind-control-plane systemd-journald[60]: Runtime journal (/run/log/journal/d94d286725d7441984a405e3141dbb25) is 8.0M, max 651.1M, 643.1M free.
Mar 13 13:44:35 kind-control-plane systemd[1]: Started Flush Journal to Persistent Storage.
Mar 13 13:44:35 kind-control-plane kubelet[75]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 13:44:35 kind-control-plane kubelet[75]: F0313 13:44:35.236760      75 server.go:189] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.243126034Z" level=info msg="libcontainerd: started new docker-containerd process" pid=101
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.243944072Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.244106090Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.244294463Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0  <nil>}]" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.244432997Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.244730710Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4201f3f30, CONNECTING" module=grpc
Mar 13 13:44:35 kind-control-plane systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 13 13:44:35 kind-control-plane systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="starting containerd" revision=468a545b9edcd5932818eb9de8e72413e616e86e version=v1.1.2
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.14.65+\n": exit status 1"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.14.65+\n": exit status 1"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd-debug.sock"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd.sock"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="containerd successfully booted in 0.025973s"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.359617843Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4201f3f30, READY" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.364235567Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.364266224Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.364339244Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0  <nil>}]" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.364381648Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.364436862Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420342870, CONNECTING" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.366022689Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420342870, READY" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.419723384Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.421165912Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.421557101Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.421834305Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0  <nil>}]" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.421984914Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.422178872Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4201b9370, CONNECTING" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.422593461Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4201b9370, READY" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.423097413Z" level=info msg="Loading containers: start."
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.600548997Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.706633047Z" level=info msg="Loading containers: done."
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.769053756Z" level=info msg="Docker daemon" commit=6d37f41 graphdriver(s)=overlay2 version=18.06.2-ce
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.769382962Z" level=info msg="Daemon has completed initialization"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.790818335Z" level=warning msg="Could not register builder git source: failed to find git binary: exec: \"git\": executable file not found in $PATH"
Mar 13 13:44:35 kind-control-plane systemd[1]: Started Docker Application Container Engine.
Mar 13 13:44:35 kind-control-plane systemd[1]: Reached target Multi-User System.
Mar 13 13:44:35 kind-control-plane systemd[1]: Reached target Graphical Interface.
Mar 13 13:44:35 kind-control-plane systemd[1]: Starting Update UTMP about System Runlevel Changes...
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.806381178Z" level=info msg="API listen on /var/run/docker.sock"
Mar 13 13:44:35 kind-control-plane systemd[1]: Started Update UTMP about System Runlevel Changes.
Mar 13 13:44:35 kind-control-plane systemd[1]: Startup finished in 1.078s.
Mar 13 13:44:45 kind-control-plane systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Mar 13 13:44:45 kind-control-plane systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 13 13:44:45 kind-control-plane systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Mar 13 13:44:45 kind-control-plane systemd[1]: Started kubelet: The Kubernetes Node Agent.
Mar 13 13:44:45 kind-control-plane kubelet[464]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 13:44:45 kind-control-plane kubelet[464]: F0313 13:44:45.457729     464 server.go:189] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Mar 13 13:44:45 kind-control-plane systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 13 13:44:45 kind-control-plane systemd[1]: kubelet.service: Failed with result 'exit-code'.
##### ldir/kind-control-plane/inspect.json
[
    {
        "Id": "675304f583535d27ef74fd98999f45053c25ee6aa19a316e7238479e85924b65",
        "Created": "2019-03-13T13:44:33.856848072Z",
        "Path": "/usr/local/bin/entrypoint",
        "Args": [
            "/sbin/init"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 1333948,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2019-03-13T13:44:34.167309517Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:f30a33de05b8019344de1313f03ce938d77196195e25a765c654e617b2f56335",
        "ResolvConfPath": "/var/lib/docker/containers/675304f583535d27ef74fd98999f45053c25ee6aa19a316e7238479e85924b65/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/675304f583535d27ef74fd98999f45053c25ee6aa19a316e7238479e85924b65/hostname",
        "HostsPath": "/var/lib/docker/containers/675304f583535d27ef74fd98999f45053c25ee6aa19a316e7238479e85924b65/hosts",
        "LogPath": "/var/lib/docker/containers/675304f583535d27ef74fd98999f45053c25ee6aa19a316e7238479e85924b65/675304f583535d27ef74fd98999f45053c25ee6aa19a316e7238479e85924b65-json.log",
        "Name": "/kind-control-plane",
        "RestartCount": 0,
        "Driver": "overlay2",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": [
            "1fa704ff027d066f9792eb8744c38c4ca91e26d566cc1668ec740b9cbcbb5e41",
            "a54dc6322ef88e46e3d8cd233eaacb77377a32309676565378ba7da5623bac56",
            "7b7dd1921d1a517006862b3a699ce92c9c892bdab68eb23acf6a676b6decc504"
        ],
        "HostConfig": {
            "Binds": [
                "/lib/modules:/lib/modules:ro"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {
                    "max-file": "5",
                    "max-size": "10m"
                }
            },
            "NetworkMode": "default",
            "PortBindings": {
                "6443/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "40985"
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": true,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "seccomp=unconfined"
            ],
            "Tmpfs": {
                "/run": "",
                "/tmp": ""
            },
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": -1,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Name": "overlay2",
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/f9ff5eb06054344dc4384c69bb6c484a8ca5deab800388c4eceae771e0ddbb8c-init/diff:/var/lib/docker/overlay2/2688e804d71914c4346a84cf57b11eac5b9abef5f025536a23c1679c9da57ed0/diff:/var/lib/docker/overlay2/e370ec1ed4ef7ccca5315c28e533c4fa2a43f9a12b8de2f2587a4d6774a9365a/diff:/var/lib/docker/overlay2/a6f4b0c8404231c22eefb9e4213cae2a0313fa64f84f0d6a53818d6147edd07e/diff:/var/lib/docker/overlay2/adb3509d76bbc58f18be9906aa86838668d1a01cb75a1b2c1bbf00f47108a329/diff:/var/lib/docker/overlay2/4d125af0926b2e1b41a83b957cdeb4d7514a85d1f1f60023ab96490a498c5513/diff:/var/lib/docker/overlay2/7416d3c54046088f181d8fd97d6221e7caa58dce25573fae94492208bb5caf7f/diff:/var/lib/docker/overlay2/17c270c3e6e3ee5ebb028ad473f0be2ad16d1a4e6672825538ee571ba6e08928/diff:/var/lib/docker/overlay2/6a207fe234056c0fdeffaf8b6e15078dd323d23a958b9ea5ee105408456ef8c0/diff:/var/lib/docker/overlay2/f5ab49096e021f218e0bf1f0ced280b1d8e0a3a10083367d066571189035eec0/diff:/var/lib/docker/overlay2/12f0a0c2eddeaef6ad2e588db69ffbd64c9a92e11ebeda69d9925704e62afec3/diff:/var/lib/docker/overlay2/d215158844079d5f429d0c5b487717a9f81a9f2596d6cceccd98f8fbe05632c5/diff",
                "MergedDir": "/var/lib/docker/overlay2/f9ff5eb06054344dc4384c69bb6c484a8ca5deab800388c4eceae771e0ddbb8c/merged",
                "UpperDir": "/var/lib/docker/overlay2/f9ff5eb06054344dc4384c69bb6c484a8ca5deab800388c4eceae771e0ddbb8c/diff",
                "WorkDir": "/var/lib/docker/overlay2/f9ff5eb06054344dc4384c69bb6c484a8ca5deab800388c4eceae771e0ddbb8c/work"
            }
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/lib/modules",
                "Destination": "/lib/modules",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "volume",
                "Name": "98f94e413d7ddd0b29cff2dcf2b90732f9c201e80936615e41838b967aafa852",
                "Source": "/var/lib/docker/volumes/98f94e413d7ddd0b29cff2dcf2b90732f9c201e80936615e41838b967aafa852/_data",
                "Destination": "/var/lib/docker",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "kind-control-plane",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "40985/tcp": {},
                "6443/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "container=docker"
            ],
            "Cmd": [
                "/sbin/init"
            ],
            "Image": "kindest/node:v1.13.3@sha256:f51da441e32b69363f0979753b3e045ff7d17f7f3dbff38ca9a39ca8d920bdf1",
            "Volumes": {
                "/var/lib/docker": {}
            },
            "WorkingDir": "",
            "Entrypoint": [
                "/usr/local/bin/entrypoint"
            ],
            "OnBuild": null,
            "Labels": {
                "io.k8s.sigs.kind.build": "2019-02-22T16:15:45.426554684-08:00",
                "io.k8s.sigs.kind.cluster": "kind",
                "io.k8s.sigs.kind.role": "control-plane"
            },
            "StopSignal": "SIGRTMIN+3"
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "5de07ebf7397c3d2ad336f72aec14742dacfc8159e3021bd9d4e3f14bad0080f",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "40985/tcp": null,
                "6443/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "40985"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/5de07ebf7397",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "4c1692170318e4cacb272b63baa6598174e0d0c024b1197f3796600a80b0a4b4",
            "Gateway": "169.254.123.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "169.254.123.2",
            "IPPrefixLen": 24,
            "IPv6Gateway": "",
            "MacAddress": "02:42:a9:fe:7b:02",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "ef8abf6b5526e248d8bbd64060b3d4870c156fed450132e16e9b248ac60a77c2",
                    "EndpointID": "4c1692170318e4cacb272b63baa6598174e0d0c024b1197f3796600a80b0a4b4",
                    "Gateway": "169.254.123.1",
                    "IPAddress": "169.254.123.2",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:a9:fe:7b:02"
                }
            }
        }
    }
]
##### ldir/kind-control-plane/kubernetes-version.txt
v1.13.3
##### ldir/kind-control-plane/kubelet.log
-- Logs begin at Wed 2019-03-13 13:44:34 UTC, end at Wed 2019-03-13 13:44:45 UTC. --
Mar 13 13:44:34 kind-control-plane systemd[1]: Started kubelet: The Kubernetes Node Agent.
Mar 13 13:44:35 kind-control-plane kubelet[75]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 13:44:35 kind-control-plane kubelet[75]: F0313 13:44:35.236760      75 server.go:189] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Mar 13 13:44:35 kind-control-plane systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 13 13:44:35 kind-control-plane systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 13:44:45 kind-control-plane systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Mar 13 13:44:45 kind-control-plane systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 13 13:44:45 kind-control-plane systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Mar 13 13:44:45 kind-control-plane systemd[1]: Started kubelet: The Kubernetes Node Agent.
Mar 13 13:44:45 kind-control-plane kubelet[464]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 13:44:45 kind-control-plane kubelet[464]: F0313 13:44:45.457729     464 server.go:189] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Mar 13 13:44:45 kind-control-plane systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 13 13:44:45 kind-control-plane systemd[1]: kubelet.service: Failed with result 'exit-code'.
##### ldir/kind-control-plane/docker.log
-- Logs begin at Wed 2019-03-13 13:44:34 UTC, end at Wed 2019-03-13 13:44:45 UTC. --
Mar 13 13:44:34 kind-control-plane systemd[1]: Starting Docker Application Container Engine...
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.243126034Z" level=info msg="libcontainerd: started new docker-containerd process" pid=101
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.243944072Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.244106090Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.244294463Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0  <nil>}]" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.244432997Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.244730710Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4201f3f30, CONNECTING" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="starting containerd" revision=468a545b9edcd5932818eb9de8e72413e616e86e version=v1.1.2
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.14.65+\n": exit status 1"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.14.65+\n": exit status 1"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd-debug.sock"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd.sock"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35Z" level=info msg="containerd successfully booted in 0.025973s"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.359617843Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4201f3f30, READY" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.364235567Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.364266224Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.364339244Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0  <nil>}]" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.364381648Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.364436862Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420342870, CONNECTING" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.366022689Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420342870, READY" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.419723384Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.421165912Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.421557101Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.421834305Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0  <nil>}]" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.421984914Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.422178872Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4201b9370, CONNECTING" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.422593461Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4201b9370, READY" module=grpc
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.423097413Z" level=info msg="Loading containers: start."
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.600548997Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.706633047Z" level=info msg="Loading containers: done."
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.769053756Z" level=info msg="Docker daemon" commit=6d37f41 graphdriver(s)=overlay2 version=18.06.2-ce
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.769382962Z" level=info msg="Daemon has completed initialization"
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.790818335Z" level=warning msg="Could not register builder git source: failed to find git binary: exec: \"git\": executable file not found in $PATH"
Mar 13 13:44:35 kind-control-plane systemd[1]: Started Docker Application Container Engine.
Mar 13 13:44:35 kind-control-plane dockerd[74]: time="2019-03-13T13:44:35.806381178Z" level=info msg="API listen on /var/run/docker.sock"
neolit123 commented 5 years ago

please pass --loglevel=debug to kind create ... and show the log from the terminal.
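
For reference, the full command would be:

    kind create cluster --loglevel=debug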

BenTheElder commented 5 years ago

The git / dockerd warning is not an issue.

Kubelet not seeing a config file initially is normal; with kubeadm, you let systemd crash-loop the kubelet until the config file is written.
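
The ten-second gap between the two failures in the journal (13:44:35 and 13:44:45) matches the restart policy of the kubelet unit; roughly this (a sketch, not the exact unit file shipped in the node image):

    # sketch of the relevant kubelet.service settings: systemd keeps
    # restarting the kubelet until kubeadm writes /var/lib/kubelet/config.yaml
    [Service]
    Restart=always
    RestartSec=10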

Which executor are you using? Can you run kind create cluster with --loglevel=debug?

swachter commented 5 years ago

This is the debug output:

time="16:08:24" level=debug msg="Running: /usr/local/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\\t{{.Label \"io.k8s.sigs.kind.cluster\"}}]"
Creating cluster "kind" ...
 • Ensuring node image (kindest/node:v1.13.3) 🖼  ...
time="16:08:24" level=debug msg="Running: /usr/local/bin/docker [docker inspect --type=image kindest/node:v1.13.3]"
time="16:08:24" level=info msg="Image: kindest/node:v1.13.3 present locally"
 ✓ Ensuring node image (kindest/node:v1.13.3) 🖼
 • Preparing nodes 📦  ...
time="16:08:24" level=debug msg="Running: /usr/local/bin/docker [docker info --format '{{json .SecurityOptions}}']"
time="16:08:25" level=debug msg="Running: /usr/local/bin/docker [docker run -d --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kind-control-plane --name kind-control-plane --label io.k8s.sigs.kind.cluster=kind --label io.k8s.sigs.kind.role=control-plane --entrypoint=/usr/local/bin/entrypoint --expose 38457 -p 38457:6443 kindest/node:v1.13.3@sha256:f51da441e32b69363f0979753b3e045ff7d17f7f3dbff38ca9a39ca8d920bdf1 /sbin/init]"
time="16:08:25" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane rm -f /etc/machine-id]"
time="16:08:25" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane systemd-machine-id-setup]"
time="16:08:25" level=debug msg="Running: /usr/local/bin/docker [docker info --format '{{json .SecurityOptions}}']"
time="16:08:25" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane mount -o remount,ro /sys]"
time="16:08:26" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane mount --make-shared /]"
time="16:08:26" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane mount --make-shared /run]"
time="16:08:26" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane mount --make-shared /var/lib/docker]"
time="16:08:26" level=debug msg="Running: /usr/local/bin/docker [docker kill -s SIGUSR1 kind-control-plane]"
time="16:08:26" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]"
time="16:08:26" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]"
time="16:08:26" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]"
time="16:08:26" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]"
time="16:08:26" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]"
time="16:08:27" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]"
time="16:08:27" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]"
time="16:08:27" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]"
time="16:08:27" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]"
time="16:08:27" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]"
time="16:08:27" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]"
time="16:08:27" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]"
time="16:08:27" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]"
time="16:08:27" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]"
time="16:08:28" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]"
time="16:08:28" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane /bin/bash -c find /kind/images -name *.tar -print0 | xargs -0 -n 1 -P $(nproc) docker load -i]"
time="16:08:44" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane cat /kind/version]"
 ✓ Preparing nodes 📦
time="16:08:44" level=debug msg="Running: /usr/local/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\\t{{.Label \"io.k8s.sigs.kind.cluster\"}} --filter label=io.k8s.sigs.kind.cluster=kind]"
time="16:08:44" level=debug msg="Running: /usr/local/bin/docker [docker inspect -f {{index .Config.Labels \"io.k8s.sigs.kind.role\"}} kind-control-plane]"
 • Creating kubeadm config 📜  ...
time="16:08:44" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane cat /kind/version]"
time="16:08:44" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane mkdir -p /kind]"
time="16:08:44" level=debug msg="Running: /usr/local/bin/docker [docker cp /tmp/792342548 kind-control-plane:/kind/kubeadm.conf]"
 ✓ Creating kubeadm config 📜
 • Starting control-plane 🕹️  ...
time="16:08:45" level=debug msg="Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6]"
Error: failed to create cluster: failed to init node with kubeadm: exit status 1
time="16:08:46" level=debug msg="I0313 16:08:46.476414     601 initconfiguration.go:169] loading configuration from the given file\nW0313 16:08:46.477271     601 common.go:86] WARNING: Detected resource kinds that may not apply: [InitConfiguration JoinConfiguration]\nW0313 16:08:46.478091     601 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeadm.k8s.io\", Version:\"v1beta1\", Kind:\"ClusterConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nW0313 16:08:46.480499     601 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeadm.k8s.io\", Version:\"v1beta1\", Kind:\"InitConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nW0313 16:08:46.481351     601 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeadm.k8s.io\", Version:\"v1beta1\", Kind:\"JoinConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\n[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta1, Kind=JoinConfiguration\nW0313 16:08:46.481931     601 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubelet.config.k8s.io\", Version:\"v1beta1\", Kind:\"KubeletConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nW0313 16:08:46.482923     601 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeproxy.config.k8s.io\", Version:\"v1alpha1\", Kind:\"KubeProxyConfiguration\"}: error unmarshaling JSON: while decoding JSON: json: unknown field \"metadata\"\nI0313 16:08:46.507932     601 interface.go:384] Looking for default routes with IPv4 addresses\nI0313 16:08:46.507990     601 interface.go:389] Default route transits interface \"eth0\"\nI0313 16:08:46.508266     601 interface.go:196] Interface eth0 is up\nI0313 16:08:46.508449     601 interface.go:244] Interface \"eth0\" has 2 addresses :[169.254.123.2/24 fe80::42:a9ff:fefe:7b02/64].\nI0313 16:08:46.508497     601 interface.go:211] Checking addr  169.254.123.2/24.\nI0313 16:08:46.508540     601 interface.go:221] Non-global unicast address found 169.254.123.2\nI0313 16:08:46.508565     601 interface.go:211] Checking addr  fe80::42:a9ff:fefe:7b02/64.\nI0313 16:08:46.508590     601 interface.go:224] fe80::42:a9ff:fefe:7b02 is not an IPv4 address\nI0313 16:08:46.508601     601 interface.go:384] Looking for default routes with IPv6 addresses\nI0313 16:08:46.508608     601 interface.go:400] No active IP found by looking at default routes\nunable to select an IP from default routes."
 ✗ Starting control-plane 🕹️
cluster creation failed

There seems to be a problem finding an IP: "No active IP found by looking at default routes. Unable to select an IP from default routes."
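
kubeadm's detection logic can be reproduced by hand inside the node container; a sketch, assuming iproute2 is available in the node image (the interface name eth0 and the addresses come from the log above):

    # kubeadm selects the node IP from the interface that owns the IPv4
    # default route; here the route lookup finds nothing usable
    docker exec kind-control-plane ip route show default
    # and eth0 only carries 169.254.123.2/24, a link-local address that
    # kubeadm rejects as a non-global unicast address
    docker exec kind-control-plane ip -4 addr show eth0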

swachter commented 5 years ago

Which executor are you using?

We use the Kubernetes Executor (https://docs.gitlab.com/runner/executors/kubernetes.html#exposing-varrundockersock) with docker.sock exposed from the node to the Kubernetes runner and then, in turn, to the Kubernetes Executor pod.
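
For context, that setup corresponds to a host_path volume in the runner configuration, roughly like the snippet from the linked docs (a sketch, not our exact config):

    [[runners.kubernetes.volumes.host_path]]
      name = "docker-sock"
      mount_path = "/var/run/docker.sock"
      host_path = "/var/run/docker.sock"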

BenTheElder commented 5 years ago

ahh, I remember some previous issues with this. We need docs for it.

see also https://github.com/kubernetes-sigs/kind/issues/303 though that assumes docker-in-docker as opposed to mounting the host socket.

swachter commented 5 years ago

Thanks for that information! As of now I see 3 possibilities for running kind in a gitlab-ci build stage:

  1. Use the node docker via the host socket (this is my current attempt).
  2. Add docker-in-docker as a service to the build stage pod, i.e. dind runs in a separate container in the build pod. Docker is addressed via tcp://localhost:2375/
  3. Use a build stage image that is derived from the dind image.

Which option is the preferable one? Is the third option what was used in #303?

BenTheElder commented 5 years ago

I'm not quite familiar enough with gitlab to know which is preferable, unfortunately; perhaps @munnerz is.

I think 2.) mostly works, but it either requires scheduling your workload that talks to the cluster as a container in that dind, or fixing up the network + kubeconfig to properly point at the cluster, since the network will be inside that dind. 1.) may have similar issues.
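
For example, with a dind service the API server port is only published inside the dind network namespace, so the kubeconfig has to be rewritten to target the service host; a sketch, assuming the dind service is reachable under the hostname docker (GitLab's default service alias):

    # point the kubeconfig at the dind service instead of the local host
    export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
    sed -i -e "s/localhost/docker/" "$KUBECONFIG"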

3.) sounds more like what we currently do on prow (Kubernetes CI).

swachter commented 5 years ago

I successfully followed option 3. Thank you for your help!

For reference:

I created a Docker image based on dind and added kubectl, helm, kind, and a saved kind node image that is preloaded with some images (tiller, postgres, openfaas). After I added the two mounts mentioned in #303, the setup worked. A tricky part was that the docker daemon had to be started manually in the job script, because gitlab-ci does not use the image's entrypoint. I.e. the script started with:

    - dockerd-entrypoint.sh --storage-driver=overlay2 > /dev/null 2>&1 &

(overlay2 is the storage driver that the underlying gitlab kubernetes runner is using.)
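
Put together, the start of the job script looked roughly like this (a sketch; the readiness loop is an assumption, not the verbatim script):

    # start the daemon in the background, since gitlab-ci skips the entrypoint
    - dockerd-entrypoint.sh --storage-driver=overlay2 > /dev/null 2>&1 &
    # wait until the manually started daemon answers before using it
    - until docker info > /dev/null 2>&1; do sleep 1; done
    - kind create cluster
    - export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"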

== Addendum ==

In the meantime we use option 2, i.e. our gitlab-ci job has a dind service that is used to create the kind cluster. Works nicely! Many thanks to the kind team!
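
A minimal job definition for that setup might look like the following (a sketch; the job name, client image, and service tag are assumptions, with DOCKER_HOST taken from option 2 above):

    integration-test:
      image: my-kind-client-image   # hypothetical image bundling docker, kind, kubectl
      services:
        - docker:dind
      variables:
        DOCKER_HOST: tcp://localhost:2375/
      script:
        - kind create cluster
        - export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
        - kubectl get nodes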

machine424 commented 5 years ago

I ran into a similar problem. I'm using dind as a service; the cluster is created, but the master doesn't become ready. I think it's a problem with the CNI. I tried kind 0.2.1 and 0.2.0, which gave these logs (only the tail):

DEBU[11:10:53] Running: /usr/local/bin/docker [docker inspect -f {{(index (index .NetworkSettings.Ports "6443/tcp") 0).HostPort}} kind-control-plane] 
DEBU[11:10:53] Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane cat /etc/kubernetes/admin.conf] 
DEBU[11:10:53] Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane test -f /kind/manifests/default-cni.yaml] 
DEBU[11:10:53] Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f /kind/manifests/default-cni.yaml] 
DEBU[11:10:54] Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes --all node-role.kubernetes.io/master-] 
DEBU[11:10:54] Running: /usr/local/bin/docker [docker exec --privileged -i kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -] 
 ✓ Starting control-plane 🕹️ 
Cluster creation complete. You can now use the cluster with:

export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
/ # kubectl get nodes
NAME                 STATUS     ROLES    AGE     VERSION
kind-control-plane   NotReady   master   3m40s   v1.13.4
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 28 Mar 2019 11:13:50 +0000   Thu, 28 Mar 2019 11:09:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 28 Mar 2019 11:13:50 +0000   Thu, 28 Mar 2019 11:09:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 28 Mar 2019 11:13:50 +0000   Thu, 28 Mar 2019 11:09:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 28 Mar 2019 11:13:50 +0000   Thu, 28 Mar 2019 11:09:43 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

At the end of docker-info.txt, I get

 WARNING: API is accessible on http://0.0.0.0:2375 without encryption.
         Access to the remote API is equivalent to root access on the host. Refer
         to the 'Docker daemon attack surface' section in the documentation for
         more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
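Those two bridge-nf warnings can matter for CNI, though whether they are the cause here is unclear. If the kernel allows it, they could be cleared on the host running the daemon with something like this (a sketch; needs the br_netfilter module and root on the host):

    modprobe br_netfilter
    sysctl -w net.bridge.bridge-nf-call-iptables=1
    sysctl -w net.bridge.bridge-nf-call-ip6tables=1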

When I tried with kind 0.0.1, the cluster was not created due to:

 ✗ [kind-1-control-plane] Starting Kubernetes (this may take a minute) ☸ 
Error: failed to create cluster: failed to apply overlay network: exit status 1
neolit123 commented 5 years ago

Error: failed to create cluster: failed to apply overlay network: exit status 1

please enable more logs with --loglevel=debug

machine424 commented 5 years ago

Using kind 0.0.1, I get

kind create cluster --image=kindest/node:v1.13.3@sha256:d1af504f20f3450ccb7aed63b67ec61c156f9ed3e8b0d973b3dee3c95991753c --loglevel=debug
Creating cluster 'kind-1' ...
DEBU[12:58:30] Running: /usr/local/bin/docker [docker inspect --type=image kindest/node:v1.13.3@sha256:d1af504f20f3450ccb7aed63b67ec61c156f9ed3e8b0d973b3dee3c95991753c] 
INFO[12:58:30] Image: kindest/node:v1.13.3@sha256:d1af504f20f3450ccb7aed63b67ec61c156f9ed3e8b0d973b3dee3c95991753c present locally 
 ✓ Ensuring node image (kindest/node:v1.13.3) 🖼
DEBU[12:58:30] Running: /usr/local/bin/docker [docker run -d --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kind-1-control-plane --name kind-1-control-plane --label io.k8s.sigs.kind.cluster=1 --expose 42739 -p 42739:42739 --entrypoint=/usr/local/bin/entrypoint kindest/node:v1.13.3@sha256:d1af504f20f3450ccb7aed63b67ec61c156f9ed3e8b0d973b3dee3c95991753c /sbin/init] 
 ✓ [kind-1-control-plane] Creating node container 📦 
DEBU[12:58:31] Running: /usr/local/bin/docker [docker exec --privileged kind-1-control-plane mount -o remount,ro /sys] 
DEBU[12:58:31] Running: /usr/local/bin/docker [docker exec --privileged kind-1-control-plane mount --make-shared /] 
DEBU[12:58:31] Running: /usr/local/bin/docker [docker exec --privileged kind-1-control-plane mount --make-shared /run] 
DEBU[12:58:31] Running: /usr/local/bin/docker [docker exec --privileged kind-1-control-plane mount --make-shared /var/lib/docker] 
 ✓ [kind-1-control-plane] Fixing mounts 🗻 
DEBU[12:58:31] Running: /usr/local/bin/docker [docker kill -s SIGUSR1 kind-1-control-plane] 
 ✓ [kind-1-control-plane] Starting systemd 🖥 
DEBU[12:58:31] Running: /usr/local/bin/docker [docker exec --privileged -t kind-1-control-plane systemctl is-active docker] 
DEBU[12:58:31] Running: /usr/local/bin/docker [docker exec --privileged -t kind-1-control-plane systemctl is-active docker] 
DEBU[12:58:32] Running: /usr/local/bin/docker [docker exec --privileged -t kind-1-control-plane systemctl is-active docker] 
DEBU[12:58:32] Running: /usr/local/bin/docker [docker exec --privileged -t kind-1-control-plane systemctl is-active docker] 
DEBU[12:58:32] Running: /usr/local/bin/docker [docker exec --privileged -t kind-1-control-plane systemctl is-active docker] 
DEBU[12:58:32] Running: /usr/local/bin/docker [docker exec --privileged kind-1-control-plane find /kind/images -name *.tar -exec docker load -i {} ;] 
DEBU[12:58:41] Running: /usr/local/bin/docker [docker exec --privileged -t kind-1-control-plane cat /kind/version] 
INFO[12:58:41] Using KubeadmConfig:

apiServerCertSANs:
- localhost
apiServerExtraVolumes:
- hostPath: /etc/nsswitch.conf
  mountPath: /etc/nsswitch.conf
  name: nsswitch
  pathType: FileOrCreate
  writeable: false
apiVersion: kubeadm.k8s.io/v1alpha3
clusterName: kind-1
kind: ClusterConfiguration
kubernetesVersion: v1.13.3
---
apiEndpoint:
  bindPort: 42739
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration

DEBU[12:58:41] Running: /usr/local/bin/docker [docker cp /tmp/422955009 kind-1-control-plane:/kind/kubeadm.conf] 
 ✓ [kind-1-control-plane] Waiting for docker to be ready 🐋 
DEBU[12:58:41] Running: /usr/local/bin/docker [docker exec --privileged kind-1-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf] 
DEBU[13:01:32] Running: /usr/local/bin/docker [docker exec --privileged -t kind-1-control-plane cat /etc/kubernetes/admin.conf] 
DEBU[13:01:32] Running: /usr/local/bin/docker [docker exec --privileged kind-1-control-plane /bin/sh -c kubectl apply --kubeconfig=/etc/kubernetes/admin.conf -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version --kubeconfig=/etc/kubernetes/admin.conf | base64 | tr -d '\n')"] 
DEBU[13:01:33] Running: /usr/local/bin/docker [docker exec --privileged kind-1-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes --all node-role.kubernetes.io/master-] 
DEBU[13:01:33] Running: /usr/local/bin/docker [docker exec --privileged -i kind-1-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -] 
 ✗ [kind-1-control-plane] Starting Kubernetes (this may take a minute) ☸ 
Error: failed to create cluster: failed to add default storage class: exit status 1
Usage:
  kind create cluster [flags]

Flags:
      --config string   path to a kind config file
  -h, --help            help for cluster
      --image string    node docker image to use for booting the cluster
      --name string     cluster context name (default "1")
      --retain          retain nodes for debugging when cluster creation fails

Global Flags:
      --loglevel string   logrus log level [panic, fatal, error, warning, info, debug] (default "warning")

failed to create cluster: failed to add default storage class: exit status 1
neolit123 commented 5 years ago

Using kind 0.0.1, I get

move to the latest release please.

machine424 commented 5 years ago

I tried all the versions; please read the first part of my first comment (https://github.com/kubernetes-sigs/kind/issues/379#issuecomment-477555051). As I said, with 0.2.0 and 0.2.1 the cluster is created but it gets stuck.

aojea commented 5 years ago

@machine424 I guess the comment meant: please paste the logs of the failure when running the latest version :smile: Several errors related to the storage class were fixed recently.

neolit123 commented 5 years ago

yes, please post logs using --loglevel=debug from the latest kind release.

machine424 commented 5 years ago

Here we go: (I repeat, I don't know if the volumes specified in https://github.com/kubernetes-sigs/kind/issues/303 are mounted to the runner/k8s executor)

/ # kind create cluster --loglevel=debug
DEBU[09:28:10] Running: /usr/local/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}}] 
Creating cluster "kind" ...
DEBU[09:28:10] Running: /usr/local/bin/docker [docker inspect --type=image kindest/node:v1.13.4] 
INFO[09:28:10] Pulling image: kindest/node:v1.13.4 ...      
DEBU[09:28:10] Running: /usr/local/bin/docker [docker pull kindest/node:v1.13.4] 
 ✓ Ensuring node image (kindest/node:v1.13.4) 🖼 
DEBU[09:28:36] Running: /usr/local/bin/docker [docker info --format '{{json .SecurityOptions}}'] 
DEBU[09:28:36] Running: /usr/local/bin/docker [docker run -d --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kind-control-plane --name kind-control-plane --label io.k8s.sigs.kind.cluster=kind --label io.k8s.sigs.kind.role=control-plane --entrypoint=/usr/local/bin/entrypoint --expose 43273 -p 127.0.0.1:43273:6443 kindest/node:v1.13.4 /sbin/init] 
DEBU[09:28:38] Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane rm -f /etc/machine-id] 
DEBU[09:28:38] Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane systemd-machine-id-setup] 
DEBU[09:28:38] Running: /usr/local/bin/docker [docker info --format '{{json .SecurityOptions}}'] 
DEBU[09:28:38] Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane mount -o remount,ro /sys] 
DEBU[09:28:38] Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane mount --make-shared /] 
DEBU[09:28:38] Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane mount --make-shared /run] 
DEBU[09:28:38] Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane mount --make-shared /var/lib/docker] 
DEBU[09:28:39] Running: /usr/local/bin/docker [docker kill -s SIGUSR1 kind-control-plane] 
DEBU[09:28:39] Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker] 
DEBU[09:28:39] Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker] 
DEBU[09:28:39] Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker] 
DEBU[09:28:39] Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker] 
DEBU[09:28:39] Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker] 
DEBU[09:28:39] Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane /bin/bash -c find /kind/images -name *.tar -print0 | xargs -0 -n 1 -P $(nproc) docker load -i] 
DEBU[09:28:45] Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane cat /kind/version] 
 ✓ Preparing nodes 📦 
DEBU[09:28:45] Running: /usr/local/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}} --filter label=io.k8s.sigs.kind.cluster=kind] 
DEBU[09:28:46] Running: /usr/local/bin/docker [docker inspect -f {{index .Config.Labels "io.k8s.sigs.kind.role"}} kind-control-plane] 
DEBU[09:28:46] Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane cat /kind/version] 
DEBU[09:28:46] Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane mkdir -p /kind] 
DEBU[09:28:46] Running: /usr/local/bin/docker [docker cp /tmp/916784549 kind-control-plane:/kind/kubeadm.conf] 
 ✓ Creating kubeadm config 📜 
DEBU[09:28:46] Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6] 
DEBU[09:30:32] I0329 09:28:46.600554     596 initconfiguration.go:169] loading configuration from the given file
W0329 09:28:46.601223     596 common.go:86] WARNING: Detected resource kinds that may not apply: [InitConfiguration JoinConfiguration]
W0329 09:28:46.603458     596 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeproxy.config.k8s.io", Version:"v1alpha1", Kind:"KubeProxyConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "metadata"
W0329 09:28:46.604655     596 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "metadata"
W0329 09:28:46.605806     596 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"InitConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "metadata"
W0329 09:28:46.606439     596 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"JoinConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "metadata"
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta1, Kind=JoinConfiguration
W0329 09:28:46.606972     596 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubelet.config.k8s.io", Version:"v1beta1", Kind:"KubeletConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "metadata"
I0329 09:28:46.607933     596 interface.go:384] Looking for default routes with IPv4 addresses
I0329 09:28:46.607965     596 interface.go:389] Default route transits interface "eth0"
I0329 09:28:46.608249     596 interface.go:196] Interface eth0 is up
I0329 09:28:46.608456     596 interface.go:244] Interface "eth0" has 1 addresses :[172.17.0.2/16].
I0329 09:28:46.608491     596 interface.go:211] Checking addr  172.17.0.2/16.
I0329 09:28:46.608508     596 interface.go:218] IP found 172.17.0.2
I0329 09:28:46.608606     596 interface.go:250] Found valid IPv4 address 172.17.0.2 for interface "eth0".
I0329 09:28:46.608637     596 interface.go:395] Found active IP 172.17.0.2 
I0329 09:28:46.608933     596 feature_gate.go:206] feature gates: &{map[]}
[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
I0329 09:28:46.609261     596 checks.go:572] validating Kubernetes and kubeadm version
I0329 09:28:46.609307     596 checks.go:171] validating if the firewall is enabled and active
I0329 09:28:46.622820     596 checks.go:208] validating availability of port 6443
I0329 09:28:46.623282     596 checks.go:208] validating availability of port 10251
I0329 09:28:46.623344     596 checks.go:208] validating availability of port 10252
I0329 09:28:46.623509     596 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0329 09:28:46.623547     596 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0329 09:28:46.623563     596 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0329 09:28:46.623580     596 checks.go:283] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0329 09:28:46.623700     596 checks.go:430] validating if the connectivity type is via proxy or direct
I0329 09:28:46.623746     596 checks.go:466] validating http connectivity to first IP address in the CIDR
I0329 09:28:46.623770     596 checks.go:466] validating http connectivity to first IP address in the CIDR
I0329 09:28:46.623786     596 checks.go:104] validating the container runtime
I0329 09:28:46.711219     596 checks.go:130] validating if the service is enabled and active
I0329 09:28:46.735143     596 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
    [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
I0329 09:28:46.735387     596 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0329 09:28:46.735665     596 checks.go:644] validating whether swap is enabled or not
I0329 09:28:46.735851     596 checks.go:373] validating the presence of executable ip
I0329 09:28:46.736366     596 checks.go:373] validating the presence of executable iptables
I0329 09:28:46.736520     596 checks.go:373] validating the presence of executable mount
I0329 09:28:46.736577     596 checks.go:373] validating the presence of executable nsenter
I0329 09:28:46.736874     596 checks.go:373] validating the presence of executable ebtables
I0329 09:28:46.737157     596 checks.go:373] validating the presence of executable ethtool
I0329 09:28:46.737514     596 checks.go:373] validating the presence of executable socat
I0329 09:28:46.737780     596 checks.go:373] validating the presence of executable tc
I0329 09:28:46.738112     596 checks.go:373] validating the presence of executable touch
I0329 09:28:46.738361     596 checks.go:515] running all checks
I0329 09:28:46.766970     596 checks.go:403] checking whether the given node name is reachable using net.LookupHost
I0329 09:28:46.768032     596 checks.go:613] validating kubelet version
I0329 09:28:46.836370     596 checks.go:130] validating if the service is enabled and active
I0329 09:28:46.854747     596 checks.go:208] validating availability of port 10250
I0329 09:28:46.854961     596 checks.go:208] validating availability of port 2379
I0329 09:28:46.855111     596 checks.go:208] validating availability of port 2380
I0329 09:28:46.855232     596 checks.go:245] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0329 09:28:46.928521     596 checks.go:833] image exists: k8s.gcr.io/kube-apiserver:v1.13.4
I0329 09:28:47.001831     596 checks.go:833] image exists: k8s.gcr.io/kube-controller-manager:v1.13.4
I0329 09:28:47.108953     596 checks.go:833] image exists: k8s.gcr.io/kube-scheduler:v1.13.4
I0329 09:28:47.214613     596 checks.go:833] image exists: k8s.gcr.io/kube-proxy:v1.13.4
I0329 09:28:47.290074     596 checks.go:833] image exists: k8s.gcr.io/pause:3.1
I0329 09:28:47.386489     596 checks.go:833] image exists: k8s.gcr.io/etcd:3.2.24
I0329 09:28:47.459805     596 checks.go:833] image exists: k8s.gcr.io/coredns:1.2.6
I0329 09:28:47.459848     596 kubelet.go:71] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0329 09:28:47.577784     596 kubelet.go:89] Starting the kubelet
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0329 09:28:47.662560     596 certs.go:113] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0329 09:28:49.028970     596 certs.go:113] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
I0329 09:28:49.746148     596 certs.go:113] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0329 09:28:51.706189     596 certs.go:72] creating a new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0329 09:28:52.227093     596 kubeconfig.go:92] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0329 09:28:52.326261     596 kubeconfig.go:92] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0329 09:28:52.641533     596 kubeconfig.go:92] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0329 09:28:52.777046     596 kubeconfig.go:92] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0329 09:28:52.868272     596 manifests.go:97] [control-plane] getting StaticPodSpecs
I0329 09:28:52.876274     596 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0329 09:28:52.876301     596 manifests.go:97] [control-plane] getting StaticPodSpecs
I0329 09:28:52.878202     596 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0329 09:28:52.878245     596 manifests.go:97] [control-plane] getting StaticPodSpecs
I0329 09:28:52.879199     596 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0329 09:28:52.880051     596 local.go:60] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0329 09:28:52.880160     596 waitcontrolplane.go:89] [wait-control-plane] Waiting for the API server to be healthy
I0329 09:28:52.881152     596 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0329 09:28:52.882444     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:28:53.383158     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:28:53.883465     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:28:54.383556     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:28:54.883620     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:28:55.383580     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:28:55.883400     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:28:56.383353     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:28:56.883475     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:28:57.383549     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:28:57.883637     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:28:58.383398     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:28:58.883448     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:28:59.383484     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:28:59.883137     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:07.488511     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 7105 milliseconds
I0329 09:29:07.886046     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 3 milliseconds
I0329 09:29:08.386609     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 3 milliseconds
I0329 09:29:08.885468     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds
I0329 09:29:09.383048     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:09.883287     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:10.383662     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:10.883601     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:11.383508     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:11.883545     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:12.383711     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:12.883492     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:13.383488     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:13.883659     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:14.383368     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:14.883474     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:15.383475     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:15.883497     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:16.383310     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:16.883451     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:17.383407     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:17.883547     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:18.383838     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:18.883304     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:19.383266     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:19.883256     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:27.767811     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 7384 milliseconds
I0329 09:29:27.885523     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds
I0329 09:29:28.386848     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 3 milliseconds
I0329 09:29:28.886349     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 3 milliseconds
I0329 09:29:29.385443     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds
I0329 09:29:29.883192     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:30.383472     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:30.883357     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:31.383722     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:31.883491     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:32.383446     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
[kubelet-check] Initial timeout of 40s passed.
I0329 09:29:32.883634     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0329 09:29:33.383522     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:33.883735     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:34.383584     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:34.883556     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:35.383526     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:35.883420     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:36.383544     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:36.883512     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:37.383531     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:37.884553     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0329 09:29:38.383406     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:38.883542     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:39.383268     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:39.883354     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:40.383302     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:48.971243     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 8088 milliseconds
I0329 09:29:49.387114     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 4 milliseconds
I0329 09:29:49.887419     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 4 milliseconds
I0329 09:29:50.383235     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:50.883515     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:51.384078     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:51.883595     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:52.383541     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:52.883543     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:53.383781     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:53.883457     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:54.383541     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:54.883476     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:55.383492     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:55.883444     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:56.383689     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:56.883348     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:57.383462     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:57.883269     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:58.383674     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:58.886343     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:59.383480     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:29:59.883344     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:30:00.383127     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:30:00.883195     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:30:01.383123     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I0329 09:30:09.962625     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 8079 milliseconds
I0329 09:30:10.387276     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 4 milliseconds
I0329 09:30:10.887516     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 500 Internal Server Error in 4 milliseconds
I0329 09:30:11.386316     596 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s 200 OK in 3 milliseconds
[apiclient] All control plane components are healthy after 78.504545 seconds
I0329 09:30:11.388352     596 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0329 09:30:11.389696     596 uploadconfig.go:114] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0329 09:30:11.394553     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 404 Not Found in 2 milliseconds
I0329 09:30:11.401421     596 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 5 milliseconds
I0329 09:30:11.408242     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 5 milliseconds
I0329 09:30:11.413292     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 3 milliseconds
I0329 09:30:11.417499     596 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0329 09:30:11.418759     596 uploadconfig.go:128] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
I0329 09:30:11.422990     596 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 2 milliseconds
I0329 09:30:11.425943     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 2 milliseconds
I0329 09:30:11.429829     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 3 milliseconds
I0329 09:30:11.430428     596 uploadconfig.go:133] [upload-config] Preserving the CRISocket information for the control-plane node
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kind-control-plane" as an annotation
I0329 09:30:11.935727     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 4 milliseconds
I0329 09:30:11.949649     596 round_trippers.go:438] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 5 milliseconds
I0329 09:30:11.951176     596 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
I0329 09:30:15.458411     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 3005 milliseconds
I0329 09:30:15.953184     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:16.452988     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:16.953095     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:17.452979     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:17.953090     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:18.453342     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:18.953255     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:19.452946     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:19.953292     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:20.453311     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:20.953221     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:21.452824     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:21.953096     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:22.453037     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:22.953240     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:23.452864     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:23.952674     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane  in 0 milliseconds
I0329 09:30:32.557580     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 8104 milliseconds
I0329 09:30:32.576444     596 round_trippers.go:438] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 18 milliseconds
I0329 09:30:32.577748     596 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0329 09:30:32.590671     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef 404 Not Found in 12 milliseconds
I0329 09:30:32.639058     596 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/secrets 201 Created in 47 milliseconds
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0329 09:30:32.656275     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 15 milliseconds
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0329 09:30:32.666481     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 9 milliseconds
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0329 09:30:32.670251     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 3 milliseconds
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0329 09:30:32.670353     596 clusterinfo.go:46] [bootstraptoken] loading admin kubeconfig
I0329 09:30:32.670941     596 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0329 09:30:32.670957     596 clusterinfo.go:54] [bootstraptoken] copying the cluster from admin.conf to the bootstrap kubeconfig
I0329 09:30:32.671273     596 clusterinfo.go:66] [bootstraptoken] creating/updating ConfigMap in kube-public namespace
I0329 09:30:32.680022     596 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps 201 Created in 8 milliseconds
I0329 09:30:32.680346     596 clusterinfo.go:80] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0329 09:30:32.688828     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles 201 Created in 8 milliseconds
I0329 09:30:32.694141     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings 201 Created in 4 milliseconds
I0329 09:30:32.695379     596 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0329 09:30:32.697449     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kube-dns 404 Not Found in 1 milliseconds
I0329 09:30:32.700044     596 round_trippers.go:438] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/coredns 404 Not Found in 2 milliseconds
I0329 09:30:32.703338     596 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 2 milliseconds
I0329 09:30:32.707671     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterroles 201 Created in 3 milliseconds
I0329 09:30:32.710219     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds
I0329 09:30:32.714403     596 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 3 milliseconds
I0329 09:30:32.732568     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/apps/v1/namespaces/kube-system/deployments 201 Created in 10 milliseconds
I0329 09:30:32.742433     596 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/services 201 Created in 8 milliseconds
[addons] Applied essential addon: CoreDNS
I0329 09:30:32.743500     596 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0329 09:30:32.747190     596 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 3 milliseconds
I0329 09:30:32.750729     596 round_trippers.go:438] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 2 milliseconds
I0329 09:30:32.767799     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/apps/v1/namespaces/kube-system/daemonsets 201 Created in 11 milliseconds
I0329 09:30:32.771564     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 2 milliseconds
I0329 09:30:32.774615     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 2 milliseconds
I0329 09:30:32.777188     596 round_trippers.go:438] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 2 milliseconds
[addons] Applied essential addon: kube-proxy
I0329 09:30:32.778393     596 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.17.0.2:6443 --token <value withheld> --discovery-token-ca-cert-hash sha256:31ec758061a00fdfd75d6f2a804a715d146f3355411945c576635da9cc7bd08b

DEBU[09:30:32] Running: /usr/local/bin/docker [docker inspect -f {{(index (index .NetworkSettings.Ports "6443/tcp") 0).HostPort}} kind-control-plane] 
DEBU[09:30:32] Running: /usr/local/bin/docker [docker exec --privileged -t kind-control-plane cat /etc/kubernetes/admin.conf] 
DEBU[09:30:32] Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane test -f /kind/manifests/default-cni.yaml] 
DEBU[09:30:33] Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane kubectl create --kubeconfig=/etc/kubernetes/admin.conf -f /kind/manifests/default-cni.yaml] 
DEBU[09:30:33] Running: /usr/local/bin/docker [docker exec --privileged kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes --all node-role.kubernetes.io/master-] 
DEBU[09:30:33] Running: /usr/local/bin/docker [docker exec --privileged -i kind-control-plane kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f -] 
 ✓ Starting control-plane 🕹️ 
Cluster creation complete. You can now use the cluster with:

export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info
swachter commented 5 years ago

This time the cluster creation succeeded. Maybe the reason is that you now use kindest/node:v1.13.4 instead of v1.13.3.

machine424 commented 5 years ago

Actually, as I said, the cluster is created with kind 0.2.0 and 0.2.1, but the master doesn't get ready (CNI problem); with kind 0.0.1 the cluster is not created at all. Please read my first comment: https://github.com/kubernetes-sigs/kind/issues/379#issuecomment-477555051

BenTheElder commented 5 years ago

@machine424 we'll need more output to look into that, perhaps from kind export logs. 0.2.1 should be significantly better behaved than 0.0.1; the CNI currently in use is weavenet.

sagikazarmark commented 5 years ago

I'm also having issues on Gitlab with dind (latest kind):

'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
test:
    stage: test
    image: docker:stable
    variables:
        DOCKER_HOST: tcp://docker:2375/
        DOCKER_DRIVER: overlay2
    services:
        - docker:dind
    before_script:
        - cat /etc/resolv.conf
        - cat /etc/hosts
        - cat /etc/nsswitch.conf
        - docker info
    script:
        - apk add bash curl openssl
        - wget https://github.com/kubernetes-sigs/kind/releases/download/0.2.1/kind-linux-amd64
        - chmod +x kind-linux-amd64
        - mv kind-linux-amd64 /usr/local/bin/kind
        - kind create cluster --name $CI_PIPELINE_ID --wait 180s --loglevel=debug

Any ideas?

swachter commented 5 years ago

If you are using the Kubernetes runner then you must address docker at localhost, i.e. tcp://localhost:2375.

Additionally, I think that you do not have to set the DOCKER_DRIVER environment variable because it does not influence your docker:dind service.
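i.e. for the Kubernetes runner the variables block would be (a sketch; DOCKER_DRIVER dropped per the note above, everything else in the job can stay the same):

    variables:
        DOCKER_HOST: tcp://localhost:2375/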

sagikazarmark commented 5 years ago

No, I use the Docker runner, but I wonder: maybe I don't even need kind with the Kubernetes runner, given I can access the Kubernetes cluster from the runner.

BenTheElder commented 5 years ago

That error could come from any number of things, unfortunately; we also haven't tested with remote docker or gitlab dind yet (we can't set that up on this repo due to how kubernetes manages github integrations).

I'm not sure who, if anyone, has the dind runner working. kind does work in dind, but see: https://github.com/kubernetes-sigs/kind/issues/303