kubernetes-retired / rktlet

[EOL] The rkt implementation of the Kubernetes Container Runtime Interface
Apache License 2.0

Trying to run kubeadm with rktlet #184

Closed julienpierini closed 5 years ago

julienpierini commented 6 years ago

Hi, I'm trying to run rkt with kubeadm, but kubeadm is still looking for Docker even after `kubeadm init --cri-socket /var/run/rktlet.sock`. The docs say:

> Install the runtime shim on every node, following the installation document in the runtime shim project listing above. Configure the kubelet to use the remote CRI runtime. Please remember to change `RUNTIME_ENDPOINT` to your own value like `/var/run/{your_runtime}.sock`:

```shell
cat > /etc/systemd/system/kubelet.service.d/20-cri.conf <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=$RUNTIME_ENDPOINT"
EOF
systemctl daemon-reload
```
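For rktlet specifically, the endpoint is the socket passed to `kubeadm init`. A minimal sketch of generating that drop-in (the target directory is parameterized so the snippet can run unprivileged; on a real node it is `/etc/systemd/system/kubelet.service.d` and the commands need root):

```shell
# Sketch: generate the kubelet CRI drop-in for rktlet. On a real node, set
# DROPIN_DIR=/etc/systemd/system/kubelet.service.d and run as root.
RUNTIME_ENDPOINT=/var/run/rktlet.sock
DROPIN_DIR=${DROPIN_DIR:-$(mktemp -d)}
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/20-cri.conf" <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=$RUNTIME_ENDPOINT"
EOF
# Afterwards, on the node: systemctl daemon-reload && systemctl restart kubelet
cat "$DROPIN_DIR/20-cri.conf"
```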

When I look at the logs, I find this:

```shell
journalctl -u kubelet -f
```

```
-- Logs begin at Mon 2018-05-14 18:32:06 UTC. --
May 15 09:02:04 kuber-master2 kubelet[3987]: I0515 09:02:04.586405 3987 kubelet.go:556] Hairpin mode set to "hairpin-veth"
May 15 09:02:04 kuber-master2 kubelet[3987]: W0515 09:02:04.586441 3987 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 15 09:02:04 kuber-master2 kubelet[3987]: I0515 09:02:04.586463 3987 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 15 09:02:04 kuber-master2 kubelet[3987]: I0515 09:02:04.586474 3987 client.go:104] Start docker client with request timeout=2m0s
May 15 09:02:04 kuber-master2 kubelet[3987]: E0515 09:02:04.586555 3987 kube_docker_client.go:91] failed to retrieve docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
May 15 09:02:04 kuber-master2 kubelet[3987]: W0515 09:02:04.586570 3987 kube_docker_client.go:92] Using empty version for docker client, this may sometimes cause compatibility issue.
May 15 09:02:04 kuber-master2 kubelet[3987]: F0515 09:02:04.586664 3987 server.go:233] failed to run Kubelet: failed to create kubelet: failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
May 15 09:02:04 kuber-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 15 09:02:04 kuber-master2 systemd[1]: kubelet.service: Unit entered failed state.
May 15 09:02:04 kuber-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 09:02:14 kuber-master2 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
May 15 09:02:14 kuber-master2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 15 09:02:14 kuber-master2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 15 09:02:14 kuber-master2 kubelet[4006]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 09:02:14 kuber-master2 kubelet[4006]: Flag --allow-privileged has been deprecated, will be removed in a future version
May 15 09:02:14 kuber-master2 kubelet[4006]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 09:02:14 kuber-master2 kubelet[4006]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 09:02:14 kuber-master2 kubelet[4006]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 09:02:14 kuber-master2 kubelet[4006]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 09:02:14 kuber-master2 kubelet[4006]: Flag --cadvisor-port has been deprecated, The default will change to 0 (disabled) in 1.12, and the cadvisor port will be removed entirely in 1.13
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.787912 4006 feature_gate.go:226] feature gates: &{{} map[]}
May 15 09:02:14 kuber-master2 kubelet[4006]: W0515 09:02:14.804711 4006 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 15 09:02:14 kuber-master2 kubelet[4006]: W0515 09:02:14.808349 4006 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.808376 4006 server.go:376] Version: v1.10.2
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.808401 4006 feature_gate.go:226] feature gates: &{{} map[]}
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.808476 4006 plugins.go:89] No cloud provider specified.
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.810890 4006 certificate_store.go:117] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.814449 4006 server.go:613] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.814643 4006 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.814656 4006 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.814744 4006 container_manager_linux.go:266] Creating device plugin manager: true
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.814768 4006 state_mem.go:36] [cpumanager] initializing new in-memory state store
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.814805 4006 state_mem.go:84] [cpumanager] updated default cpuset: ""
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.814815 4006 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.814887 4006 kubelet.go:272] Adding pod path: /etc/kubernetes/manifests
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.814915 4006 kubelet.go:297] Watching apiserver
May 15 09:02:14 kuber-master2 kubelet[4006]: E0515 09:02:14.824438 4006 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.30.10.5:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkuber-master2&limit=500&resourceVersion=0: dial tcp 10.30.10.5:6443: getsockopt: connection refused
May 15 09:02:14 kuber-master2 kubelet[4006]: E0515 09:02:14.824891 4006 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://10.30.10.5:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.30.10.5:6443: getsockopt: connection refused
May 15 09:02:14 kuber-master2 kubelet[4006]: E0515 09:02:14.825323 4006 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://10.30.10.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkuber-master2&limit=500&resourceVersion=0: dial tcp 10.30.10.5:6443: getsockopt: connection refused
May 15 09:02:14 kuber-master2 kubelet[4006]: W0515 09:02:14.845204 4006 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.845259 4006 kubelet.go:556] Hairpin mode set to "hairpin-veth"
May 15 09:02:14 kuber-master2 kubelet[4006]: W0515 09:02:14.845314 4006 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.845334 4006 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 15 09:02:14 kuber-master2 kubelet[4006]: I0515 09:02:14.845358 4006 client.go:104] Start docker client with request timeout=2m0s
May 15 09:02:14 kuber-master2 kubelet[4006]: E0515 09:02:14.845446 4006 kube_docker_client.go:91] failed to retrieve docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
May 15 09:02:14 kuber-master2 kubelet[4006]: W0515 09:02:14.845462 4006 kube_docker_client.go:92] Using empty version for docker client, this may sometimes cause compatibility issue.
May 15 09:02:14 kuber-master2 kubelet[4006]: F0515 09:02:14.845598 4006 server.go:233] failed to run Kubelet: failed to create kubelet: failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
May 15 09:02:14 kuber-master2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 15 09:02:14 kuber-master2 systemd[1]: kubelet.service: Unit entered failed state.
May 15 09:02:14 kuber-master2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
```
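The fatal line shows the kubelet still using the built-in Docker shim, which suggests the CRI drop-in never reached the kubelet's command line. A quick sanity check (the helper name and patterns are my own, not from the kubeadm docs):

```shell
# Hypothetical helper: given kubelet's command line (e.g. from
# `ps -o args= -C kubelet`), report whether the remote CRI flag is present.
check_remote_runtime() {
  case "$1" in
    *--container-runtime=remote*) echo "remote CRI configured" ;;
    *) echo "no remote CRI flags: kubelet will fall back to the Docker shim" ;;
  esac
}
# Example with a command line that lacks the flags, as this node's seems to:
check_remote_runtime "/usr/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests"
```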

Can anyone help me? :)

julienpierini commented 6 years ago

I have made some changes to the rktlet unit:

```ini
[Unit]
Description=CRI rktlet

[Service]
ExecStart=/home/cloud/rktlet/bin/rktlet --stream-server-address=127.0.0.1:10255

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

I changed the IP and the port.

Now, after `kubeadm init --cri-socket /var/run/rktlet.sock`, I get this message:

```
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' returned HTTP code 404
```
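One possible explanation (an assumption, not confirmed in this thread): 10255 is also the kubelet's default read-only port, so binding rktlet's stream server to 127.0.0.1:10255 can put it in the path of that health probe, and a 404 would then come from a server that is up but is not the kubelet. A sketch of the unit's `ExecStart` with a non-conflicting port (10241 is an arbitrary choice):

```ini
# Sketch: point rktlet's stream server at a port the kubelet does not use.
ExecStart=/home/cloud/rktlet/bin/rktlet --stream-server-address=127.0.0.1:10241
```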

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 5 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 5 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-incubator/rktlet/issues/184#issuecomment-504677013):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.