PierrickLozach closed this issue 6 years ago.
hello @PierrickI3
do you have connectivity to:
https://192.168.1.19:6443
I don't, but it's my local IP address:
[pierrick@kubernetes ~]$ curl https://192.168.1.19:6443
curl: (7) Failed connect to 192.168.1.19:6443; Connection refused
[pierrick@kubernetes ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fc:aa:14:9a:97:e4 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.19/24 brd 192.168.1.255 scope global noprefixroute dynamic eno1
valid_lft 86196sec preferred_lft 86196sec
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:4e:90:66:f7 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
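A "connection refused" on a local address usually means nothing is bound to that port at all. A minimal check for that (a sketch; ss ships with iproute2 on CentOS 7):
ss -tlnp | grep 6443 || echo "nothing listening on 6443"   # empty grep output = no listener on the secure port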
6443 is the secure port of the API server. is the API server running?
what is the output of kubectl get pods --all-namespaces
i haven't seen a failure right after "Watching apiserver"
...
do you happen to have a firewall blocking 6443? also, can you please share the manifest of your API server? hide any sensitive data, where needed.
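With kubeadm, the API server runs as a static pod, so its manifest can be read straight from disk even while the server is down (a sketch; the path is kubeadm's default, matching the [controlplane] lines later in this thread):
# print the static pod manifest kubeadm wrote for the API server
sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml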
Here is the output from kubectl:
[pierrick@kubernetes ~]$ kubectl get pods --all-namespaces
The connection to the server 192.168.1.19:6443 was refused - did you specify the right host or port?
I don't have any firewall blocking this port and I ssh'd directly into the machine.
Here is the manifest (no sensitive data to hide as it's not exposed):
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --advertise-address=192.168.1.19
    - --allow-privileged=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --disable-admission-plugins=PersistentVolumeLabel
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver-amd64:v1.11.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.1.19
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
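The livenessProbe above is just an HTTPS GET against /healthz on the host IP, so the kubelet's health check can be reproduced by hand (a sketch; -k skips verification because the serving cert is signed by the cluster's own CA):
curl -k https://192.168.1.19:6443/healthz   # a healthy apiserver answers "ok"; "connection refused" means no process is bound to the port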
here is the output from docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0ab99c997dc4 272b3a60cd68 "kube-scheduler --..." 37 minutes ago Up 37 minutes k8s_kube-scheduler_kube-scheduler-kubernetes_kube-system_537879acc30dd5eff5497cb2720a6d64_8
058ee146b06f 52096ee87d0e "kube-controller-m..." 37 minutes ago Up 37 minutes k8s_kube-controller-manager_kube-controller-manager-kubernetes_kube-system_0da157f6a48b9a49830c56c19af5c954_0
dca22f2e66c1 k8s.gcr.io/pause:3.1 "/pause" 37 minutes ago Up 37 minutes k8s_POD_kube-scheduler-kubernetes_kube-system_537879acc30dd5eff5497cb2720a6d64_8
301d6736b6ad k8s.gcr.io/pause:3.1 "/pause" 37 minutes ago Up 37 minutes k8s_POD_kube-controller-manager-kubernetes_kube-system_0da157f6a48b9a49830c56c19af5c954_0
10cf027f301b k8s.gcr.io/pause:3.1 "/pause" 37 minutes ago Up 37 minutes k8s_POD_kube-apiserver-kubernetes_kube-system_b784b670ba660d7fe4b0407690d68d81_4
c7c59c971636 k8s.gcr.io/pause:3.1 "/pause" 37 minutes ago Up 37 minutes k8s_POD_etcd-kubernetes_kube-system_6fd4d3c9fe373df920ce5e1e4572fd1d_8
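Note what is missing from that docker ps output: the scheduler and controller-manager containers are up, but for the apiserver and etcd pods only the pause sandboxes are running, which suggests those two containers are exiting on start. A sketch for confirming that and pulling their logs (the container id placeholder comes from the first command):
docker ps -a | grep -E 'apiserver|etcd' | grep -v pause   # look for Exited entries
docker logs <container-id>                                # inspect why the container died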
--disable-admission-plugins=PersistentVolumeLabel
this one should be deprecated and disabled by default in 1.11.1, did you adapt an old kubeadm config? (edit: oh, wait we didn't cherry pick this: https://github.com/kubernetes/kubernetes/pull/65827)
what are the contents of your kubeadm config (again, please hide sensitive data where needed)?
How can I retrieve the kubeadm config? Running kubeadm config view results in the same connection refused error.
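That is expected: kubeadm config view reads the cluster configuration back through the API server, so it fails whenever kubectl does. Printing the built-in defaults works offline (a sketch, using the subcommand name from the 1.11 CLI):
kubeadm config print-default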
To deploy, I followed the kubeadm instructions. Here is what I executed:
# update yum packages
yum update -y
# install git, wget & docker
yum install -y git wget nano go docker
# add the go-repo and update Go (used later to build crictl)
rpm --import https://mirror.go-repo.io/centos/RPM-GPG-KEY-GO-REPO
curl -s https://mirror.go-repo.io/centos/go-repo.repo | tee /etc/yum.repos.d/go-repo.repo
yum update -y golang
# start Docker
systemctl enable docker && systemctl start docker
# disable swap (not supported by kubeadm)
swapoff -a
# add kubernetes repo to yum
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0 # required to allow containers to access the host filesystem (https://www.centos.org/docs/5/html/5.2/Deployment_Guide/sec-sel-enable-disable-enforcement.html). To disable permanently: https://www.tecmint.com/disable-selinux-temporarily-permanently-in-centos-rhel-fedora/
# disable firewall (I know, not great but I am fed up with opening ports and I am behind another firewall and I can do whatever I want)
systemctl disable firewalld && systemctl stop firewalld
###########
# KUBEADM #
###########
# install kubelet, kubeadm and kubectl
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
# prevent issues with traffic being routed incorrectly due to iptables being bypassed
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
# install CRICTL (https://github.com/kubernetes-incubator/cri-tools), required by kubeadm
go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
# deploy kubernetes
kubeadm init --pod-network-cidr=10.244.0.0/16
# allow kubectl for non-sudoers (run this as a regular user)
cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc
# For the root user, run this:
export KUBECONFIG=/etc/kubernetes/admin.conf
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> $HOME/.bashrc
# deploy pod network (flannel)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/k8s-manifests/kube-flannel-rbac.yml
kubectl taint nodes --all node-role.kubernetes.io/master- # allow pods to be scheduled on master
###################
# REBOOTING ISSUE #
###################
# At the time of writing this, rebooting causes kubernetes to no longer work. This will fix it (http://stytex.de/blog/2018/01/16/how-to-recover-self-hosted-kubeadm-kubernetes-cluster-after-reboot/)
git clone https://github.com/xetys/k8s-self-hosted-recovery
cd k8s-self-hosted-recovery
chmod +x install.sh
./install.sh
cd ..
kubeadm init --pod-network-cidr=10.244.0.0/16
in this case you are not passing --config
so your config is pretty much:
kubeadm config print-default
+ the CIDR change.
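Spelled out, that effective configuration could be reproduced offline roughly like this (a sketch; in the v1alpha2 format that 1.11 prints, networking.podSubnet is the config-file equivalent of --pod-network-cidr):
kubeadm config print-default > kubeadm.yaml
# edit kubeadm.yaml: set networking.podSubnet to 10.244.0.0/16
kubeadm init --config kubeadm.yaml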
what happens if you use a different CNI? (i know that it worked before reboot but just testing...).
first try this without the CIDR:
# deploy kubernetes
kubeadm init
then:
# deploy pod network (weave)
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl taint nodes --all node-role.kubernetes.io/master- # allow pods to be scheduled on master
edit: play with removing the taint line:
kubectl taint nodes --all node-role.kubernetes.io/master- ...
should I run kubeadm reset before trying again? or simply run kubeadm init and then load the CIDR?
should I run kubeadm reset before trying again?
yes, unless you have a reason not to?
I ran kubeadm reset and then an error happened after [init] this might take a minute or longer if the control plane images have to be pulled:
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0727 16:36:29.663874 26847 kernel_validator.go:81] Validating kernel version
I0727 16:36:29.664814 26847 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubernetes kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.19]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kubernetes localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kubernetes localhost] and IPs [192.168.1.19 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- No internet connection is available so the kubelet cannot pull or find the following control plane images:
- k8s.gcr.io/kube-apiserver-amd64:v1.11.1
- k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
- k8s.gcr.io/kube-scheduler-amd64:v1.11.1
- k8s.gcr.io/etcd-amd64:3.2.18
- You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
are downloaded locally and cached.
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
Here is the kubeadm reset output:
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] removing kubernetes-managed containers
[reset] cleaning up running containers using crictl with socket /var/run/dockershim.sock
[reset] failed to list running pods using crictl: exit status 1. Trying to use docker instead
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
Unfortunately, an error has occurred: timed out waiting for the condition
i've been seeing this the past few days from users, for some reason....
can you pre-pull the images using:
kubeadm config images pull --kubernetes-version 1.11.1
then try init again.
if it fails again, what does journalctl say about the kubelet?
here is the output from kubeadm config images pull --kubernetes-version 1.11.1:
[root@kubernetes ~]# kubeadm config images pull --kubernetes-version 1.11.1
[config/images] Pulled k8s.gcr.io/kube-apiserver-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/kube-scheduler-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/kube-proxy-amd64:v1.11.1
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd-amd64:3.2.18
[config/images] Pulled k8s.gcr.io/coredns:1.1.3
Then ran kubeadm reset:
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] removing kubernetes-managed containers
[reset] cleaning up running containers using crictl with socket /var/run/dockershim.sock
[reset] failed to list running pods using crictl: exit status 1. Trying to use docker instead
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
Then kubeadm init:
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0727 17:02:09.000020 3408 kernel_validator.go:81] Validating kernel version
I0727 17:02:09.000209 3408 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubernetes kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.19]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kubernetes localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kubernetes localhost] and IPs [192.168.1.19 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- No internet connection is available so the kubelet cannot pull or find the following control plane images:
- k8s.gcr.io/kube-apiserver-amd64:v1.11.1
- k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
- k8s.gcr.io/kube-scheduler-amd64:v1.11.1
- k8s.gcr.io/etcd-amd64:3.2.18
- You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
are downloaded locally and cached.
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
journalctl last entries:
Jul 27 17:03:00 kubernetes kubelet[3636]: I0727 17:03:00.604819 3636 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)"
Jul 27 17:03:00 kubernetes kubelet[3636]: I0727 17:03:00.604946 3636 kuberuntime_manager.go:767] Back-off 20s restarting failed container=etcd pod=etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)
Jul 27 17:03:00 kubernetes kubelet[3636]: E0727 17:03:00.604982 3636 pod_workers.go:186] Error syncing pod 6fd4d3c9fe373df920ce5e1e4572fd1d ("etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 20s restarting failed container=etcd pod=etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)"
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.175841 3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.179832 3636 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.180286 3636 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.414597 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.415518 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: E0727 17:03:01.416549 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.428500 3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.732777 3636 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.1 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=192.168.1.19 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:etc-pki ReadOnly:true MountPath:/etc/pki SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:192.168.1.19,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 27 17:03:01 kubernetes kubelet[3636]: I0727 17:03:01.732893 3636 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.325269 3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.325333 3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:02 kubernetes kubelet[3636]: W0727 17:03:02.329421 3636 status_manager.go:482] Failed to get status for pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.415253 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.416228 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.417254 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.629551 3636 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.1 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=192.168.1.19 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:etc-pki ReadOnly:true MountPath:/etc/pki SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:192.168.1.19,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.629669 3636 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"
Jul 27 17:03:02 kubernetes kubelet[3636]: I0727 17:03:02.629850 3636 kuberuntime_manager.go:767] Back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)
Jul 27 17:03:02 kubernetes kubelet[3636]: E0727 17:03:02.629891 3636 pod_workers.go:186] Error syncing pod 32544bee4c007108f4b6c54da83cc67e ("kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)"
Jul 27 17:03:03 kubernetes kubelet[3636]: E0727 17:03:03.415923 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:03 kubernetes kubelet[3636]: E0727 17:03:03.416815 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:03 kubernetes kubelet[3636]: E0727 17:03:03.417995 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:04 kubernetes kubelet[3636]: E0727 17:03:04.416640 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:04 kubernetes kubelet[3636]: E0727 17:03:04.417583 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:04 kubernetes kubelet[3636]: E0727 17:03:04.418535 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.417293 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.418283 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.419337 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:05 kubernetes kubelet[3636]: W0727 17:03:05.511995 3636 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:03:05 kubernetes kubelet[3636]: E0727 17:03:05.512167 3636 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:03:06 kubernetes kubelet[3636]: I0727 17:03:06.396897 3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:06 kubernetes kubelet[3636]: W0727 17:03:06.401380 3636 status_manager.go:482] Failed to get status for pod "etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/etcd-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: E0727 17:03:06.417876 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: E0727 17:03:06.418801 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: E0727 17:03:06.419836 3636 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:03:06 kubernetes kubelet[3636]: I0727 17:03:06.577952 3636 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:03:06 kubernetes kubelet[3636]: W0727 17:03:06.582165 3636 status_manager.go:482] Failed to get status for pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
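Reading these entries together, the apiserver's "connection refused" looks downstream: etcd is the container in CrashLoopBackOff, and with --etcd-servers=https://127.0.0.1:2379 the apiserver cannot stay up without it. The etcd container's own logs are the place to look for the root cause (a sketch; the id comes from the first command):
docker ps -a --filter name=k8s_etcd --format '{{.ID}} {{.Status}}'   # find the crash-looping etcd container
docker logs --tail 50 <container-id>                                 # its exit reason is usually in the last lines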
I am not able to add any network pod yet.
I am not able to add any network pod yet.
yes, this is failing earlier.
pull images working implies that you have connectivity to the gcr.io bucket.
please, restart the kubelet manually and see what the logs show:
systemctl restart kubelet
systemctl status kubelet # <---- ?
journalctl -xeu kubelet # <---- ?
i'm running out of ideas.
systemctl restart kubelet
systemctl status kubelet:
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Fri 2018-07-27 17:22:15 CEST; 6s ago
Docs: http://kubernetes.io/docs/
Main PID: 10401 (kubelet)
CGroup: /system.slice/kubelet.service
└─10401 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network...
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.636513 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.482761 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&li...: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.483784 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkuberne...: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.484750 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.483505 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&li...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.484300 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkuberne...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.485469 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp...: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: W0727 17:22:20.623173 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.623379 10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.058116 10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Hint: Some lines were ellipsized, use -l to show in full.
journalctl -xeu kubelet | less:
-- Subject: Unit kubelet.service has begun shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun shutting down.
Jul 27 17:22:15 kubernetes systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Jul 27 17:22:15 kubernetes systemd[1]: Starting kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun starting up.
Jul 27 17:22:15 kubernetes kubelet[10401]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 27 17:22:15 kubernetes kubelet[10401]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.439279 10401 server.go:408] Version: v1.11.1
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.439636 10401 plugins.go:97] No cloud provider specified.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.443829 10401 certificate_store.go:131] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.478490 10401 server.go:648] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.478906 10401 container_manager_linux.go:243] container manager verified user specified cgroup-root exists: []
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.478931 10401 container_manager_linux.go:248] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479017 10401 container_manager_linux.go:267] Creating device plugin manager: true
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479052 10401 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479103 10401 state_mem.go:84] [cpumanager] updated default cpuset: ""
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479113 10401 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479206 10401 kubelet.go:274] Adding pod path: /etc/kubernetes/manifests
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.479256 10401 kubelet.go:299] Watching apiserver
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.479973 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.480000 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.480053 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.484688 10401 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.484709 10401 client.go:104] Start docker client with request timeout=2m0s
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.486438 10401 docker_service.go:545] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.486467 10401 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.486599 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.488739 10401 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.488845 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.488882 10401 docker_service.go:253] Docker cri networking managed by cni
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.499501 10401 docker_service.go:258] Docker Info: &{ID:V36L:ETJO:IECX:PJF4:G3GB:JHA6:LGCF:VQBJ:D2GY:PVFO:567O:545Y Containers:8 ContainersRunning:6 ContainersPaused:0 ContainersStopped:2 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:47 SystemTime:2018-07-27T17:22:15.493615517+02:00 LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-862.9.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc420f3e000 NCPU:12 MemTotal:33386934272 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kubernetes Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:map[docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]} runc:{Path:docker-runc Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc420f48000} LiveRestoreEnabled:false Isolation: InitBinary:/usr/libexec/docker/docker-init-current ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:5eda6f6fd0c2884c2c8e78a6e7119e8d0ecedb77 Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:fec3683b971d9c3ef73f284f176672c44b448662 Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=seccomp,profile=/etc/docker/seccomp.json name=selinux]}
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.499629 10401 docker_service.go:271] Setting cgroupDriver to systemd
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.510960 10401 kuberuntime_manager.go:186] Container runtime docker initialized, version: 1.13.1, apiVersion: 1.26.0
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.511524 10401 csi_plugin.go:111] kubernetes.io/csi: plugin initializing...
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.512249 10401 server.go:129] Starting to listen on 0.0.0.0:10250
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.512297 10401 kubelet.go:1261] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.512389 10401 server.go:986] Started kubelet
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.512717 10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513432 10401 server.go:302] Adding debug handlers to kubelet server.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513637 10401 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513742 10401 status_manager.go:152] Starting to sync pod status with apiserver
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513810 10401 kubelet.go:1758] Starting kubelet main sync loop.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.513901 10401 kubelet.go:1775] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.516479 10401 volume_manager.go:247] Starting Kubelet Volume Manager
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.516557 10401 desired_state_of_world_populator.go:130] Desired state populator starts to run
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.518992 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.519170 10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.557259 10401 container.go:393] Failed to create summary reader for "/system.slice/docker.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.557505 10401 container.go:393] Failed to create summary reader for "/system.slice/irqbalance.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.557765 10401 container.go:393] Failed to create summary reader for "/system.slice/sshd.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.561683 10401 container.go:393] Failed to create summary reader for "/system.slice/k8s-self-hosted-recover.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.565386 10401 container.go:393] Failed to create summary reader for "/system.slice/systemd-udevd.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.565666 10401 container.go:393] Failed to create summary reader for "/system.slice/tuned.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.567498 10401 container.go:393] Failed to create summary reader for "/system.slice/auditd.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.567815 10401 container.go:393] Failed to create summary reader for "/system.slice/system-getty.slice": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.568113 10401 container.go:393] Failed to create summary reader for "/system.slice/systemd-journald.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.568957 10401 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.569544 10401 container.go:393] Failed to create summary reader for "/system.slice/polkit.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.572074 10401 container.go:393] Failed to create summary reader for "/system.slice/rsyslog.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.572295 10401 container.go:393] Failed to create summary reader for "/system.slice/systemd-logind.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.575556 10401 container.go:393] Failed to create summary reader for "/system.slice/dbus.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.580473 10401 container.go:393] Failed to create summary reader for "/system.slice/crond.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.580825 10401 container.go:393] Failed to create summary reader for "/system.slice/lvm2-lvmetad.service": none of the resources are being tracked.
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.614059 10401 kubelet.go:1775] skipping pod synchronization - [container runtime is down]
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.614738 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.617620 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.618177 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.618610 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.620065 10401 cpu_manager.go:155] [cpumanager] starting with none policy
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.620083 10401 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.620097 10401 policy_none.go:42] [cpumanager] none policy: Start
Jul 27 17:22:15 kubernetes kubelet[10401]: Starting Device Plugin manager
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621716 10401 container_manager_linux.go:792] CPUAccounting not enabled for pid: 986
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621726 10401 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 986
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621797 10401 container_manager_linux.go:792] CPUAccounting not enabled for pid: 10401
Jul 27 17:22:15 kubernetes kubelet[10401]: W0727 17:22:15.621804 10401 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 10401
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.622220 10401 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.818778 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:15 kubernetes kubelet[10401]: I0727 17:22:15.822804 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:15 kubernetes kubelet[10401]: E0727 17:22:15.823257 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: I0727 17:22:16.223461 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:16 kubernetes kubelet[10401]: I0727 17:22:16.227305 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.227718 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.480650 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.481646 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:16 kubernetes kubelet[10401]: E0727 17:22:16.482678 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: I0727 17:22:17.027967 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:17 kubernetes kubelet[10401]: I0727 17:22:17.031757 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.032125 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.481348 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.482224 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:17 kubernetes kubelet[10401]: E0727 17:22:17.483219 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.482059 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.483034 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.484060 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:18 kubernetes kubelet[10401]: I0727 17:22:18.632340 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:18 kubernetes kubelet[10401]: I0727 17:22:18.636096 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:18 kubernetes kubelet[10401]: E0727 17:22:18.636513 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.482761 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.483784 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:19 kubernetes kubelet[10401]: E0727 17:22:19.484750 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.483505 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.484300 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.485469 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:20 kubernetes kubelet[10401]: W0727 17:22:20.623173 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.623379 10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.058116 10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.484293 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.485115 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.486258 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:21 kubernetes kubelet[10401]: I0727 17:22:21.836741 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:21 kubernetes kubelet[10401]: I0727 17:22:21.840909 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.841341 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:22 kubernetes kubelet[10401]: E0727 17:22:22.484956 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:22 kubernetes kubelet[10401]: E0727 17:22:22.485979 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:22 kubernetes kubelet[10401]: E0727 17:22:22.487497 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:23 kubernetes kubelet[10401]: E0727 17:22:23.485724 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:23 kubernetes kubelet[10401]: E0727 17:22:23.486543 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:23 kubernetes kubelet[10401]: E0727 17:22:23.488107 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:24 kubernetes kubelet[10401]: E0727 17:22:24.486391 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:24 kubernetes kubelet[10401]: E0727 17:22:24.487305 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:24 kubernetes kubelet[10401]: E0727 17:22:24.488708 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.487061 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.488108 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.489302 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.622507 10401 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 17:22:25 kubernetes kubelet[10401]: W0727 17:22:25.624576 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:25 kubernetes kubelet[10401]: E0727 17:22:25.624815 10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:26 kubernetes kubelet[10401]: E0727 17:22:26.487846 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:26 kubernetes kubelet[10401]: E0727 17:22:26.488682 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:26 kubernetes kubelet[10401]: E0727 17:22:26.489896 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:27 kubernetes kubelet[10401]: E0727 17:22:27.488510 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:27 kubernetes kubelet[10401]: E0727 17:22:27.489607 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:27 kubernetes kubelet[10401]: E0727 17:22:27.490628 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: I0727 17:22:28.241605 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:28 kubernetes kubelet[10401]: I0727 17:22:28.245892 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.246319 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.489234 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.490271 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:28 kubernetes kubelet[10401]: E0727 17:22:28.491127 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:29 kubernetes kubelet[10401]: E0727 17:22:29.489955 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:29 kubernetes kubelet[10401]: E0727 17:22:29.490777 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:29 kubernetes kubelet[10401]: E0727 17:22:29.491942 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.490671 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.491715 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.492608 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:30 kubernetes kubelet[10401]: W0727 17:22:30.626001 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:30 kubernetes kubelet[10401]: E0727 17:22:30.626177 10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.058799 10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.491375 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.492253 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:31 kubernetes kubelet[10401]: E0727 17:22:31.493403 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:32 kubernetes kubelet[10401]: E0727 17:22:32.492126 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:32 kubernetes kubelet[10401]: E0727 17:22:32.493049 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:32 kubernetes kubelet[10401]: E0727 17:22:32.494127 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:33 kubernetes kubelet[10401]: E0727 17:22:33.492838 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:33 kubernetes kubelet[10401]: E0727 17:22:33.493750 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:33 kubernetes kubelet[10401]: E0727 17:22:33.494858 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:34 kubernetes kubelet[10401]: E0727 17:22:34.493558 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:34 kubernetes kubelet[10401]: E0727 17:22:34.494514 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:34 kubernetes kubelet[10401]: E0727 17:22:34.495496 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.246559 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.251191 10401 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.251606 10401 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.493606 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.494144 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.495144 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.496164 10401 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.530217 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.530338 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.534682 10401 status_manager.go:482] Failed to get status for pod "etcd-kubernetes_kube-system(6fd4d3c9fe373df920ce5e1e4572fd1d)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/etcd-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.552571 10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/6fd4d3c9fe373df920ce5e1e4572fd1d-etcd-certs") pod "etcd-kubernetes" (UID: "6fd4d3c9fe373df920ce5e1e4572fd1d")
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.552633 10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/6fd4d3c9fe373df920ce5e1e4572fd1d-etcd-data") pod "etcd-kubernetes" (UID: "6fd4d3c9fe373df920ce5e1e4572fd1d")
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.563624 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.563750 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.567753 10401 status_manager.go:482] Failed to get status for pod "kube-apiserver-kubernetes_kube-system(32544bee4c007108f4b6c54da83cc67e)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.596873 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.596962 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.601046 10401 status_manager.go:482] Failed to get status for pod "kube-controller-manager-kubernetes_kube-system(fe4b0bda62e8e0df1386dc034ba16ee3)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.622704 10401 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.627154 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:35 kubernetes kubelet[10401]: E0727 17:22:35.627334 10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.630351 10401 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 17:22:35 kubernetes kubelet[10401]: W0727 17:22:35.634273 10401 status_manager.go:482] Failed to get status for pod "kube-scheduler-kubernetes_kube-system(537879acc30dd5eff5497cb2720a6d64)": Get https://192.168.1.19:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-kubernetes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.652980 10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/32544bee4c007108f4b6c54da83cc67e-k8s-certs") pod "kube-apiserver-kubernetes" (UID: "32544bee4c007108f4b6c54da83cc67e")
Jul 27 17:22:35 kubernetes kubelet[10401]: I0727 17:22:35.653036 10401 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/fe4b0bda62e8e0df1386dc034ba16ee3-ca-certs") pod "kube-controller-manager-kubernetes" (UID: "fe4b0bda62e8e0df1386dc034ba16ee3")
what happens if you stop everything related to kubernetes on this node and try to run a small server listening on 192.168.1.19:6443?
kubeadm reset
systemctl stop kubelet
netstat -tulpn
to make sure that nothing is listening there.
write this to a test.go file
package main

import (
    "net/http"
    "strings"
)

// sayHello echoes "Hello <path>" back to the caller.
func sayHello(w http.ResponseWriter, r *http.Request) {
    message := r.URL.Path
    message = strings.TrimPrefix(message, "/")
    message = "Hello " + message
    w.Write([]byte(message))
}

func main() {
    http.HandleFunc("/", sayHello)
    // Bind to the same address:port the API server should be using,
    // to prove the port is reachable when something does listen on it.
    if err := http.ListenAndServe("192.168.1.19:6443", nil); err != nil {
        panic(err)
    }
}
go run test.go
$ curl 192.168.1.19:6443
does it work?
i know this is silly, but i'm out of ideas.
Output of netstat -tulpn:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 989/sshd
tcp6 0 0 :::22 :::* LISTEN 989/sshd
udp 0 0 0.0.0.0:68 0.0.0.0:* 807/dhclient
output of curl after executing test.go:
[pierrick@kubernetes ~]$ curl 192.168.1.19:6443
Hello [pierrick@kubernetes ~]$
well it works! i'm sorry but i cannot help more today.
maybe someone else can have a look at the logs and figure out what's going on?
Anyone in particular you can think of? Could you tag them in this issue?
@kubernetes/sig-cluster-lifecycle-bugs
@PierrickI3 can you check if the API server is listening on port 6443? If it is, can you check your proxy settings (if any)?
netstat -tulpn |grep 6443
set |grep -i proxy
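If netstat isn't available on the box, ss from iproute2 should give equivalent output, e.g.:
ss -tlnp | grep 6443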
@bart0sh Thanks but I had to start over this weekend so I can no longer troubleshoot this. I will close this issue.
FYI, the output of netstat -tulpn is shown above (https://github.com/kubernetes/kubeadm/issues/1026#issuecomment-408457991). I have no proxy (direct connection to the internet).
Why is this closed? I am facing a similar issue on my VirtualBox VM with CentOS 7. So was the only solution to create a new box altogether?
@PierrickI3 I am having the exact same problem and most of the logs are similar. Did you reach a solution or workaround?
@kheirp unfortunately not. I gave up and moved back to minikube for dev purposes. But I'd love to get back to kubeadm if a solution is found.
@PierrickI3 OK, when I find something I will let you know.
same problem running Ubuntu 16 on VMware.
same problem running on RHEL 7.5
same problem running on a CentOS 7 cluster
same with a Kubic cluster
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker
Maybe you forgot to run this after installing, @PierrickI3:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
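For reference, the sequence in the kubeadm docs puts the kubeconfig under $HOME/.kube/config instead; either location works as long as KUBECONFIG (or the default path) points at it:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Note that this only tells kubectl where to find its credentials; it won't help if the API server isn't listening at all.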
We're having the exact same issues on RHEL 7. This just stopped working out of the blue, too. This is critical for us.
I have run:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
etc., but the master node is refusing connections, even from all the nodes in our cluster.
i was never able to reproduce this problem, the cluster just comes back up fine after 2-3 minutes post reboot. must be something related to individual setups.
It seems as if my API server is no longer running. I've tried restarting kubelet several times, all to no avail. The funny thing is that this had been working, and I hadn't touched anything on it until yesterday, when I went to add more docker images for our cluster.
I figured out what was going on. The /var mounted directory had become full. It's now working as expected.
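For anyone hitting the same thing, a quick way to check is something like:
df -h /var
sudo du -sh /var/lib/docker /var/log
(the du paths are just the usual suspects on a kubeadm node). A full disk can prevent docker from starting containers, so the static API server pod never comes up and everything on 6443 gets connection refused.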
I figured that since a rule for port 6443 had been added to iptables, since I was getting a continual connection refused even from localhost, and since docker ps yielded no running containers, the API service (and other services) was not running, meaning that something odd was going on, and sure enough... The weird thing, though, is that nothing in the kubectl logs indicated why the API service failed to start.
I figured out what was going on. The /var mounted directory had become full.
OMG THANKS!!!
The systemctl status kubelet command shows this error info:
Jul 27 17:22:20 kubernetes kubelet[10401]: W0727 17:22:20.623173 10401 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 27 17:22:20 kubernetes kubelet[10401]: E0727 17:22:20.623379 10401 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 27 17:22:21 kubernetes kubelet[10401]: E0727 17:22:21.058116 10401 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
So:
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
add these params:
--network-plugin=cni --cni-conf-dir=/etc/cni/ --cni-bin-dir=/opt/cni/bin
and then:
kubeadm reset
kubeadm init
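For anyone trying this, a rough sketch of the drop-in (the exact variable name varies between kubeadm versions, and /etc/cni/net.d is the more conventional conf dir, so treat this as an illustration):
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
then reload and restart:
systemctl daemon-reload
systemctl restart kubelet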
try this on each node @PierrickI3
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker
This fixed my issue, although I only did this on the master node and just did kubeadm reset on the other nodes.
Thanks @manoj-bandara but I gave up for now and will return to this later on this year. See https://github.com/kubernetes/kubeadm/issues/1026#issuecomment-420948092
same problem here - tried a lot of things. it seems this was the problem:
failed: open /run/systemd/resolve/resolv.conf: no such file or directory
i symlinked /etc/resolv.conf to that location and restarted kubelet (only on the master); then the master and all nodes started to come up again.
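A sketch of that workaround, assuming the kubelet was started with --resolv-conf=/run/systemd/resolve/resolv.conf (the usual default when systemd-resolved is expected):
sudo mkdir -p /run/systemd/resolve
sudo ln -s /etc/resolv.conf /run/systemd/resolve/resolv.conf
sudo systemctl restart kubelet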
In my case it was the swap issue, fixed it by turning off the swap
sudo swapoff -a
sudo systemctl restart kubelet.service
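Note that swapoff -a does not survive a reboot; to keep swap off permanently, also comment out the swap entry in /etc/fstab, roughly (GNU sed; adjust the pattern to your fstab):
sudo swapoff -a
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
The .bak copy keeps the original fstab in case the pattern matches more than intended.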
try this on each node @PierrickI3
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker
OMG, amazing command!! Thank you very much indeed!!
@JTRNEO , I have to remember this one as well :)
I have faced the same problem on several installations. I haven't figured it out yet. I tried the Flannel, Calico, and Weave drivers and the result is the same, so the problem is probably not related to the network plugin. I am not sure it's related to the problem, but although I run "sudo swapoff -a", after a restart the server recommends that I turn off swap again.
Same issue here. My kube-apiserver logs are filled with:
I1124 18:44:13.180672 1 log.go:172] http: TLS handshake error from 192.168.1.235:56160: EOF
I1124 18:44:13.186601 1 log.go:172] http: TLS handshake error from 192.168.1.235:56244: EOF
I1124 18:44:13.201880 1 log.go:172] http: TLS handshake error from 192.168.1.192:56340: EOF
I1124 18:44:13.208991 1 log.go:172] http: TLS handshake error from 192.168.1.235:56234: EOF
I1124 18:44:13.248214 1 log.go:172] http: TLS handshake error from 192.168.1.235:56166: EOF
I1124 18:44:13.292943 1 log.go:172] http: TLS handshake error from 192.168.1.235:56272: EOF
I1124 18:44:13.332362 1 log.go:172] http: TLS handshake error from 192.168.1.235:56150: EOF
I1124 18:44:13.352911 1 log.go:172] http: TLS handshake error from 192.168.1.249:41300: EOF
Flushing iptables didn't work for me, and my swap is already off.
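One way to check whether the API server completes a TLS handshake at all (assuming openssl is installed; substitute your own apiserver address for the placeholder):
openssl s_client -connect <apiserver-ip>:6443 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
If it prints the serving certificate, the server side is answering and the EOFs are more likely clients or health checks closing connections mid-handshake; if it fails, the problem is on the apiserver side.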
Did you install flannel on the worker nodes?
No, I'm using kube-router for networking. The pod for it is in crash backoff state, and filled with errors about not being able to talk to the api server.
The cluster was running fine for months, but then the electricity went out, which rebooted my machines and left me with a borked cluster.
Is this a request for help?
It is, but I have searched StackOverflow and googled many times without finding the issue. Also, this seems to affect other people.
What keywords did you search in kubeadm issues before filing this one?
The error messages I see in journalctl
Is this a BUG REPORT or FEATURE REQUEST?
Bug report
Versions
kubeadm version:
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
The server is also 1.11, but since it's not starting at the moment, kubectl version won't show it.
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Jul 27 14:46:17 kubernetes systemd[1]: Starting kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Unit kubelet.service has begun starting up.
Jul 27 14:46:17 kubernetes kubelet[1619]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 27 14:46:17 kubernetes kubelet[1619]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.608612 1619 server.go:408] Version: v1.11.1
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.609679 1619 plugins.go:97] No cloud provider specified.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.613651 1619 certificate_store.go:131] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.709720 1619 server.go:648] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710299 1619 container_manager_linux.go:243] container manager verified user specified cgroup-root exists: []
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710322 1619 container_manager_linux.go:248] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710457 1619 container_manager_linux.go:267] Creating device plugin manager: true
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710515 1619 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710600 1619 state_mem.go:84] [cpumanager] updated default cpuset: ""
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710617 1619 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710751 1619 kubelet.go:274] Adding pod path: /etc/kubernetes/manifests
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.710814 1619 kubelet.go:299] Watching apiserver
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.711655 1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.711661 1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.711752 1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.717242 1619 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.717277 1619 client.go:104] Start docker client with request timeout=2m0s
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.718726 1619 docker_service.go:545] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.718756 1619 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.721656 1619 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.721975 1619 docker_service.go:253] Docker cri networking managed by cni
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.733083 1619 docker_service.go:258] Docker Info: &{ID:V36L:ETJO:IECX:PJF4:G3GB:JHA6:LGCF:VQBJ:D2GY:PVFO:567O:545Y Containers:66 ContainersRunning:0 ContainersPaused:0 ContainersStopped:66 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:15 OomKillDisable:true NGoroutines:22 SystemTime:2018-07-27T14:46:17.727178862+02:00 LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-862.9.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc420ebd110 NCPU:12 MemTotal:33386934272 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kubernetes Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]} docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc421016140} LiveRestoreEnabled:false Isolation: InitBinary:/usr/libexec/docker/docker-init-current ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:5eda6f6fd0c2884c2c8e78a6e7119e8d0ecedb77 Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:fec3683b971d9c3ef73f284f176672c44b448662 Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=seccomp,profile=/etc/docker/seccomp.json name=selinux]}
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.733181 1619 docker_service.go:271] Setting cgroupDriver to systemd
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.825381 1619 kuberuntime_manager.go:186] Container runtime docker initialized, version: 1.13.1, apiVersion: 1.26.0
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.839306 1619 csi_plugin.go:111] kubernetes.io/csi: plugin initializing...
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.840955 1619 server.go:129] Starting to listen on 0.0.0.0:10250
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841036 1619 server.go:986] Started kubelet
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841423 1619 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841448 1619 status_manager.go:152] Starting to sync pod status with apiserver
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841462 1619 kubelet.go:1758] Starting kubelet main sync loop.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841479 1619 kubelet.go:1775] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841710 1619 volume_manager.go:247] Starting Kubelet Volume Manager
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.841754 1619 desired_state_of_world_populator.go:130] Desired state populator starts to run
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.842653 1619 server.go:302] Adding debug handlers to kubelet server.
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.868316 1619 kubelet.go:1261] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.872508 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-hostnamed.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.872925 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-journal-flush.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.873312 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-logind.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.873703 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-remount-fs.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.874064 1619 container.go:393] Failed to create summary reader for "/system.slice/rsyslog.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.874452 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-readahead-collect.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.874765 1619 container.go:393] Failed to create summary reader for "/system.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.875097 1619 container.go:393] Failed to create summary reader for "/system.slice/kmod-static-nodes.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.875392 1619 container.go:393] Failed to create summary reader for "/system.slice/irqbalance.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.875679 1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-dmesg.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876007 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-readahead-replay.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876289 1619 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876567 1619 container.go:393] Failed to create summary reader for "/system.slice/auditd.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.876913 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-udev-trigger.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.877200 1619 container.go:393] Failed to create summary reader for "/system.slice/kubelet.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.877503 1619 container.go:393] Failed to create summary reader for "/system.slice/network.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.877792 1619 container.go:393] Failed to create summary reader for "/system.slice/system-getty.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.878118 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-journald.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.878486 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-user-sessions.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.878912 1619 container.go:393] Failed to create summary reader for "/system.slice/polkit.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.879312 1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-domainname.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.879802 1619 container.go:393] Failed to create summary reader for "/system.slice/lvm2-monitor.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.880172 1619 container.go:393] Failed to create summary reader for "/system.slice/tuned.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.880491 1619 container.go:393] Failed to create summary reader for "/system.slice/dbus.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.880788 1619 container.go:393] Failed to create summary reader for "/system.slice/docker.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.881112 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-udevd.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.881402 1619 container.go:393] Failed to create summary reader for "/system.slice/kdump.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.881710 1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-import-state.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.882166 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-random-seed.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.882509 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-tmpfiles-setup-dev.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.882806 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-tmpfiles-setup.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.883115 1619 container.go:393] Failed to create summary reader for "/system.slice/rhel-readonly.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.883420 1619 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager-dispatcher.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.883704 1619 container.go:393] Failed to create summary reader for "/system.slice/NetworkManager-wait-online.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884005 1619 container.go:393] Failed to create summary reader for "/system.slice/crond.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884329 1619 container.go:393] Failed to create summary reader for "/system.slice/system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884617 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-sysctl.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.884907 1619 container.go:393] Failed to create summary reader for "/system.slice/k8s-self-hosted-recover.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.885213 1619 container.go:393] Failed to create summary reader for "/system.slice/lvm2-lvmetad.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.885466 1619 container.go:393] Failed to create summary reader for "/user.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.885730 1619 container.go:393] Failed to create summary reader for "/system.slice/sshd.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.886098 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-update-utmp.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.886384 1619 container.go:393] Failed to create summary reader for "/system.slice/systemd-vconsole-setup.service": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.913789 1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.917905 1619 cpu_manager.go:155] [cpumanager] starting with none policy
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.917923 1619 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.917935 1619 policy_none.go:42] [cpumanager] none policy: Start
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.926164 1619 event.go:212] Unable to write event: 'Post https://192.168.1.19:6443/api/v1/namespaces/default/events: dial tcp 192.168.1.19:6443: connect: connection refused' (may retry after sleeping)
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.932356 1619 container.go:393] Failed to create summary reader for "/libcontainer_1619_systemd_test_default.slice": none of the resources are being tracked.
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.941592 1619 kubelet.go:1775] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.941762 1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:17 kubernetes kubelet[1619]: I0727 14:46:17.944471 1619 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.944714 1619 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:17 kubernetes kubelet[1619]: Starting Device Plugin manager
Jul 27 14:46:17 kubernetes kubelet[1619]: E0727 14:46:17.986308 1619 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "kubernetes" not found
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986668 1619 container_manager_linux.go:792] CPUAccounting not enabled for pid: 998
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986680 1619 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 998
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986749 1619 container_manager_linux.go:792] CPUAccounting not enabled for pid: 1619
Jul 27 14:46:17 kubernetes kubelet[1619]: W0727 14:46:17.986755 1619 container_manager_linux.go:795] MemoryAccounting not enabled for pid: 1619
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.144855 1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.148528 1619 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.148933 1619 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.158503 1619 docker_sandbox.go:372] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "rook-ceph-mon0-4txgr_rook-ceph": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "5b910771d1fd895b3b8d2feabdeb564cc57b213ae712416bdffec4a414dc4747"
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.300596 1619 pod_container_deletor.go:75] Container "5b910771d1fd895b3b8d2feabdeb564cc57b213ae712416bdffec4a414dc4747" not found in pod's containers
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.323729 1619 docker_sandbox.go:372] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "rook-ceph-osd-id-0-54d59fc64b-c5tw4_rook-ceph": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "a73305551840113b16cedd206109a837f57c6c3b2c8b1864ed5afab8b40b186d"
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.516802 1619 pod_container_deletor.go:75] Container "a73305551840113b16cedd206109a837f57c6c3b2c8b1864ed5afab8b40b186d" not found in pod's containers
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.549067 1619 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Jul 27 14:46:18 kubernetes kubelet[1619]: I0727 14:46:18.552841 1619 kubelet_node_status.go:79] Attempting to register node kubernetes
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.553299 1619 kubelet_node_status.go:103] Unable to register node "kubernetes" with API server: Post https://192.168.1.19:6443/api/v1/nodes: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: W0727 14:46:18.674143 1619 pod_container_deletor.go:75] Container "96b85439f089170cf6161f5410f8970de67f0609d469105dff4e3d5ec2d10351" not found in pod's containers
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.712440 1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.1.19:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.713284 1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.1.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:18 kubernetes kubelet[1619]: E0727 14:46:18.714397 1619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.19:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes&limit=500&resourceVersion=0: dial tcp 192.168.1.19:6443: connect: connection refused
Jul 27 14:46:19 kubernetes kubelet[1619]: W0727 14:46:19.139032 1619 pod_container_deletor.go:75] Container "7b9757b85bc8ee4ce6ac954acf0bcd5c06b2ceb815aee802a8f53f9de18d967f" not found in pod's containers
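Every request in the kubelet log above fails with "dial tcp 192.168.1.19:6443: connect: connection refused", which means nothing is listening on the API server's secure port at all. As a generic next step (these commands are not from this thread, and <container-id> is a placeholder), the following would confirm whether the kube-apiserver container ever came up:

sudo ss -tlnp | grep 6443                  # is anything listening on the secure port?
sudo docker ps -a | grep kube-apiserver    # is the apiserver container running, exited, or crash-looping?
sudo docker logs <container-id>            # if it exited, its logs usually show why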
Jul 27 14:46:09 kubernetes systemd[1]: Starting Recovers self-hosted k8s after reboot...
Jul 27 14:46:09 kubernetes k8s-self-hosted-recover[1001]: [k8s-self-hosted-recover] Restoring old plane...
Jul 27 14:46:12 kubernetes k8s-self-hosted-recover[1001]: [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
Jul 27 14:46:12 kubernetes k8s-self-hosted-recover[1001]: [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
Jul 27 14:46:12 kubernetes k8s-self-hosted-recover[1001]: [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
Jul 27 14:46:17 kubernetes k8s-self-hosted-recover[1001]: [k8s-self-hosted-recover] Waiting while the api server is back..
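The k8s-self-hosted-recover service regenerates the static Pod manifests under /etc/kubernetes/manifests and then waits for the API server to come back; the kubelet is expected to pick those manifests up and start the control-plane containers. Two quick checks along those lines (only a suggestion, assuming kubeadm's default manifest path):

ls -l /etc/kubernetes/manifests/                      # were the apiserver/controller-manager/scheduler manifests written?
sudo journalctl -u kubelet -f | grep -i apiserver     # watch the kubelet trying to start the apiserver static pod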