cookeem / kubeadm-ha

Install a highly available kubernetes cluster with kubeadm, using the docker/containerd container runtime; applicable to v1.24.x and above
MIT License

Hello, a question about K8SHA_CALICO_REACHABLE_IP #38

Closed dotbalo closed 6 years ago

dotbalo commented 6 years ago

calico reachable ip address

    export K8SHA_CALICO_REACHABLE_IP=192.168.60.1

Hello, in this step, is the IP 192.168.60.1 supposed to be our server's gateway address? Or is it an address used internally by calico that has nothing to do with the intranet environment?

cookeem commented 6 years ago

This setting is generally only needed on hosts with multiple network interfaces, because calico has to determine which interface to bind to. The address should ideally be the server's intranet gateway, i.e. an intranet address that every kubernetes node can reach.
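
For context, this value typically drives calico-node's interface autodetection. A minimal sketch of the relevant calico-node environment settings (the env var names are calico's standard ones; the gateway IP is illustrative, and the exact wiring in this repo's templates may differ):

    # Excerpt from a calico-node DaemonSet container spec (calico.yaml)
    env:
      - name: IP
        value: "autodetect"
      # Pick the interface that can reach the intranet gateway,
      # i.e. the address set via K8SHA_CALICO_REACHABLE_IP
      - name: IP_AUTODETECTION_METHOD
        value: "can-reach=192.168.60.1"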

dotbalo commented 6 years ago

OK, thanks. I'll set it to the server's intranet gateway.

dotbalo commented 6 years ago

Hello, during initialization I get this error:

    Oct 29 16:00:38 k8s-master01 kubelet: W1029 16:00:38.964884 3396 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
    Oct 29 16:00:38 k8s-master01 kubelet: E1029 16:00:38.964988 3396 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
    Oct 29 16:00:43 k8s-master01 kubelet: W1029 16:00:43.965647 3396 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
    Oct 29 16:00:43 k8s-master01 kubelet: E1029 16:00:43.965779 3396 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

cookeem commented 6 years ago

Check whether the cni components were installed successfully. Try resetting with kubeadm reset and reinstalling; does the problem still occur?
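
A quick way to check is to look at the two locations kubelet reads CNI plugins and config from (these are the conventional CNI paths; the init command reuses the config file shown later in this thread):

    # Verify that CNI plugin binaries and network config are in place
    ls /opt/cni/bin       # plugin binaries (loopback, calico, ...)
    ls /etc/cni/net.d     # network config; if empty, kubelet reports "cni config uninitialized"

    # Reset previous kubeadm state, then retry the init
    kubeadm reset
    kubeadm init --config /root/kubeadm-config.yaml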

dotbalo commented 6 years ago

I have already installed cni. After the reset I ran init again, and it got stuck at "this might take a minute or longer if the control plane images have to be pulled", even though all of my images were downloaded in advance.

    [root@k8s-master01 bin]# docker images | grep gcr
    gcr.io/istio-release/galley                           1.0.0     492a18a44119   3 months ago    65.8MB
    gcr.io/istio-release/grafana                          1.0.0     95aa697d9bbe   3 months ago    301MB
    gcr.io/istio-release/citadel                          1.0.0     db952d07e89b   3 months ago    51.6MB
    gcr.io/istio-release/mixer                            1.0.0     1fe406b9e272   3 months ago    64.4MB
    gcr.io/istio-release/servicegraph                     1.0.0     5f5bba04bf25   3 months ago    11.2MB
    gcr.io/istio-release/sidecar_injector                 1.0.0     6af3c2187c8c   3 months ago    45.3MB
    gcr.io/istio-release/proxy_init                       1.0.0     c7c94fe3e39c   3 months ago    119MB
    gcr.io/istio-release/proxyv2                          1.0.0     f2e16d78e5ee   3 months ago    351MB
    gcr.io/istio-release/pilot                            1.0.0     47be4debdda6   3 months ago    289MB
    k8s.gcr.io/kube-proxy-amd64                           v1.11.1   d5c25579d0ff   3 months ago    97.8MB
    k8s.gcr.io/kube-scheduler-amd64                       v1.11.1   272b3a60cd68   3 months ago    56.8MB
    k8s.gcr.io/kube-controller-manager-amd64              v1.11.1   52096ee87d0e   3 months ago    155MB
    k8s.gcr.io/kube-apiserver-amd64                       v1.11.1   816332bd9d11   3 months ago    187MB
    k8s.gcr.io/coredns                                    1.1.3     b3b94275d97c   5 months ago    45.6MB
    gcr.io/google_containers/kubernetes-dashboard-amd64   v1.8.3    0c60bcf89900   8 months ago    102MB
    k8s.gcr.io/pause                                      3.1       da86e6ba6ca1   10 months ago   742kB
    gcr.io/google_containers/metrics-server-amd64         v0.2.1    9801395070f3   10 months ago   42.5MB
    k8s.gcr.io/etcd-amd64                                 3.2.18    a9308c067589   2 years ago     28.2MB
    k8s.gcr.io/heapster-amd64                             v1.5.4    d2d2bfdfb48f   3 years ago     34.5MB
    k8s.gcr.io/heapster-grafana-amd64                     v5.0.4    b2951ae24ffe   3 years ago     250MB
    k8s.gcr.io/heapster-influxdb-amd64                    v1.5.2    10eaf557026e   3 years ago     275MB

The logs then look like this:

    Oct 30 11:27:15 k8s-master01 kubelet: I1030 11:27:15.767346 25331 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
    Oct 30 11:27:25 k8s-master01 kubelet: I1030 11:27:25.788717 25331 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
    Oct 30 11:27:35 k8s-master01 kubelet: I1030 11:27:35.810285 25331 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
    Oct 30 11:27:45 k8s-master01 kubelet: I1030 11:27:45.831823 25331 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
    Oct 30 11:27:55 k8s-master01 kubelet: I1030 11:27:55.854005 25331 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
    Oct 30 11:28:05 k8s-master01 kubelet: I1030 11:28:05.875925 25331 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
    Oct 30 11:28:15 k8s-master01 kubelet: I1030 11:28:15.899547 25331 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach

In the end initialization still did not succeed. The error is as follows:

    [root@k8s-master01 ~]# kubeadm init --config /root/kubeadm-config.yaml
    [init] using Kubernetes version: v1.11.1
    [preflight] running pre-flight checks
    I1030 11:23:23.722106 25220 kernel_validator.go:81] Validating kernel version
    I1030 11:23:23.722199 25220 kernel_validator.go:96] Validating kernel config
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
    [preflight/images] Pulling images required for setting up a Kubernetes cluster
    [preflight/images] This might take a minute or two, depending on the speed of your internet connection
    [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [preflight] Activating the kubelet service
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-master01 k8s-master02 k8s-master03 k8s-master-lb] and IPs [10.96.0.1 192.168.20.20 192.168.20.20 192.168.20.21 192.168.20.22 192.168.20.10]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated sa key and public key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] Generated etcd/ca certificate and key.
    [certificates] Generated etcd/server certificate and key.
    [certificates] etcd/server serving cert is signed for DNS names [k8s-master01 localhost k8s-master01] and IPs [127.0.0.1 ::1 192.168.20.20]
    [certificates] Generated etcd/peer certificate and key.
    [certificates] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost k8s-master01] and IPs [192.168.20.20 127.0.0.1 ::1 192.168.20.20]
    [certificates] Generated etcd/healthcheck-client certificate and key.
    [certificates] Generated apiserver-etcd-client certificate and key.
    [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
    [init] this might take a minute or longer if the control plane images have to be pulled

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
        - No internet connection is available so the kubelet cannot pull or find the following control plane images:
            - k8s.gcr.io/kube-apiserver-amd64:v1.11.1
            - k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
            - k8s.gcr.io/kube-scheduler-amd64:v1.11.1
            - k8s.gcr.io/etcd-amd64:3.2.18
            - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
              are downloaded locally and cached.

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
    Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'

couldn't initialize a Kubernetes cluster

Could this be caused by not being able to reach networks outside China? Even going through a VPN I still cannot pull the gcr images.

cookeem commented 6 years ago

Open a terminal and watch the kubelet logs. In my own setups I always download the relevant images in advance on the intranet.

journalctl -f -t kubelet

Also, could you paste your kubeadm-config.yaml here so we can take a look?
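
For reference, a rough sketch of what a kubeadm-config.yaml for kubeadm v1.11 usually looks like (v1.11 uses the v1alpha2 config API; the addresses, SANs, and load-balancer port below are illustrative, echoing the values visible in the init log above, and should be checked against your actual environment):

    apiVersion: kubeadm.k8s.io/v1alpha2
    kind: MasterConfiguration
    kubernetesVersion: v1.11.1
    api:
      advertiseAddress: 192.168.20.20             # this master's IP
      controlPlaneEndpoint: "192.168.20.10:8443"  # load balancer (k8s-master-lb); port is an assumption
    apiServerCertSANs:
      - k8s-master01
      - k8s-master02
      - k8s-master03
      - k8s-master-lb
      - 192.168.20.20
      - 192.168.20.21
      - 192.168.20.22
      - 192.168.20.10
    networking:
      podSubnet: 192.168.0.0/16                   # calico's default pod CIDR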

dotbalo commented 6 years ago

The problem has been solved; it was an image issue.
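
For anyone hitting the same symptom: when gcr.io is unreachable, a common workaround is to pull each image from a mirror you can reach and retag it to the k8s.gcr.io name kubeadm expects. A sketch, with the mirror repository as an assumption (substitute any registry that actually mirrors these images; the image names and tags are the ones listed in the error output above):

    # Pull control plane images via a mirror, then retag for kubeadm
    MIRROR=registry.example.com/google_containers   # hypothetical mirror
    for img in kube-apiserver-amd64:v1.11.1 \
               kube-controller-manager-amd64:v1.11.1 \
               kube-scheduler-amd64:v1.11.1 \
               kube-proxy-amd64:v1.11.1 \
               etcd-amd64:3.2.18 pause:3.1 coredns:1.1.3; do
        docker pull "${MIRROR}/${img}"
        docker tag "${MIRROR}/${img}" "k8s.gcr.io/${img}"
    done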