ecomm-integration-ballerina / kubernetes-cluster

Kubernetes cluster using Vagrant, VirtualBox and Kubeadm
Apache License 2.0

Error concerning calico.yaml #10

Open · paulrusu8 opened this issue 5 years ago

paulrusu8 commented 5 years ago

When I try to deploy the environment, I get this error:

k8s-head: unable to recognize "https://raw.githubusercontent.com/ecomm-integration-ballerina/kubernetes-cluster/master/calico/calico.yaml": no matches for kind "Deployment" in version "apps/v1beta1"
k8s-head: unable to recognize "https://raw.githubusercontent.com/ecomm-integration-ballerina/kubernetes-cluster/master/calico/calico.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"

Does anyone else encounter this issue?
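
For context: Kubernetes 1.16 removed several long-deprecated API groups, so a manifest that declares a Deployment under apps/v1beta1 or a DaemonSet under extensions/v1beta1 no longer applies; both kinds now live under apps/v1. A minimal sketch of the kind of change calico.yaml would need (the name and labels are assumed from the standard Calico manifest; note that apps/v1 also makes spec.selector mandatory):

```yaml
# Before (rejected by Kubernetes 1.16+):
#   apiVersion: extensions/v1beta1
#   kind: DaemonSet
# After: DaemonSet and Deployment both live under apps/v1 now.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: calico-node          # name assumed from the standard Calico manifest
  namespace: kube-system
spec:
  selector:                  # required field under apps/v1
    matchLabels:
      k8s-app: calico-node   # must match the pod template labels below
  template:
    metadata:
      labels:
        k8s-app: calico-node
    # ...rest of the pod spec unchanged
```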

paulrusu8 commented 5 years ago

In addition to that, when SSHing into the master node and running

kubectl describe pod coredns-5644d7b6d9-2tmzt

I get the following warning:

Warning NetworkNotReady 113s (x152 over 6m54s) kubelet, k8s-head network is not ready: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
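
This warning just means no CNI plugin has configured pod networking yet, which follows from the calico.yaml failure above: if Calico never deploys, the kubelet's CNI config directory stays empty and CoreDNS stays pending. A couple of standard checks (generic ls/kubectl usage, not commands from the thread):

```sh
# The kubelet reads CNI config from here; it stays empty until a
# network plugin (Calico in this setup) actually deploys.
ls /etc/cni/net.d

# Check whether the Calico pods were created at all; with the
# apiVersion errors above, they never will be.
kubectl get pods -n kube-system -o wide
```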

FilipVPetrov commented 4 years ago

The same issue on my side. A month ago I managed to deploy the cluster locally; now I'm running into these errors.

FilipVPetrov commented 4 years ago

Also, I'm not able to list resources (nodes, pods, etc.) from the worker nodes.

vagrant@k8s-node-1:~$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

josepvindas commented 4 years ago

> Also, I'm not able to list resources (nodes, pods, etc.) from the worker nodes.
>
> vagrant@k8s-node-1:~$ kubectl get nodes
> The connection to the server localhost:8080 was refused - did you specify the right host or port?

This is because, by default, the worker nodes are not configured to run such operations. If you run kubectl config view on the master node, you'll see configuration for a cluster, a server name, an IP address, etc. If you run the same command on either worker node, however, you'll see that these fields are left empty.

In order to enable such operations from the nodes, you can run sudo cat /etc/kubernetes/admin.conf on the master node and paste the output into a file with the same name and path on each of your worker nodes.

Once this is done, run export KUBECONFIG=/etc/kubernetes/admin.conf and you should now be able to list resources from your worker nodes, as well as create Deployments, Pods, Services, etc.
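
A condensed sketch of those steps (the tee heredoc is just one way to do the copy; any editor works):

```sh
# On the master node: print the admin kubeconfig.
sudo cat /etc/kubernetes/admin.conf

# On each worker node: recreate the file at the same path
# (paste the output from the master between the heredoc markers).
sudo mkdir -p /etc/kubernetes
sudo tee /etc/kubernetes/admin.conf > /dev/null <<'EOF'
<paste the admin.conf contents here>
EOF

# Point kubectl at it and test.
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
```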

Deepak-Routray commented 4 years ago

I faced the same issue today. It looks to be an API-spec difference with the latest Kubernetes version. I fixed it by using an older Kubernetes version (1.15.0): I changed the statements below in the Vagrantfile to pin the k8s version, and that fixed my issue. Ideally, calico.yaml should be updated for the latest k8s version, though.

"apt-get install -y kubelet=1.15.0-00 kubectl=1.15.0-00 kubeadm=1.15.0-00"

"kubeadm init --kubernetes-version="1.15.0" --apiserver-advertise-address=$IP_ADDR --apiserver-cert-extra-sans=$IP_ADDR --node-name $HOST_NAME --pod-network-cidr=172.16.0.0/16"

Thanks, Deepak