canonical / microk8s

MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.
https://microk8s.io
Apache License 2.0

Hitting Error: with microk8s enable dns storage ..... 127.0.0.1:16443: connect: connection refused #3178

Closed · psakamoori closed this issue 2 years ago

psakamoori commented 2 years ago

microk8s enable storage ingress metallb:10.64.140.43-10.64.140.49
Enabling default storage class
[sudo] password for psakamoori:
deployment.apps/hostpath-provisioner unchanged
storageclass.storage.k8s.io/microk8s-hostpath unchanged
serviceaccount/microk8s-hostpath unchanged
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath unchanged
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath unchanged
Storage will be available soon
Enabling Ingress
ingressclass.networking.k8s.io/public unchanged
namespace/ingress unchanged
serviceaccount/nginx-ingress-microk8s-serviceaccount unchanged
clusterrole.rbac.authorization.k8s.io/nginx-ingress-microk8s-clusterrole unchanged
role.rbac.authorization.k8s.io/nginx-ingress-microk8s-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-microk8s unchanged
rolebinding.rbac.authorization.k8s.io/nginx-ingress-microk8s created
configmap/nginx-load-balancer-microk8s-conf created
configmap/nginx-ingress-tcp-microk8s-conf created
configmap/nginx-ingress-udp-microk8s-conf created
daemonset.apps/nginx-ingress-microk8s-controller created
Ingress is enabled
Enabling MetalLB
Applying Metallb manifest
namespace/metallb-system created
secret/memberlist created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
error when creating "STDIN": Post "https://127.0.0.1:16443/apis/rbac.authorization.k8s.io/v1/namespaces/metallb-system/rolebindings?fieldManager=kubectl-client-side-apply": unexpected EOF
error when retrieving current configuration of:
Resource: "apps/v1, Resource=daemonsets", GroupVersionKind: "apps/v1, Kind=DaemonSet"
Name: "speaker", Namespace: "metallb-system"
from server for: "STDIN": Get "https://127.0.0.1:16443/apis/apps/v1/namespaces/metallb-system/daemonsets/speaker": dial tcp 127.0.0.1:16443: connect: connection refused
error when retrieving current configuration of:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "controller", Namespace: "metallb-system"
from server for: "STDIN": Get "https://127.0.0.1:16443/apis/apps/v1/namespaces/metallb-system/deployments/controller": dial tcp 127.0.0.1:16443: connect: connection refused
error when retrieving current configuration of:
Resource: "/v1, Resource=configmaps", GroupVersionKind: "/v1, Kind=ConfigMap"
Name: "config", Namespace: "metallb-system"
from server for: "STDIN": Get "https://127.0.0.1:16443/api/v1/namespaces/metallb-system/configmaps/config": dial tcp 127.0.0.1:16443: connect: connection refused

Appreciate any help... My kubernetes master node/api-server is hosted on 192.168.0.26

$ sudo microk8s enable dns storage ingress metallb:10.64.140.43-10.64.140.49
Enabling DNS
Applying manifest
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
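A "connection refused" on 127.0.0.1:16443 means nothing is accepting connections on the API server port at all. A minimal way to confirm that without any cluster tooling is bash's built-in `/dev/tcp` redirection (the `check_port` helper here is ours, not part of MicroK8s):

```shell
#!/usr/bin/env bash
# check_port HOST PORT -> exit 0 if something accepts TCP connections there
check_port() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && { exec 3>&-; return 0; }
    return 1
}

if check_port 127.0.0.1 16443; then
    echo "apiserver is listening on 16443"
else
    echo "nothing listening on 16443 - check the services with: microk8s inspect"
fi
```

If the port is closed, the problem is the apiserver process itself, not certificates or kubeconfig, so `microk8s inspect` and the service logs are the place to look next.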

=== /etc/hosts file ===
127.0.0.1 localhost
127.0.1.1 pvlab

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

==== Tried the following, but it did not help: ====

$ sudo microk8s.refresh-certs --cert server.crt
Taking a backup of the current certificates under /var/snap/microk8s/3202/certs-backup/
Creating new certificates
Signature ok
subject=C = GB, ST = Canonical, L = Canonical, O = Canonical, OU = Canonical, CN = 127.0.0.1
Getting CA Private Key
Restarting service kubelite.
Restarting service cluster-agent.

$ sudo microk8s.refresh-certs --cert ca.crt
Taking a backup of the current certificates under /var/snap/microk8s/3202/certs-backup/
Creating new certificates
Can't load /root/.rnd into RNG
140534406043072:error:2406F079:random number generator:RAND_load_file:Cannot open file:../crypto/rand/randfile.c:88:Filename=/root/.rnd
Signature ok
subject=C = GB, ST = Canonical, L = Canonical, O = Canonical, OU = Canonical, CN = 127.0.0.1
Getting CA Private Key
Signature ok
subject=CN = front-proxy-client
Getting CA Private Key
1
Creating new kubeconfig file
Run service command "stop" for services ["daemon-apiserver" "daemon-apiserver-kicker" "daemon-cluster-agent" "daemon-containerd" "daemon-control-plane-kicker" "daemon-controller-manager…
Stopped.
Started.

The CA certificates have been replaced. Kubernetes will restart the pods of your workloads. Any worker nodes you may have in your cluster need to be removed and re-joined to become aware of the new CA.

================== DNS pods are running fine

psakamoori@pvlab:~/.kube$ kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS        AGE
kube-system   coredns-6d4b75cb6d-dm98k        1/1     Running   0               4h18m
kube-system   coredns-6d4b75cb6d-nq459        1/1     Running   0               4h18m
kube-system   etcd-pvlab                      1/1     Running   4               5d2h
kube-system   kube-apiserver-pvlab            1/1     Running   6               5d2h
kube-system   kube-controller-manager-pvlab   1/1     Running   5 (6m59s ago)   5d2h
kube-system   kube-flannel-ds-jjxx6           1/1     Running   0               3d21h
kube-system   kube-flannel-ds-mwzgv           1/1     Running   0               3d21h
kube-system   kube-proxy-27s2q                1/1     Running   0               3d21h
kube-system   kube-proxy-4rbng                1/1     Running   0               5d2h
kube-system   kube-scheduler-pvlab            1/1     Running   22 (6m59s ago)  5d2h

I'd appreciate any input on how to get this resolved... I am a beginner in this space.

psakamoori commented 2 years ago

With microk8s inspect... I am seeing the error below:

May 30 14:14:21 pvlab microk8s.daemon-apiserver-kicker[2100942]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-1': No such file or directory
May 30 14:28:12 pvlab microk8s.daemon-apiserver-kicker[2134570]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-2': No such file or directory
May 30 14:29:08 pvlab microk8s.daemon-apiserver-kicker[2136889]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/.probe': No such file or directory
May 30 14:34:28 pvlab microk8s.daemon-apiserver-kicker[2149742]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-3': No such file or directory
May 30 14:37:02 pvlab microk8s.daemon-apiserver-kicker[2155976]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-1': No such file or directory
May 30 14:38:56 pvlab microk8s.daemon-apiserver-kicker[2160773]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-1': No such file or directory
May 30 14:42:37 pvlab microk8s.daemon-apiserver-kicker[2169822]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-3': No such file or directory
May 30 14:50:22 pvlab microk8s.daemon-apiserver-kicker[2188561]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-2': No such file or directory
May 30 14:57:05 pvlab microk8s.daemon-apiserver-kicker[2204918]: chgrp: changing group of '/var/snap/microk8s/3202/var/kubernetes/backend/open-1': No such file or directory
May 30 14:59:20 pvlab microk8s.daemon-apiserver-kicker[2210380]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-1': No such file or directory
May 30 15:01:20 pvlab microk8s.daemon-apiserver-kicker[2215150]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-2': No such file or directory
May 30 15:26:19 pvlab microk8s.daemon-apiserver-kicker[2275982]: chgrp: changing group of '/var/snap/microk8s/3202/var/kubernetes/backend/open-2': No such file or directory
May 30 15:30:59 pvlab microk8s.daemon-apiserver-kicker[2287331]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-2': No such file or directory
May 30 15:45:24 pvlab microk8s.daemon-apiserver-kicker[2323286]: chgrp: changing group of '/var/snap/microk8s/3202/var/kubernetes/backend/open-1': No such file or directory
May 30 15:46:10 pvlab microk8s.daemon-apiserver-kicker[2325229]: chgrp: changing group of '/var/snap/microk8s/3202/var/kubernetes/backend/open-1': No such file or directory
May 30 15:49:36 pvlab microk8s.daemon-apiserver-kicker[2334204]: chgrp: changing group of '/var/snap/microk8s/3202/var/kubernetes/backend/.probe': No such file or directory
May 30 15:51:05 pvlab microk8s.daemon-apiserver-kicker[2337792]: chgrp: changing group of '/var/snap/microk8s/3202/var/kubernetes/backend/open-1': No such file or directory
May 30 15:51:58 pvlab microk8s.daemon-apiserver-kicker[2339933]: chgrp: changing group of '/var/snap/microk8s/3202/var/kubernetes/backend/open-3': No such file or directory
May 30 16:02:26 pvlab microk8s.daemon-apiserver-kicker[2365488]: chgrp: changing group of '/var/snap/microk8s/3202/var/kubernetes/backend/open-2': No such file or directory
May 30 16:03:27 pvlab microk8s.daemon-apiserver-kicker[2368168]: chgrp: changing group of '/var/snap/microk8s/3202/var/kubernetes/backend/open-1': No such file or directory
May 30 16:04:32 pvlab systemd[1]: Stopping Service for snap application microk8s.daemon-apiserver-kicker...
May 30 16:04:32 pvlab systemd[1]: snap.microk8s.daemon-apiserver-kicker.service: Succeeded.
May 30 16:04:32 pvlab systemd[1]: Stopped Service for snap application microk8s.daemon-apiserver-kicker.
May 30 16:04:32 pvlab systemd[1]: Started Service for snap application microk8s.daemon-apiserver-kicker.
May 30 16:04:37 pvlab microk8s.daemon-apiserver-kicker[2371049]: The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
May 30 16:04:37 pvlab systemd[1]: snap.microk8s.daemon-apiserver-kicker.service: Main process exited, code=exited, status=1/FAILURE
May 30 16:04:37 pvlab systemd[1]: snap.microk8s.daemon-apiserver-kicker.service: Failed with result 'exit-code'.
May 30 16:04:37 pvlab systemd[1]: snap.microk8s.daemon-apiserver-kicker.service: Scheduled restart job, restart counter is at 1
.
....
.....
May 30 16:20:29 pvlab systemd[1]: snap.microk8s.daemon-apiserver-kicker.service: Succeeded.
May 30 16:20:29 pvlab systemd[1]: Stopped Service for snap application microk8s.daemon-apiserver-kicker.
May 30 16:21:03 pvlab systemd[1]: Started Service for snap application microk8s.daemon-apiserver-kicker.
May 30 16:21:28 pvlab microk8s.daemon-apiserver-kicker[2413711]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-1': No such file or directory
May 30 16:22:36 pvlab microk8s.daemon-apiserver-kicker[2417496]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-3': No such file or directory
May 30 16:28:29 pvlab microk8s.daemon-apiserver-kicker[2431855]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-1': No such file or directory
May 30 16:32:05 pvlab microk8s.daemon-apiserver-kicker[2440983]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-3': No such file or directory
May 30 16:39:59 pvlab microk8s.daemon-apiserver-kicker[2460148]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-1': No such file or directory
May 30 16:48:55 pvlab microk8s.daemon-apiserver-kicker[2482172]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-1': No such file or directory
May 30 16:59:11 pvlab microk8s.daemon-apiserver-kicker[2507326]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/open-2': No such file or directory
May 30 20:22:41 pvlab microk8s.daemon-apiserver-kicker[2517451]: Signature ok
May 30 20:22:41 pvlab microk8s.daemon-apiserver-kicker[2517451]: subject=C = GB, ST = Canonical, L = Canonical, O = Canonical, OU = Canonical, CN = 127.0.0.1
May 30 20:22:41 pvlab microk8s.daemon-apiserver-kicker[2517451]: Getting CA Private Key
May 30 20:22:41 pvlab microk8s.daemon-apiserver-kicker[2517456]: Signature ok
May 30 20:22:41 pvlab microk8s.daemon-apiserver-kicker[2517456]: subject=CN = front-proxy-client
May 30 20:22:41 pvlab microk8s.daemon-apiserver-kicker[2517456]: Getting CA Private Key
May 30 20:22:41 pvlab microk8s.daemon-apiserver-kicker[2412231]: CSR change detected. Reconfiguring the kube-apiserver
May 30 20:22:49 pvlab microk8s.daemon-apiserver-kicker[2517617]: Signature ok
May 30 20:22:49 pvlab microk8s.daemon-apiserver-kicker[2517617]: subject=C = GB, ST = Canonical, L = Canonical, O = Canonical, OU = Canonical, CN = 127.0.0.1
May 30 20:22:49 pvlab microk8s.daemon-apiserver-kicker[2517617]: Getting CA Private Key
May 30 20:22:49 pvlab microk8s.daemon-apiserver-kicker[2517622]: Signature ok
May 30 20:22:49 pvlab microk8s.daemon-apiserver-kicker[2517622]: subject=CN = front-proxy-client
May 30 20:22:49 pvlab microk8s.daemon-apiserver-kicker[2517622]: Getting CA Private Key
May 30 20:22:49 pvlab microk8s.daemon-apiserver-kicker[2412231]: CSR change detected. Reconfiguring the kube-apiserver
May 30 20:23:36 pvlab microk8s.daemon-apiserver-kicker[2519596]: chgrp: changing group of '/var/snap/microk8s/3202/var/kubernetes/backend/open-2': No such file or directory
May 30 20:29:42 pvlab microk8s.daemon-apiserver-kicker[2534528]: chgrp: changing group of '/var/snap/microk8s/3202/var/kubernetes/backend/open-1': No such file or directory
May 30 20:29:54 pvlab microk8s.daemon-apiserver-kicker[2535007]: chmod: cannot access '/var/snap/microk8s/3202/var/kubernetes/backend/kine.sock': No such file or directory
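To follow these services live instead of re-running `microk8s inspect`, the snap units can be tailed with journalctl (a sketch, assuming a systemd host; the unit names match the services shown in the log above):

```shell
# overall state of the node and its addons
microk8s status

# follow the apiserver-kicker and the main kubelite daemon
sudo journalctl -f -u snap.microk8s.daemon-apiserver-kicker
sudo journalctl -f -u snap.microk8s.daemon-kubelite
```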

neoaggelos commented 2 years ago

Hi @psakamoori, how did you install MicroK8s? I see the control plane components running as pods.

psakamoori@pvlab:~/.kube$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-dm98k 1/1 Running 0 4h18m
kube-system coredns-6d4b75cb6d-nq459 1/1 Running 0 4h18m
kube-system etcd-pvlab 1/1 Running 4 5d2h
kube-system kube-apiserver-pvlab 1/1 Running 6 5d2h
kube-system kube-controller-manager-pvlab 1/1 Running 5 (6m59s ago) 5d2h
kube-system kube-flannel-ds-jjxx6 1/1 Running 0 3d21h
kube-system kube-flannel-ds-mwzgv 1/1 Running 0 3d21h
kube-system kube-proxy-27s2q 1/1 Running 0 3d21h
kube-system kube-proxy-4rbng 1/1 Running 0 5d2h
kube-system kube-scheduler-pvlab 1/1 Running 22 (6m59s ago) 5d2h
psakamoori commented 2 years ago

Hi @neoaggelos, below are the steps I followed:

Step 1: Created a Kubernetes cluster using kubeadm as described here: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Step 2: As I want to try an MLOps pipeline using Kubeflow, I am following the instructions given here: https://charmed-kubeflow.io/docs/quickstart. At the step "microk8s enable dns storage ingress metallb:10.64.140.43-10.64.140.49" I am getting the error above.

Thank you.

neoaggelos commented 2 years ago

Well, in this case you are not running MicroK8s at all. You manually bootstrapped a cluster with kubeadm, hence these errors.

If you want to follow the Charmed Kubeflow docs with MicroK8s, you need to tear down the cluster you set up with kubeadm, then install MicroK8s as explained in https://microk8s.io/docs/getting-started
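A hedged sketch of that switch-over on a single-node host (commands drawn from the kubeadm and MicroK8s docs; double-check paths and channels on your system before running):

```shell
# Tear down the kubeadm cluster on this node
sudo kubeadm reset
sudo rm -rf /etc/cni/net.d   # kubeadm reset leaves CNI config behind
rm -f ~/.kube/config         # and the old kubeconfig

# Install MicroK8s and wait until it reports ready
sudo snap install microk8s --classic --channel=1.21/stable
sudo microk8s status --wait-ready
```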

Note that (currently) Kubeflow requires Kubernetes version 1.21 or older. For MicroK8s, this is done with:

sudo snap install microk8s --classic --channel 1.21
psakamoori commented 2 years ago

Got it. Thank you. Will try it out.

psakamoori commented 2 years ago

I ran "kubeadm reset" to remove the existing cluster and was able to perform the steps below:

  1. sudo snap install microk8s --classic --channel=1.21/stable
  2. sudo usermod -a -G microk8s $USER && newgrp microk8s
  3. sudo chown -f -R $USER ~/.kube
  4. microk8s enable dns storage ingress metallb:10.64.140.43-10.64.140.49
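The four steps above can be collected into one sketch (assumptions: a fresh Ubuntu host with snapd; the MetalLB range is the one from the Charmed Kubeflow quickstart):

```shell
#!/usr/bin/env bash
set -euo pipefail

sudo snap install microk8s --classic --channel=1.21/stable
sudo usermod -a -G microk8s "$USER"
sudo chown -f -R "$USER" ~/.kube

# Group membership only takes effect in a new shell session
# (the interactive equivalent is `newgrp microk8s`), so use sudo here.
sudo microk8s status --wait-ready
sudo microk8s enable dns storage ingress metallb:10.64.140.43-10.64.140.49
```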

logs:

reset:

psakamoori@pvlab:/var/snap/microk8s/3202/credentials$ sudo kubeadm reset
[sudo] password for psakamoori:
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0531 12:09:50.239333 3384522 preflight.go:55] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.

[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually. Please, check the contents of the $HOME/.kube/config file.
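The manual clean-ups that `kubeadm reset` asks for might look like this (a destructive sketch; adapt the iptables part to your firewall setup and skip the IPVS line if the cluster never used IPVS mode):

```shell
# CNI configuration left behind by the old cluster
sudo rm -rf /etc/cni/net.d

# iptables rules: flush the filter/nat/mangle tables and delete custom chains
sudo iptables -F
sudo iptables -t nat -F
sudo iptables -t mangle -F
sudo iptables -X

# IPVS tables, only if the cluster used IPVS mode
sudo ipvsadm --clear || true

# stale kubeconfig pointing at the torn-down kubeadm cluster
rm -f "$HOME/.kube/config"
```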

psakamoori@pvlab:/var/snap/microk8s/3202/credentials$ sudo kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.0
Kustomize Version: v4.5.4
The connection to the server localhost:8080 was refused - did you specify the right host or port?
psakamoori@pvlab:/var/snap/microk8s/3202/credentials$ sudo snap install microk8s --classic --channel=1.24
snap "microk8s" is already installed, see 'snap help refresh'
psakamoori@pvlab:/var/snap/microk8s/3202/credentials$ sudo usermod -a -G microk8s $USER
psakamoori@pvlab:/var/snap/microk8s/3202/credentials$ sudo chown -f -R $USER ~/.kube
psakamoori@pvlab:/var/snap/microk8s/3202/credentials$ sudo snap install microk8s --classic --channel=1.21/stable
snap "microk8s" is already installed, see 'snap help refresh'
psakamoori@pvlab:/var/snap/microk8s/3202/credentials$ sudo usermod -a -G microk8s $USER
psakamoori@pvlab:/var/snap/microk8s/3202/credentials$ newgrp microk8s
(base) psakamoori@pvlab:/var/snap/microk8s/3202/credentials$ sudo chown -f -R $USER ~/.kube
(base) psakamoori@pvlab:/var/snap/microk8s/3202/credentials$ microk8s enable dns storage ingress metallb:10.64.140.43-10.64.140.49
Addon dns is already enabled.
Addon storage is already enabled.
Addon ingress is already enabled.
MetalLB already enabled.
(base) psakamoori@pvlab:/var/snap/microk8s/3202/credentials$

Now I am trying to check the pods running in the MicroK8s namespaces with a kubectl command as shown below, but I'm hitting an error:

(base) psakamoori@pvlab:/var/snap/microk8s/3202/credentials$ kubectl get pods --all-namespaces
The connection to the server 192.168.0.26:6443 was refused - did you specify the right host or port?

I tried restarting the kubelet with "systemctl restart kubelet" - but no luck.
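A plain `kubectl` here still reads ~/.kube/config from the old kubeadm cluster (hence the 192.168.0.26:6443 endpoint), so restarting the kubelet cannot help. MicroK8s ships its own client and kubeconfig; two common ways to reach it are:

```shell
# Option 1: use the bundled client
microk8s kubectl get pods --all-namespaces

# Option 2: export the MicroK8s kubeconfig for the regular kubectl
# (this overwrites the old kubeadm config - back it up first if needed)
microk8s config > ~/.kube/config
kubectl get pods --all-namespaces
```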

psakamoori commented 2 years ago

I see - I need to use the command below to get the nodes:

psakamoori@pvlab:~/.kube$ microk8s kubectl get nodes
NAME    STATUS   ROLES   AGE   VERSION
pvlab   Ready            35m   v1.21.12-3+6937f71915b56b

psakamoori@pvlab:~/.kube$ microk8s kubectl get pods
No resources found in default namespace.

psakamoori@pvlab:~/.kube$ microk8s kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-node-fspn9                         1/1     Running   0          38m
kube-system   coredns-7f9c69c78c-brsjl                  1/1     Running   0          38m
ingress       nginx-ingress-microk8s-controller-zz2sk   1/1     Running   0          38m
kube-system   hostpath-provisioner-566686b959-62j5k     1/1     Running   0          38m
kube-system   calico-kube-controllers-f7868dd95-bqx4b   1/1     Running   0          38m

I hope my understanding is correct here.

psakamoori commented 2 years ago

Able to resolve this by following the steps explained above.