canonical / microk8s

MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.
https://microk8s.io
Apache License 2.0

microk8s status does not work #4302

Closed: GithubRyze closed this issue 1 year ago

GithubRyze commented 1 year ago

Summary

Version info

MicroK8s v1.28.3 revision 6089

Logs

ewell@SZ-K8sMaster1Ubuntu:~/yaml$ microk8s status
microk8s is not running. Use microk8s inspect for a deeper inspection.
ewell@SZ-K8sMaster1Ubuntu:~/yaml$ microk8s status
microk8s is not running. Use microk8s inspect for a deeper inspection.
ewell@SZ-K8sMaster1Ubuntu:~/yaml$ microk8s status
Traceback (most recent call last):
  File "/snap/microk8s/6089/scripts/wrappers/status.py", line 221, in <module>
    enabled, disabled = get_status(available_addons, is_ready)
  File "/snap/microk8s/6089/scripts/wrappers/common/utils.py", line 566, in get_status
    kube_output = kubectl_get("all,ingress")
  File "/snap/microk8s/6089/scripts/wrappers/common/utils.py", line 248, in kubectl_get
    return run(KUBECTL, "get", cmd, "--all-namespaces", die=False)
  File "/snap/microk8s/6089/scripts/wrappers/common/utils.py", line 69, in run
    result.check_returncode()
  File "/snap/microk8s/6089/usr/lib/python3.8/subprocess.py", line 448, in check_returncode
    raise CalledProcessError(self.returncode, self.args, self.stdout,
subprocess.CalledProcessError: Command '('/snap/microk8s/6089/microk8s-kubectl.wrapper', 'get', 'all,ingress', '--all-namespaces')' returned non-zero exit status 1.
ewell@SZ-K8sMaster1Ubuntu:~/yaml$ microk8s status
microk8s is not running. Use microk8s inspect for a deeper inspection.
ewell@SZ-K8sMaster1Ubuntu:~/yaml$ microk8s status
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    dashboard            # (core) The Kubernetes dashboard
    dns                  # (core) CoreDNS
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
  disabled:
    cert-manager         # (core) Cloud native certificate management
    cis-hardening        # (core) Apply CIS K8s hardening
    community            # (core) The community addons repository
    gpu                  # (core) Automatic enablement of Nvidia CUDA
    host-access          # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    ingress              # (core) Ingress controller for external access
    kube-ovn             # (core) An advanced network fabric for Kubernetes
    mayastor             # (core) OpenEBS MayaStor
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    minio                # (core) MinIO object storage
    observability        # (core) A lightweight observability stack for logs, traces and metrics
    prometheus           # (core) Prometheus operator for monitoring and logging
    rbac                 # (core) Role-Based Access Control for authorisation
    registry             # (core) Private image registry exposed on localhost:32000
    rook-ceph            # (core) Distributed Ceph storage using Rook
    storage              # (core) Alias to hostpath-storage add-on, deprecated

Other commands also fail intermittently:

ewell@SZ-K8sMaster1Ubuntu:~/yaml$ kubectl get all
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   19h
ewell@SZ-K8sMaster1Ubuntu:~/yaml$ kubectl get all
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?

What Should Happen Instead?

Reproduction Steps

  1. ...
  2. ...

Introspection Report

Can you suggest a fix?

Microk8s inspect

inspection-report-20231115_134654.tar.gz

neoaggelos commented 1 year ago

Hi @GithubRyze

I see Kubelet failing to start because it cannot use the containerd socket:

Nov 15 13:34:55 SZ-K8sMaster1Ubuntu microk8s.daemon-kubelite[376332]: F1115 13:34:55.400614  376332 daemon.go:57] Kubelet exited failed to run Kubelet: validate service connection: validate CRI v1 runtime API for endpoint "/var/snap/microk8s/common/run/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService

and I also see the following in the containerd logs:

Nov 15 10:23:27 SZ-K8sMaster1Ubuntu microk8s.daemon-containerd[3879370]: time="2023-11-15T10:23:27.119865600+08:00" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="invalid plugin config: `configs.tls` cannot be set when `config_path` is provided"
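
For reference, both excerpts come from the snap service journals (the unit names are visible in the log prefixes above); on a systemd host they can be tailed directly, for example:

    sudo journalctl -u snap.microk8s.daemon-kubelite -n 50    # kubelet/apiserver (kubelite)
    sudo journalctl -u snap.microk8s.daemon-containerd -n 50  # container runtime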

Looking at the config files, I believe this comes from the following additional section in containerd-template.toml:

    [plugins."io.containerd.grpc.v1.cri".registry.configs."10.18.12.23".tls]

Please refer to the docs for more information on how to configure your registries: https://microk8s.io/docs/registry-private
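
In short: because config_path is already set for the CRI plugin, per-registry settings such as TLS must live in a hosts.toml file under the certs.d directory, not in a registry.configs block in containerd-template.toml. A minimal sketch of the replacement, assuming the registry at 10.18.12.23 uses a self-signed certificate (skip_verify is a guess at what the removed [.tls] block intended; point ca at a certificate file instead if you have one):

    # /var/snap/microk8s/current/args/certs.d/10.18.12.23/hosts.toml
    # Sketch only: skip_verify = true assumes an insecure/self-signed registry.
    server = "https://10.18.12.23"

    [host."https://10.18.12.23"]
      capabilities = ["pull", "resolve"]
      skip_verify = true

After removing the [.tls] section from containerd-template.toml and adding the hosts.toml, restart MicroK8s (microk8s stop, then microk8s start) so containerd picks up the change.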

GithubRyze commented 1 year ago

OK, thanks!