kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

minikube dashboard can't open browser due to xdg-open command not found. #10163

Closed (minicloudsky closed 3 years ago)

minicloudsky commented 3 years ago

Steps to reproduce the issue (OS version: CentOS 8.3.2011):

  1. install minikube v1.16.0
  2. install docker version 20.10.2, build 2291f61
  3. run minikube start --driver=docker
  4. run minikube dashboard

Full output of failed command:

[mini@tencentcloud ~]$ minikube dashboard

X Exiting due to HOST_BROWSER: failed to open browser: exit status 3

Full output of minikube start command used, if not already included:

[mini@tencentcloud ~]$ minikube start

! Exiting due to GUEST_DRIVER_MISMATCH: The existing "minikube" cluster was created using the "docker" driver, which is incompatible with requested "none" driver.
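(The driver mismatch above typically means a later minikube start was run without the original --driver flag; assuming the cluster contents are disposable, one way to get back to a consistent state is:)

    minikube start --driver=docker    # reuse the existing docker-driver cluster
    # or, to recreate from scratch:
    minikube delete
    minikube start --driver=docker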

Optional: Full output of minikube logs command:

* ==> Docker <==
* -- Logs begin at Fri 2021-01-15 10:25:52 UTC, end at Mon 2021-01-18 08:05:03 UTC. --
* Jan 15 10:25:52 minikube systemd[1]: Starting Docker Application Container Engine...
* Jan 15 10:25:53 minikube dockerd[178]: time="2021-01-15T10:25:53.028610604Z" level=info msg="Starting up"
* Jan 15 10:25:53 minikube dockerd[178]: time="2021-01-15T10:25:53.029722654Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Jan 15 10:25:53 minikube dockerd[178]: time="2021-01-15T10:25:53.029747227Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* ...
* Jan 15 10:25:53 minikube dockerd[178]: time="2021-01-15T10:25:53.084888532Z" level=warning msg="Your kernel does not support cgroup blkio weight"
* Jan 15 10:25:53 minikube dockerd[178]: time="2021-01-15T10:25:53.084915773Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
* Jan 15 10:25:53 minikube dockerd[178]: time="2021-01-15T10:25:53.085041503Z" level=info msg="Loading containers: start."
* Jan 15 10:25:53 minikube dockerd[178]: time="2021-01-15T10:25:53.155128673Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Jan 15 10:25:53 minikube dockerd[178]: time="2021-01-15T10:25:53.195481147Z" level=info msg="Loading containers: done."
* Jan 15 10:25:53 minikube dockerd[178]: time="2021-01-15T10:25:53.215267032Z" level=info msg="Docker daemon" commit=eeddea2 graphdriver(s)=overlay2 version=20.10.0
* Jan 15 10:25:53 minikube dockerd[178]: time="2021-01-15T10:25:53.215350346Z" level=info msg="Daemon has completed initialization"
* Jan 15 10:25:53 minikube systemd[1]: Started Docker Application Container Engine.
* Jan 15 10:25:53 minikube dockerd[178]: time="2021-01-15T10:25:53.246496123Z" level=info msg="API listen on /run/docker.sock"
* Jan 15 10:25:56 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
* Jan 15 10:25:56 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
* ...
* Jan 15 10:25:56 minikube systemd[1]: Stopping Docker Application Container Engine...
* Jan 15 10:25:56 minikube dockerd[178]: time="2021-01-15T10:25:56.682233497Z" level=info msg="Processing signal 'terminated'"
* Jan 15 10:25:56 minikube dockerd[178]: time="2021-01-15T10:25:56.683579702Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = transport is closing" module=libcontainerd namespace=moby
* Jan 15 10:25:56 minikube dockerd[178]: time="2021-01-15T10:25:56.683608794Z" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
* Jan 15 10:25:56 minikube dockerd[178]: time="2021-01-15T10:25:56.684311356Z" level=info msg="Daemon shutdown complete"
* Jan 15 10:25:56 minikube systemd[1]: docker.service: Succeeded.
* Jan 15 10:25:56 minikube systemd[1]: Stopped Docker Application Container Engine.
* Jan 15 10:25:56 minikube systemd[1]: Starting Docker Application Container Engine...
* Jan 15 10:25:56 minikube dockerd[412]: time="2021-01-15T10:25:56.736405203Z" level=info msg="Starting up"
* Jan 15 10:25:56 minikube dockerd[412]: time="2021-01-15T10:25:56.755488109Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
* Jan 15 10:25:56 minikube dockerd[412]: time="2021-01-15T10:25:56.760796746Z" level=info msg="Loading containers: start."
* Jan 15 10:25:56 minikube dockerd[412]: time="2021-01-15T10:25:56.952606108Z" level=info msg="Loading containers: done."
* Jan 15 10:25:56 minikube dockerd[412]: time="2021-01-15T10:25:56.971898603Z" level=info msg="Docker daemon" commit=eeddea2 graphdriver(s)=overlay2 version=20.10.0
* Jan 15 10:25:56 minikube dockerd[412]: time="2021-01-15T10:25:56.971953676Z" level=info msg="Daemon has completed initialization"
* Jan 15 10:25:56 minikube systemd[1]: Started Docker Application Container Engine.
* Jan 15 10:25:57 minikube dockerd[412]: time="2021-01-15T10:25:56.999534988Z" level=info msg="API listen on [::]:2376"
* Jan 15 10:25:57 minikube dockerd[412]: time="2021-01-15T10:25:57.002224250Z" level=info msg="API listen on /var/run/docker.sock"
* ...
*
* ==> container status <==
* CONTAINER       IMAGE                                                                                                  CREATED     STATE    NAME                        ATTEMPT  POD ID
* 0eb98ddd6423f   kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf   2 days ago  Running  dashboard-metrics-scraper   0        f6fc9a3fec555
* 25cc7c57ad688   kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6         2 days ago  Running  kubernetes-dashboard        0        519dd53df1fa1
* 604ac801b4c51   85069258b98ac                                                                                          2 days ago  Running  storage-provisioner         0        0abfda7551b19
* d9e59af5588db   bfe3a36ebd252                                                                                          2 days ago  Running  coredns                     0        f908b98844b3b
* eef77a078b28e   10cc881966cfd                                                                                          2 days ago  Running  kube-proxy                  0        ab5514b5f33b9
* dae83368cb9cd   3138b6e3d4712                                                                                          2 days ago  Running  kube-scheduler              0        73c18a502844c
* 51b23309588f1   ca9843d3b5454                                                                                          2 days ago  Running  kube-apiserver              0        0d04ae9df428d
* 26d88b9838b50   b9fa1895dcaa6                                                                                          2 days ago  Running  kube-controller-manager     0        53ea0a5bf53de
* c016f6456c127   0369cf4303ffd                                                                                          2 days ago  Running  etcd                        0        37507efd6dc3f
*
* ==> coredns [d9e59af5588d] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
* CoreDNS-1.7.0
* linux/amd64, go1.14.4, f59c03d
* [ERROR] plugin/errors: 2 6365689534244864444.557184265316414268. HINFO: read udp 172.17.0.2:46199->192.168.49.1:53: i/o timeout
* [ERROR] plugin/errors: 2 6365689534244864444.557184265316414268. HINFO: read udp 172.17.0.2:39487->192.168.49.1:53: i/o timeout
* ...
* [ERROR] plugin/errors: 2 6365689534244864444.557184265316414268. HINFO: read udp 172.17.0.2:55627->192.168.49.1:53: i/o timeout
*
* ==> describe nodes <==
* Name:               minikube
* Roles:              control-plane,master
* Labels:             beta.kubernetes.io/arch=amd64
*                     beta.kubernetes.io/os=linux
*                     kubernetes.io/arch=amd64
*                     kubernetes.io/hostname=minikube
*                     kubernetes.io/os=linux
*                     minikube.k8s.io/commit=617f26b52345843a63d1a0715c4abf6625cb8862
*                     minikube.k8s.io/name=minikube
*                     minikube.k8s.io/updated_at=2021_01_15T18_26_37_0700
*                     minikube.k8s.io/version=v1.16.0
*                     node-role.kubernetes.io/control-plane=
*                     node-role.kubernetes.io/master=
* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
*                     node.alpha.kubernetes.io/ttl: 0
*                     volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp:  Fri, 15 Jan 2021 10:26:34 +0000
* Taints:
* Unschedulable:      false
* Lease:
*   HolderIdentity:  minikube
*   AcquireTime:
*   RenewTime:       Mon, 18 Jan 2021 08:05:01 +0000
* Conditions:
*   Type            Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
*   ----            ------  -----------------                ------------------               ------                      -------
*   MemoryPressure  False   Mon, 18 Jan 2021 08:03:01 +0000  Fri, 15 Jan 2021 10:26:31 +0000  KubeletHasSufficientMemory  kubelet has sufficient memory available
*   DiskPressure    False   Mon, 18 Jan 2021 08:03:01 +0000  Fri, 15 Jan 2021 10:26:31 +0000  KubeletHasNoDiskPressure    kubelet has no disk pressure
*   PIDPressure     False   Mon, 18 Jan 2021 08:03:01 +0000  Fri, 15 Jan 2021 10:26:31 +0000  KubeletHasSufficientPID     kubelet has sufficient PID available
*   Ready           True    Mon, 18 Jan 2021 08:03:01 +0000  Fri, 15 Jan 2021 10:26:53 +0000  KubeletReady                kubelet is posting ready status
* Addresses:
*   InternalIP:  192.168.49.2
*   Hostname:    minikube
* Capacity:
*   cpu:                2
*   ephemeral-storage:  51539404Ki
*   hugepages-1Gi:      0
*   hugepages-2Mi:      0
*   memory:             3871548Ki
*   pods:               110
* Allocatable:
*   cpu:                2
*   ephemeral-storage:  51539404Ki
*   hugepages-1Gi:      0
*   hugepages-2Mi:      0
*   memory:             3871548Ki
*   pods:               110
* System Info:
*   Machine ID:                 dc5ffb5e6a2c450ba7188394726c0c91
*   System UUID:                e6d4d6c5-fdee-4772-95aa-46bbfa23d7e9
*   Boot ID:                    da5a84d3-49e4-42b8-aecc-3a21665fd2b0
*   Kernel Version:             4.18.0-80.el8.x86_64
*   OS Image:                   Ubuntu 20.04.1 LTS
*   Operating System:           linux
*   Architecture:               amd64
*   Container Runtime Version:  docker://20.10.0
*   Kubelet Version:            v1.20.0
*   Kube-Proxy Version:         v1.20.0
* PodCIDR:                      10.244.0.0/24
* PodCIDRs:                     10.244.0.0/24
* Non-terminated Pods:          (9 in total)
*   Namespace             Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
*   ---------             ----                                       ------------  ----------  ---------------  -------------  ---
*   kube-system           coredns-54d67798b7-s9dfj                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2d21h
*   kube-system           etcd-minikube                              100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2d21h
*   kube-system           kube-apiserver-minikube                    250m (12%)    0 (0%)      0 (0%)           0 (0%)         2d21h
*   kube-system           kube-controller-manager-minikube           200m (10%)    0 (0%)      0 (0%)           0 (0%)         2d21h
*   kube-system           kube-proxy-fpnhv                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d21h
*   kube-system           kube-scheduler-minikube                    100m (5%)     0 (0%)      0 (0%)           0 (0%)         2d21h
*   kube-system           storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d21h
*   kubernetes-dashboard  dashboard-metrics-scraper-c95fcf479-xxbxs  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d21h
*   kubernetes-dashboard  kubernetes-dashboard-6cff4c7c4f-pgbxd      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d21h
* Allocated resources:
*   (Total limits may be over 100 percent, i.e., overcommitted.)
*   Resource           Requests    Limits
*   --------           --------    ------
*   cpu                750m (37%)  0 (0%)
*   memory             170Mi (4%)  170Mi (4%)
*   ephemeral-storage  100Mi (0%)  0 (0%)
*   hugepages-1Gi      0 (0%)      0 (0%)
*   hugepages-2Mi      0 (0%)      0 (0%)
* Events:
*
* ==> dmesg <==
* [Dec 2 06:05] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
* [  +2.200079] systemd: 18 output lines suppressed due to ratelimiting
* [  +0.621565] systemd[1]: Configuration file /usr/lib/systemd/system/qcloud-srv.service is marked executable. Please remove executable permission bits. Proceeding anyway.
* [Jan15 08:24] systemd: 30 output lines suppressed due to ratelimiting
* [  +0.082405] systemd[1]: Configuration file /usr/lib/systemd/system/qcloud-srv.service is marked executable. Please remove executable permission bits. Proceeding anyway.
* [Jan15 08:25] systemd[1]: Configuration file /usr/lib/systemd/system/qcloud-srv.service is marked executable. Please remove executable permission bits. Proceeding anyway.
* [ +26.411137] Microcode revisions 0xda and higher for Intel Skylake-H/S/Xeon E3 v5 (family 6, model 94, stepping 3; CPUID 0x506e3) are disabled as they may cause system instability; the previously published revision 0xd6 is used instead.
*               Please refer to /usr/share/doc/microcode_ctl/caveats/06-5e-03_readme and /usr/share/doc/microcode_ctl/README.caveats for details.
* [Jan15 08:49] vboxdrv: loading out-of-tree module taints kernel.
* [  +0.007806] vboxdrv: fAsync=0 offMin=0x2b6 offMax=0x3a1e
* [  +0.306125] VBoxNetFlt: Successfully started.
* [  +0.002855] VBoxNetAdp: Successfully started.
*
* ==> etcd [c016f6456c12] <==
* 2021-01-18 07:55:46.372176 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* ...
* 2021-01-18 07:56:36.270654 I | mvcc: store.index: compact 176394
* 2021-01-18 07:56:36.271202 I | mvcc: finished scheduled compaction at 176394 (took 370.067µs)
* ...
* 2021-01-18 08:01:36.275657 I | mvcc: store.index: compact 176605
* 2021-01-18 08:01:36.276939 I | mvcc: finished scheduled compaction at 176605 (took 1.100467ms)
* ...
* 2021-01-18 08:04:56.372153 I | etcdserver/api/etcdhttp: /health OK (status code 200)
*
* ==> kernel <==
* 08:05:04 up 47 days, 1:59, 0 users, load average: 0.14, 0.32, 0.28
* Linux minikube 4.18.0-80.el8.x86_64 #1 SMP Tue Jun 4 09:19:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
* PRETTY_NAME="Ubuntu 20.04.1 LTS"
*
* ==> kube-apiserver [51b23309588f] <==
* I0118 07:52:45.114623 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0118 07:52:45.114639 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0118 07:53:17.028961 1 client.go:360] parsed scheme: "passthrough"
* I0118 07:53:17.029004 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0118 07:53:17.029011 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* ...
* W0118 08:04:35.925701 1 watcher.go:220] watch chan error: etcdserver: mvcc: required revision has been compacted
* I0118 08:04:45.391587 1 client.go:360] parsed scheme: "passthrough"
* I0118 08:04:45.391627 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0118 08:04:45.391643 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
*
* ==> kube-controller-manager [26d88b9838b5] <==
* I0115 10:26:53.175619 1 shared_informer.go:247] Caches are synced for HPA
* I0115 10:26:53.194889 1 shared_informer.go:247] Caches are synced for job
* ...
* E0115 10:26:53.263236 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
* I0115 10:26:53.297654 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-c95fcf479 to 1"
* I0115 10:26:53.304120 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6cff4c7c4f to 1"
* ...
* I0115 10:26:53.455262 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
* W0115 10:26:53.457581 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
* I0115 10:26:53.457609 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
* ...
* I0115 10:26:58.457827 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
* I0115 12:26:51.652541 1 cleaner.go:180] Cleaning CSR "csr-r4jb4" as it is more than 1h0m0s old and approved.
*
* ==> kube-proxy [eef77a078b28] <==
* I0115 10:26:54.173235 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
* I0115 10:26:54.173281 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
* W0115 10:26:54.209709 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
* I0115 10:26:54.209801 1 server_others.go:185] Using iptables Proxier.
* I0115 10:26:54.210016 1 server.go:650] Version: v1.20.0
* I0115 10:26:54.210274 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* E0115 10:26:54.210572 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
* ...
* I0115 10:26:54.311348 1 shared_informer.go:247] Caches are synced for endpoint slice config
* I0115 10:26:54.311353 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-scheduler [dae83368cb9c] <==
* I0115 10:26:30.266076 1 serving.go:331] Generated self-signed cert in-memory
* W0115 10:26:34.547105 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0115 10:26:34.547127 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0115 10:26:34.547135 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0115 10:26:34.547140 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0115 10:26:34.584141 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
* E0115 10:26:34.600390 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0115 10:26:34.602017 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* ...
* I0115 10:26:36.084326 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Fri 2021-01-15 10:25:52 UTC, end at Mon 2021-01-18 08:05:05 UTC. --
* Jan 15 10:26:43 minikube kubelet[2674]: I0115 10:26:43.986542 2674 kubelet_node_status.go:109] Node minikube was previously registered
* Jan 15 10:26:43 minikube kubelet[2674]: I0115 10:26:43.986615 2674 kubelet_node_status.go:74] Successfully registered node minikube
* Jan 15 10:26:44 minikube kubelet[2674]: I0115 10:26:44.046159 2674 cpu_manager.go:193] [cpumanager] starting with none policy
* Jan 15 10:26:44 minikube kubelet[2674]: I0115 10:26:44.048187 2674 plugin_manager.go:114] Starting Kubelet Plugin Manager
* Jan 15 10:26:44 minikube kubelet[2674]: I0115 10:26:44.186477 2674 topology_manager.go:187] [topologymanager] Topology Admit Handler
* Jan 15 10:26:44 minikube kubelet[2674]: I0115 10:26:44.238324 2674 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/64b70ee7f1fb816589811c57662f37af-kubeconfig") pod "kube-controller-manager-minikube" (UID: "64b70ee7f1fb816589811c57662f37af")
* ...
* Jan 15 10:26:44 minikube kubelet[2674]: I0115 10:26:44.238557 2674 reconciler.go:157] Reconciler: start to sync state
* Jan 15 10:26:53 minikube kubelet[2674]: I0115 10:26:53.549171 2674 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
* Jan 15 10:26:53 minikube kubelet[2674]: I0115 10:26:53.549487 2674 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
* Jan 15 10:26:56 minikube kubelet[2674]: W0115 10:26:56.753695 2674 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-54d67798b7-s9dfj through plugin: invalid network status for
* ...
* Jan 15 10:27:15 minikube kubelet[2674]: W0115 10:27:15.250905 2674 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479-xxbxs through plugin: invalid network status for
*
* ==> kubernetes-dashboard [25cc7c57ad68] <==
* 2021/01/15 10:27:04 Using namespace: kubernetes-dashboard
* 2021/01/15 10:27:04 Using in-cluster config to connect to apiserver
* 2021/01/15 10:27:04 Using secret token for csrf signing
* 2021/01/15 10:27:04 Initializing csrf token from kubernetes-dashboard-csrf secret
* 2021/01/15 10:27:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
* 2021/01/15 10:27:04 Successful initial request to the apiserver, version: v1.20.0
* 2021/01/15 10:27:04 Generating JWE encryption key
* 2021/01/15 10:27:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
* 2021/01/15 10:27:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
* 2021/01/15 10:27:04 Initializing JWE encryption key from synchronized object
* 2021/01/15 10:27:04 Creating in-cluster Sidecar client
* 2021/01/15 10:27:04 Serving insecurely on HTTP port: 9090
* 2021/01/15 10:27:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
* 2021/01/15 10:27:34 Successful request to sidecar
* 2021/01/15 10:27:04 Starting overwatch
*
* ==> storage-provisioner [604ac801b4c5] <==
* I0115 10:27:02.654829 1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
* I0115 10:27:02.662359 1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
* I0115 10:27:02.662393 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
* I0115 10:27:02.675635 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
* I0115 10:27:02.676119 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_ee2989a7-89fd-41c4-bce6-8306b17c966c!
* I0115 10:27:02.676151 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c6af455-f7b4-4dc2-8237-802c8851b1c3", APIVersion:"v1", ResourceVersion:"514", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_ee2989a7-89fd-41c4-bce6-8306b17c966c became leader
* I0115 10:27:02.776286 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_ee2989a7-89fd-41c4-bce6-8306b17c966c!

I guess perhaps I don't have a browser installed, so I ran yum install firefox; however, I still get the same error as above. Any suggestions or guidelines would be appreciated. Thanks in advance.
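(For what it's worth: on CentOS the xdg-open command itself is shipped in the xdg-utils package, so installing a browser alone does not provide it. If the error is literally "xdg-open: command not found", something like the following may be needed first:)

    sudo yum install -y xdg-utils firefox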

afbjorklund commented 3 years ago

You also need to set your preferred web browser before xdg-open will work correctly.

xdg-open 'http://www.freedesktop.org/'

Normally this is done with your desktop environment, but you can also set $BROWSER...
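For example, assuming firefox is installed and on your PATH, something like:

    export BROWSER=firefox
    xdg-open 'http://www.freedesktop.org/'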

I thought that firefox would be included in the default list (when a display is available), though?

afbjorklund commented 3 years ago

It seems that the likely issue is that you are running this from a text terminal, not X11. I don't think the kubernetes dashboard works very well in such a CLI environment...

You can use minikube dashboard --url and then set up a tunnel using ssh, as sketched below. Or find another dashboard that works in text mode, like "k1s" or "k9s" or so?
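A rough sketch of the tunnel idea (USER, SERVER, and PORT are placeholders; PORT is whatever port minikube dashboard --url prints):

    # on the remote server:
    minikube dashboard --url
    # -> prints something like http://127.0.0.1:PORT/...

    # on your local machine, forward that port over ssh:
    ssh -L PORT:127.0.0.1:PORT USER@SERVER
    # then open http://127.0.0.1:PORT/ in your local browser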

minicloudsky commented 3 years ago

> It seems that the likely issue is that you are running this from a text terminal, not X11. I don't think the kubernetes dashboard works very well in such a CLI environment...
>
> You can use minikube dashboard --url and then set up a tunnel using ssh. Or find another dashboard that works in text mode, like "k1s" or "k9s" or so?

thanks, I will try k9s or rancher instead

medyagh commented 3 years ago

I am curious, @minicloudsky: any luck with k1s or k9s? If yes, we could add this as a suggestion for our users.

medyagh commented 3 years ago

/triage needs-information
/kind support

minicloudsky commented 3 years ago

> I am curious, @minicloudsky: any luck with k1s or k9s? If yes, we could add this as a suggestion for our users.

yes, I am interested in k8s and I have tried rancher, kuboard, and kubesphere. Any suggestions will be appreciated

medyagh commented 3 years ago

for a terminal browser of k8s I suggest using k9s (https://github.com/derailed/k9s). I am closing this issue for now as it is not related to minikube.
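For example, assuming k9s is installed and minikube start has already written the "minikube" context to your kubeconfig, attaching it to the cluster is just:

    k9s --context minikube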