kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Unable to access Minikube dashboard from Local machine without GUI #10592

Closed: vcashadoop closed this issue 3 years ago

vcashadoop commented 3 years ago

Steps to reproduce the issue:

  1. Installed Oracle VM (VirtualBox) on my laptop, with CentOS 7 running in the VM.
  2. Installed minikube in the CentOS 7 guest.
  3. Ran the command minikube dashboard --url=true.
  4. Ran kubectl proxy, which printed: Starting to serve on 127.0.0.1:8001

When I try to access the above URL, e.g. http://127.0.0.1:8001 (or port 80), it does not open in my local browser.

Please let me know how to access this in my Windows browser.
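
(To illustrate the symptom, as a rough sketch: the proxy listens only on the guest's loopback, so the same URL behaves differently inside and outside the VM.)

    # inside the CentOS 7 guest, the proxy answers on its own loopback:
    curl http://127.0.0.1:8001/version

    # from the Windows host, 127.0.0.1 is the host's own loopback, so the
    # same request never reaches the proxy running inside the guest.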

ilya-zuyev commented 3 years ago

Hi @vcashadoop! Thanks for reporting this. Could you provide logs of minikube start -v=5 --alsologtostderr?

afbjorklund commented 3 years ago

If you run minikube in a VM, you need to run the browser in the VM.
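
(If you do want to reach it from the host instead, one option, sketched here, is to make kubectl proxy listen on all interfaces. Note this turns off the proxy's host filtering, so only do it on a trusted network.)

    # inside the VM: bind the proxy to all interfaces instead of loopback only
    kubectl proxy --address='0.0.0.0' --accept-hosts='.*' --port=8001
    # then browse from the host to http://<vm-ip>:8001/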

vcashadoop commented 3 years ago

minikube start -v=5 --alsologtostderr

Hi @ilya-zuyev, please find the logs below. minikube start -v=5 --alsologtostderr I0224 23:13:16.123627 169126 out.go:229] Setting OutFile to fd 1 ... I0224 23:13:16.127685 169126 out.go:276] TERM=xterm,COLORTERM=, which probably does not support color I0224 23:13:16.127710 169126 out.go:242] Setting ErrFile to fd 2... I0224 23:13:16.127720 169126 out.go:276] TERM=xterm,COLORTERM=, which probably does not support color I0224 23:13:16.128021 169126 root.go:291] Updating PATH: /home/vikas/.minikube/bin W0224 23:13:16.128647 169126 root.go:266] Error reading config file at /home/vikas/.minikube/config/config.json: open /home/vikas/.minikube/config/config.json: no such file or directory I0224 23:13:16.129827 169126 out.go:236] Setting JSON to false I0224 23:13:16.148798 169126 start.go:106] hostinfo: {"hostname":"localhost.localdomain","uptime":31571,"bootTime":1614157025,"procs":119,"os":"linux","platform":"centos","platformFamily":"rhel","platformVersion":"7.9.2009","kernelVersion":"3.10.0-1062.el7.x86_64","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"35ce9ced-c0c6-a746-91db-a2196f30bf3f"} I0224 23:13:16.148994 169126 start.go:116] virtualization: I0224 23:13:16.170594 169126 out.go:119] * minikube v1.17.1 on Centos 7.9.2009

I0224 23:13:25.402728 169126 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0224 23:13:25.613158 169126 main.go:119] libmachine: Using SSH client type: native I0224 23:13:25.613673 169126 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x7f4aa0] 0x7f4a60 [] 0s} 127.0.0.1 49168 } I0224 23:13:25.613718 169126 main.go:119] libmachine: About to run SSH command:

            if ! grep -xq '.*\sminikube' /etc/hosts; then
                    if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                    else
                            echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
                    fi
            fi

I0224 23:13:25.960660 169126 main.go:119] libmachine: SSH cmd err, output: : I0224 23:13:25.965520 169126 ubuntu.go:175] set auth options {CertDir:/home/vikas/.minikube CaCertPath:/home/vikas/.minikube/certs/ca.pem CaPrivateKeyPath:/home/vikas/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/vikas/.minikube/machines/server.pem ServerKeyPath:/home/vikas/.minikube/machines/server-key.pem ClientKeyPath:/home/vikas/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/vikas/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/vikas/.minikube} I0224 23:13:25.965612 169126 ubuntu.go:177] setting up certificates I0224 23:13:25.965631 169126 provision.go:83] configureAuth start I0224 23:13:25.965732 169126 cli_runner.go:111] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0224 23:13:26.164804 169126 provision.go:137] copyHostCerts I0224 23:13:26.164864 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/certs/key.pem -> /home/vikas/.minikube/key.pem I0224 23:13:26.164925 169126 exec_runner.go:145] found /home/vikas/.minikube/key.pem, removing ... I0224 23:13:26.164941 169126 exec_runner.go:190] rm: /home/vikas/.minikube/key.pem I0224 23:13:26.165076 169126 exec_runner.go:152] cp: /home/vikas/.minikube/certs/key.pem --> /home/vikas/.minikube/key.pem (1675 bytes) I0224 23:13:26.165652 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/certs/ca.pem -> /home/vikas/.minikube/ca.pem I0224 23:13:26.165710 169126 exec_runner.go:145] found /home/vikas/.minikube/ca.pem, removing ... I0224 23:13:26.165733 169126 exec_runner.go:190] rm: /home/vikas/.minikube/ca.pem I0224 23:13:26.165819 169126 exec_runner.go:152] cp: /home/vikas/.minikube/certs/ca.pem --> /home/vikas/.minikube/ca.pem (1074 bytes) I0224 23:13:26.168797 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/certs/cert.pem -> /home/vikas/.minikube/cert.pem I0224 23:13:26.168874 169126 exec_runner.go:145] found /home/vikas/.minikube/cert.pem, removing ... 
I0224 23:13:26.168887 169126 exec_runner.go:190] rm: /home/vikas/.minikube/cert.pem I0224 23:13:26.169483 169126 exec_runner.go:152] cp: /home/vikas/.minikube/certs/cert.pem --> /home/vikas/.minikube/cert.pem (1119 bytes) I0224 23:13:26.170048 169126 provision.go:111] generating server cert: /home/vikas/.minikube/machines/server.pem ca-key=/home/vikas/.minikube/certs/ca.pem private-key=/home/vikas/.minikube/certs/ca-key.pem org=vikas.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube] I0224 23:13:27.227333 169126 provision.go:165] copyRemoteCerts I0224 23:13:27.230825 169126 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0224 23:13:27.231101 169126 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0224 23:13:27.434446 169126 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:49168 SSHKeyPath:/home/vikas/.minikube/machines/minikube/id_rsa Username:docker} I0224 23:13:27.599042 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/certs/ca.pem -> /etc/docker/ca.pem I0224 23:13:27.599134 169126 ssh_runner.go:310] scp /home/vikas/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes) I0224 23:13:27.690772 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/machines/server.pem -> /etc/docker/server.pem I0224 23:13:27.691268 169126 ssh_runner.go:310] scp /home/vikas/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes) I0224 23:13:27.783308 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem I0224 23:13:27.783997 169126 ssh_runner.go:310] scp /home/vikas/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0224 23:13:27.879474 169126 provision.go:86] duration metric: configureAuth took 1.913811285s I0224 23:13:27.879513 169126 ubuntu.go:193] setting minikube options for container-runtime I0224 23:13:27.879829 169126 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0224 23:13:28.066151 169126 main.go:119] libmachine: Using SSH client type: native I0224 23:13:28.066466 169126 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x7f4aa0] 0x7f4a60 [] 0s} 127.0.0.1 49168 } I0224 23:13:28.066490 169126 main.go:119] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0224 23:13:28.332395 169126 main.go:119] libmachine: SSH cmd err, output: : overlay

I0224 23:13:28.332462 169126 ubuntu.go:71] root file system type: overlay I0224 23:13:28.333275 169126 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ... I0224 23:13:28.333675 169126 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0224 23:13:28.537872 169126 main.go:119] libmachine: Using SSH client type: native I0224 23:13:28.538140 169126 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x7f4aa0] 0x7f4a60 [] 0s} 127.0.0.1 49168 } I0224 23:13:28.538277 169126 main.go:119] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %s "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket

[Service] Type=notify Restart=on-failure StartLimitBurst=3 StartLimitIntervalSec=60

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0224 23:13:28.802579 169126 main.go:119] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket

[Service] Type=notify Restart=on-failure StartLimitBurst=3 StartLimitIntervalSec=60

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install] WantedBy=multi-user.target

I0224 23:13:28.802756 169126 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0224 23:13:28.985582 169126 main.go:119] libmachine: Using SSH client type: native I0224 23:13:28.986773 169126 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x7f4aa0] 0x7f4a60 [] 0s} 127.0.0.1 49168 } I0224 23:13:28.987398 169126 main.go:119] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0224 23:13:29.241558 169126 main.go:119] libmachine: SSH cmd err, output: : I0224 23:13:29.241596 169126 machine.go:91] provisioned docker machine in 7.444715207s I0224 23:13:29.241614 169126 start.go:267] post-start starting for "minikube" (driver="docker") I0224 23:13:29.241625 169126 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0224 23:13:29.241721 169126 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0224 23:13:29.242178 169126 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0224 23:13:29.401556 169126 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:49168 SSHKeyPath:/home/vikas/.minikube/machines/minikube/id_rsa Username:docker} I0224 23:13:29.537956 169126 ssh_runner.go:149] Run: cat /etc/os-release I0224 23:13:29.551791 169126 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0224 23:13:29.551842 169126 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0224 23:13:29.551859 169126 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0224 23:13:29.551870 169126 info.go:137] Remote host: Ubuntu 20.04.1 LTS I0224 23:13:29.551891 169126 filesync.go:118] Scanning /home/vikas/.minikube/addons for local assets ... I0224 23:13:29.551974 169126 filesync.go:118] Scanning /home/vikas/.minikube/files for local assets ... 
I0224 23:13:29.552019 169126 start.go:270] post-start completed in 310.393291ms I0224 23:13:29.555898 169126 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0224 23:13:29.556006 169126 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0224 23:13:29.708075 169126 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:49168 SSHKeyPath:/home/vikas/.minikube/machines/minikube/id_rsa Username:docker} I0224 23:13:29.833376 169126 fix.go:56] fixHost completed within 12.184274287s I0224 23:13:29.833410 169126 start.go:80] releasing machines lock for "minikube", held for 12.184351429s I0224 23:13:29.833928 169126 cli_runner.go:111] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0224 23:13:29.994488 169126 ssh_runner.go:149] Run: systemctl --version I0224 23:13:29.994564 169126 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0224 23:13:29.996473 169126 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/ I0224 23:13:29.996583 169126 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0224 23:13:30.129500 169126 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:49168 SSHKeyPath:/home/vikas/.minikube/machines/minikube/id_rsa Username:docker} I0224 23:13:30.136413 169126 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:49168 SSHKeyPath:/home/vikas/.minikube/machines/minikube/id_rsa Username:docker} I0224 23:13:30.270199 169126 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd I0224 23:13:30.675608 169126 ssh_runner.go:149] Run: sudo systemctl cat docker.service I0224 23:13:30.707396 169126 cruntime.go:200] skipping containerd shutdown because we are bound to it I0224 23:13:30.707838 169126 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio I0224 23:13:30.743064 169126 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0224 23:13:30.801303 169126 ssh_runner.go:149] Run: sudo systemctl cat docker.service I0224 23:13:30.829438 169126 ssh_runner.go:149] Run: sudo systemctl daemon-reload I0224 23:13:31.307225 169126 ssh_runner.go:149] Run: sudo systemctl start docker I0224 23:13:31.360601 169126 ssh_runner.go:149] Run: docker version --format {{.Server.Version}} I0224 23:13:31.667957 169126 out.go:140] * Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...

-- /stdout -- I0224 23:13:32.124956 169126 docker.go:326] Images already preloaded, skipping extraction I0224 23:13:32.125063 169126 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}} / I0224 23:13:32.290710 169126 docker.go:389] Got preloaded images: -- stdout -- k8s.gcr.io/kube-proxy:v1.20.2 k8s.gcr.io/kube-controller-manager:v1.20.2 k8s.gcr.io/kube-apiserver:v1.20.2 k8s.gcr.io/kube-scheduler:v1.20.2 kubernetesui/dashboard:v2.1.0 gcr.io/k8s-minikube/storage-provisioner:v4 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 kubernetesui/dashboard:v2.0.0-beta8 kubernetesui/metrics-scraper:v1.0.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

-- /stdout -- I0224 23:13:32.290757 169126 cache_images.go:73] Images are preloaded, skipping loading I0224 23:13:32.290852 169126 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}} - I0224 23:13:32.763309 169126 cni.go:74] Creating CNI manager for "" I0224 23:13:32.763354 169126 cni.go:139] CNI unnecessary in this configuration, recommending no CNI I0224 23:13:32.763873 169126 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16 I0224 23:13:32.763929 169126 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0224 23:13:32.764148 169126 kubeadm.go:154] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens:

I0224 23:13:32.764875 169126 kubeadm.go:868] kubelet [Unit] Wants=docker.socket

[Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install] config: {KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0224 23:13:32.765049 169126 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2 I0224 23:13:32.810154 169126 binaries.go:44] Found k8s binaries, skipping transfer I0224 23:13:32.810258 169126 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube \ I0224 23:13:32.859826 169126 ssh_runner.go:310] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes) | I0224 23:13:32.949692 169126 ssh_runner.go:310] scp memory --> /lib/systemd/system/kubelet.service (349 bytes) I0224 23:13:33.019709 169126 ssh_runner.go:310] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1845 bytes) / I0224 23:13:33.099782 169126 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0224 23:13:33.111787 169126 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" - I0224 23:13:33.164861 169126 certs.go:52] Setting up /home/vikas/.minikube/profiles/minikube for IP: 192.168.49.2 I0224 23:13:33.164970 169126 certs.go:171] skipping minikubeCA CA generation: /home/vikas/.minikube/ca.key I0224 23:13:33.165007 169126 certs.go:171] skipping proxyClientCA CA generation: /home/vikas/.minikube/proxy-client-ca.key I0224 23:13:33.165102 169126 certs.go:275] skipping minikube-user signed cert generation: /home/vikas/.minikube/profiles/minikube/client.key I0224 23:13:33.165137 169126 certs.go:275] skipping minikube signed cert generation: /home/vikas/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 I0224 23:13:33.165175 169126 certs.go:275] skipping aggregator signed cert generation: /home/vikas/.minikube/profiles/minikube/proxy-client.key I0224 23:13:33.165192 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt I0224 23:13:33.165217 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key I0224 23:13:33.165239 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt I0224 23:13:33.165261 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key I0224 23:13:33.165281 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt I0224 23:13:33.165301 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/ca.key -> /var/lib/minikube/certs/ca.key I0224 23:13:33.165323 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt I0224 23:13:33.165346 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key I0224 23:13:33.165453 169126 certs.go:354] found cert: /home/vikas/.minikube/certs/home/vikas/.minikube/certs/ca-key.pem (1675 bytes) I0224 
23:13:33.165527 169126 certs.go:354] found cert: /home/vikas/.minikube/certs/home/vikas/.minikube/certs/ca.pem (1074 bytes) I0224 23:13:33.165584 169126 certs.go:354] found cert: /home/vikas/.minikube/certs/home/vikas/.minikube/certs/cert.pem (1119 bytes) I0224 23:13:33.165631 169126 certs.go:354] found cert: /home/vikas/.minikube/certs/home/vikas/.minikube/certs/key.pem (1675 bytes) I0224 23:13:33.165682 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem I0224 23:13:33.167781 169126 ssh_runner.go:310] scp /home/vikas/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) \ I0224 23:13:33.256351 169126 ssh_runner.go:310] scp /home/vikas/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) | I0224 23:13:33.347918 169126 ssh_runner.go:310] scp /home/vikas/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) / I0224 23:13:33.448274 169126 ssh_runner.go:310] scp /home/vikas/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) - I0224 23:13:33.540491 169126 ssh_runner.go:310] scp /home/vikas/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0224 23:13:33.630013 169126 ssh_runner.go:310] scp /home/vikas/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) \ I0224 23:13:33.734659 169126 ssh_runner.go:310] scp /home/vikas/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) | I0224 23:13:33.823828 169126 ssh_runner.go:310] scp /home/vikas/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes) / I0224 23:13:33.909719 169126 ssh_runner.go:310] scp /home/vikas/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) - I0224 23:13:33.998509 169126 ssh_runner.go:310] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) \ I0224 23:13:34.062767 169126 ssh_runner.go:149] Run: openssl version I0224 23:13:34.096545 169126 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0224 23:13:34.142840 169126 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem | I0224 23:13:34.163966 169126 certs.go:395] hashing: -rw-r--r--. 
1 root root 1111 Feb 24 11:03 /usr/share/ca-certificates/minikubeCA.pem I0224 23:13:34.164200 169126 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0224 23:13:34.191166 169126 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0224 23:13:34.229148 169126 kubeadm.go:370] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} I0224 23:13:34.230835 169126 sshrunner.go:149] Run: docker ps --filter status=paused --filter=name=k8s.*(kube-system) --format={{.ID}} - I0224 23:13:34.407025 169126 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0224 23:13:34.450990 169126 kubeadm.go:381] found existing configuration files, will attempt cluster restart I0224 23:13:34.451028 169126 kubeadm.go:573] restartCluster start I0224 23:13:34.451101 169126 ssh_runner.go:149] Run: sudo test -d /data/minikube \ I0224 23:13:34.492273 169126 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1 stdout:

stderr: I0224 23:13:34.493943 169126 kubeconfig.go:117] verify returned: extract IP: "minikube" does not appear in /home/vikas/.kube/config I0224 23:13:34.494159 169126 kubeconfig.go:128] "minikube" context is missing from /home/vikas/.kube/config - will repair! I0224 23:13:34.494784 169126 lock.go:36] WriteFile acquiring /home/vikas/.kube/config: {Name:mk242a5823dc193988354b0e26363bce478889ed Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0224 23:13:34.508564 169126 kapi.go:59] client config for minikube: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/vikas/.minikube/profiles/minikube/client.crt", KeyFile:"/home/vikas/.minikube/profiles/minikube/client.key", CAFile:"/home/vikas/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x176a3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)} I0224 23:13:34.512206 169126 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new I0224 23:13:34.555641 169126 api_server.go:146] Checking apiserver status ... I0224 23:13:34.555713 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. | W0224 23:13:34.623041 169126 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.*: Process exited with status 1 stdout:

stderr: I0224 23:13:34.623080 169126 kubeadm.go:552] needs reconfigure: apiserver in state Stopped I0224 23:13:34.623106 169126 kubeadm.go:991] stopping kube-system containers ... I0224 23:13:34.623662 169126 sshrunner.go:149] Run: docker ps -a --filter=name=k8s.*(kube-system) --format={{.ID}} - I0224 23:13:34.832590 169126 docker.go:236] Stopping containers: [66292721bd2c ece46c590835 604359a418e2 930090cb4688 5ffc59139863 6032ddb7e2b0 f643034155a7 e4ea59006011 4b82e172e5f4 8d178266ab67 f20e668ed106 cf1aaf1cc89a f760758d5285 3e37a6b3450e 3e35acde36b4 b041d9dcbfd4 0badbb421a98 54a9a5b87ebd 476f3dda2d4a 4526c1ed80ec d004d0215daf d7b13c6ebadd 2d9c9b51cf83 23f54c6534af f12bd6b4f4e3 93a80bce96f4 8e4ec63cf46d 31ccc04e2d06 47a9815a012f a0829d2572d1] I0224 23:13:34.832915 169126 ssh_runner.go:149] Run: docker stop 66292721bd2c ece46c590835 604359a418e2 930090cb4688 5ffc59139863 6032ddb7e2b0 f643034155a7 e4ea59006011 4b82e172e5f4 8d178266ab67 f20e668ed106 cf1aaf1cc89a f760758d5285 3e37a6b3450e 3e35acde36b4 b041d9dcbfd4 0badbb421a98 54a9a5b87ebd 476f3dda2d4a 4526c1ed80ec d004d0215daf d7b13c6ebadd 2d9c9b51cf83 23f54c6534af f12bd6b4f4e3 93a80bce96f4 8e4ec63cf46d 31ccc04e2d06 47a9815a012f a0829d2572d1 | I0224 23:13:35.051594 169126 ssh_runner.go:149] Run: sudo systemctl stop kubelet / I0224 23:13:35.120156 169126 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0224 23:13:35.159759 169126 kubeadm.go:152] found existing configuration files: -rw-------. 1 root root 5615 Feb 24 11:03 /etc/kubernetes/admin.conf -rw-------. 1 root root 5632 Feb 24 17:09 /etc/kubernetes/controller-manager.conf -rw-------. 1 root root 1971 Feb 24 11:04 /etc/kubernetes/kubelet.conf -rw-------. 1 root root 5576 Feb 24 17:09 /etc/kubernetes/scheduler.conf

I0224 23:13:35.159908 169126 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf I0224 23:13:35.210600 169126 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf - I0224 23:13:35.242311 169126 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf I0224 23:13:35.280108 169126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1 stdout:

stderr: I0224 23:13:35.280804 169126 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf I0224 23:13:35.325827 169126 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf \ I0224 23:13:35.368405 169126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1 stdout:

stderr: I0224 23:13:35.368556 169126 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf I0224 23:13:35.410669 169126 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml | I0224 23:13:35.463181 169126 kubeadm.go:649] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml I0224 23:13:35.463233 169126 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml" \ I0224 23:13:36.184966 169126 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml" - I0224 23:13:38.868854 169126 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.683828615s) I0224 23:13:38.868903 169126 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml" / I0224 23:13:40.063812 169126 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml": (1.194878654s) I0224 23:13:40.063860 169126 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml" - I0224 23:13:40.931915 169126 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml" \ I0224 23:13:41.841483 169126 api_server.go:48] waiting for apiserver process to appear ... I0224 23:13:41.841767 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. | I0224 23:13:42.408556 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. / I0224 23:13:42.908502 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. - I0224 23:13:43.409066 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. \ I0224 23:13:43.907989 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. | I0224 23:13:44.406999 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. | I0224 23:13:44.909648 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. / I0224 23:13:45.409717 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. - I0224 23:13:45.908312 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. \ I0224 23:13:46.418204 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. | I0224 23:13:46.908965 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. / I0224 23:13:47.408964 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. / I0224 23:13:47.911390 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. - I0224 23:13:48.416421 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. \ I0224 23:13:48.908362 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube. - I0224 23:13:49.178642 169126 api_server.go:68] duration metric: took 7.337159482s to wait for apiserver process to appear ... I0224 23:13:49.178695 169126 api_server.go:84] waiting for apiserver healthz status ... 
I0224 23:13:49.178728 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0224 23:13:49.179078 169126 api_server.go:231] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refus/ I0224 23:13:49.903926 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... \ I0224 23:14:10.463104 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} W0224 23:14:10.463171 169126 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} - I0224 23:14:10.682030 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0224 23:14:10.757672 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [-]poststarthook/bootstrap-controller failed: reason withheld [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [-]poststarthook/apiservice-registration-controller failed: reason withheld [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed W0224 23:14:10.757736 169126 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [-]poststarthook/bootstrap-controller failed: reason withheld [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [-]poststarthook/apiservice-registration-controller failed: reason withheld [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok 
[+]poststarthook/apiservice-openapi-controller ok healthz check failed - I0224 23:14:11.182588 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... \ I0224 23:14:11.250222 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed W0224 23:14:11.250334 169126 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed \ I0224 23:14:11.684585 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
| I0224 23:14:11.768235 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed W0224 23:14:11.768407 169126 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed | I0224 23:14:12.179515 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
I0224 23:14:12.244436 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed W0224 23:14:12.244515 169126 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed | I0224 23:14:12.684032 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
/ I0224 23:14:12.758601 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed W0224 23:14:12.758964 169126 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed / I0224 23:14:13.181824 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
- I0224 23:14:13.250536 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed W0224 23:14:13.251908 169126 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok healthz check failed - I0224 23:14:13.682292 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... \ I0224 23:14:13.756090 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 200: ok I0224 23:14:13.811387 169126 api_server.go:137] control plane version: v1.20.2 I0224 23:14:13.811452 169126 api_server.go:127] duration metric: took 24.632740302s to wait for apiserver health ... I0224 23:14:13.811499 169126 cni.go:74] Creating CNI manager for "" I0224 23:14:13.811512 169126 cni.go:139] CNI unnecessary in this configuration, recommending no CNI I0224 23:14:13.811527 169126 system_pods.go:41] waiting for kube-system pods to appear ... 
I0224 23:14:13.841432 169126 system_pods.go:57] 8 kube-system pods found I0224 23:14:13.841504 169126 system_pods.go:59] "coredns-74ff55c5b-k9d8b" [a3254a74-dc53-44b9-82e2-73e2b1f4b125] Running I0224 23:14:13.841527 169126 system_pods.go:59] "etcd-minikube" [fea0c7c6-3b55-465d-b31b-70a5cca5c3ef] Running I0224 23:14:13.841545 169126 system_pods.go:59] "kube-apiserver-minikube" [c4b2bbe1-2591-43ca-9499-d50c16366cc3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver]) I0224 23:14:13.841561 169126 system_pods.go:59] "kube-controller-manager-minikube" [7497e591-8765-4d2a-aa63-19c5889fede7] Running I0224 23:14:13.841572 169126 system_pods.go:59] "kube-proxy-4rhg5" [c9342259-9481-4ee7-9c1d-9cd4fb5c3a28] Running I0224 23:14:13.841583 169126 system_pods.go:59] "kube-scheduler-minikube" [8cd967d5-a160-4079-8a0e-e3accead6426] Running I0224 23:14:13.841593 169126 system_pods.go:59] "kubernetes-dashboard-6ff6454fdc-f85c8" [e22fea73-3de8-4ad1-92ad-643c4b8bf0c6] Running I0224 23:14:13.841608 169126 system_pods.go:59] "storage-provisioner" [896d9968-fe67-49c4-aef1-933c8c8a90df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0224 23:14:13.841623 169126 system_pods.go:72] duration metric: took 30.086471ms to wait for pod list to return data ... I0224 23:14:13.841640 169126 node_conditions.go:101] verifying NodePressure condition ... | I0224 23:14:13.869866 169126 node_conditions.go:121] node storage ephemeral capacity is 47940740Ki I0224 23:14:13.869897 169126 node_conditions.go:122] node cpu capacity is 2 I0224 23:14:13.869911 169126 node_conditions.go:104] duration metric: took 28.2629ms to run NodePressure ... 
I0224 23:14:13.869942 169126 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml" / I0224 23:14:17.446831 169126 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (3.576862186s) I0224 23:14:17.446886 169126 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" \ I0224 23:14:17.582365 169126 ops.go:34] apiserver oom_adj: -16 I0224 23:14:17.582396 169126 kubeadm.go:577] restartCluster took 43.131356776s I0224 23:14:17.582412 169126 kubeadm.go:372] StartCluster complete in 43.353276926s I0224 23:14:17.582447 169126 settings.go:142] acquiring lock: {Name:mkcd10eec52e3f27fe3df2047d7cce087c547982 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0224 23:14:17.582897 169126 settings.go:150] Updating kubeconfig: /home/vikas/.kube/config I0224 23:14:17.585864 169126 lock.go:36] WriteFile acquiring /home/vikas/.kube/config: {Name:mk242a5823dc193988354b0e26363bce478889ed Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0224 23:14:17.588761 169126 start.go:198] Will wait 6m0s for node up to I0224 23:14:17.588809 169126 addons.go:375] enableAddons start: toEnable=map[ambassador:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[] I0224 23:14:17.589130 169126 addons.go:55] Setting storage-provisioner=true in profile "minikube" I0224 23:14:17.589168 169126 addons.go:131] Setting addon storage-provisioner=true in "minikube" W0224 23:14:17.589179 169126 addons.go:140] addon storage-provisioner should already be in state true I0224 23:14:17.589206 169126 host.go:66] Checking if "minikube" exists ... I0224 23:14:17.590057 169126 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}} I0224 23:14:17.619374 169126 out.go:119] * Verifying Kubernetes components...

vcashadoop commented 3 years ago

If you run minikube in a VM, you need to run the browser in the VM.

Can you please guide me on how to do this? I have other products installed in the VM, and they can be accessed in my local browser.

vcashadoop commented 3 years ago

Just to add more details, I also ran the following inside the Oracle VM:

    $ minikube ip
    192.168.49.2

    $ minikube dashboard --url=true

[screenshot of the terminal output]
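
(Worth noting: 192.168.49.2 is the minikube container's address on the docker bridge network inside the VM, so it is only reachable from within the VM. A quick check, as a sketch:)

    # inside the CentOS guest this should answer:
    ping -c 1 192.168.49.2
    # from the Windows host it will not, because 192.168.49.2 lives on a
    # docker network that exists only inside the VM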

afbjorklund commented 3 years ago

If you run minikube in a VM, you need to run the browser in the VM.

Can you please guide me on how to do this? I have other products installed in the VM, and they can be accessed in my local browser.

You say that you are running CentOS 7, so I think it comes with Firefox?

    yum install firefox

Installed Oracle VM (VirtualBox) on my laptop, with CentOS 7 running in the VM.

Normally we just let minikube handle the VM, and run with a normal browser.

But you should be able to reach it over the forwarded port (127.0.0.1:8002).
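
(For reference, with VirtualBox NAT networking the forwarding rule could be set from the host roughly like this; the VM name "centos7" and the ports are assumptions:)

    # on the Windows host: forward host port 8002 to guest port 8001 on NAT adapter 1
    VBoxManage controlvm "centos7" natpf1 "dashboard,tcp,127.0.0.1,8002,,8001"
    # note: the service in the guest must listen on 0.0.0.0 (not only 127.0.0.1)
    # for the forwarded traffic to reach it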

vcashadoop commented 3 years ago

If you run minikube in a VM, you need to run the browser in the VM.

Can you please guide me on how to do this? I have other products installed in the VM, and they can be accessed in my local browser.

You say that you are running CentOS 7, so I think it comes with Firefox?

    yum install firefox

Installed Oracle VM (VirtualBox) on my laptop, with CentOS 7 running in the VM.

Normally we just let minikube handle the VM, and run with a normal browser.

But you should be able to reach it over the forwarded port (127.0.0.1:8002).

I'm using a non-GUI setup due to some restrictions. Could you guide me on how to do this without a GUI?

medyagh commented 3 years ago

@vcashadoop without a GUI you could use a CLI tool like https://github.com/derailed/k9s, or just good old curl (though that would be harder to read).
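
(For example, with kubectl proxy running inside the VM, the dashboard page can be fetched through the API server's service proxy; a sketch, assuming the dashboard lives in the usual kubernetes-dashboard namespace:)

    # inside the VM, with `kubectl proxy` serving on 127.0.0.1:8001
    curl http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/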

Does that answer your question? If not, please reopen this issue.

harika1210 commented 2 years ago

Hi, I am very new to minikube. I am running a Linux server in my Oracle VM and have installed minikube on it. Right now, I am experiencing the same problem: I am not able to open the minikube dashboard on my local machine with the URL I am given, and it does not open automatically either. Is there any solution to this?

nuxwin commented 7 months ago
  1. Start the dashboard as follows:
    $ minikube dashboard --port 8080 --url
  2. Run the following command in a terminal on your workstation:
    $ ssh -L 8080:127.0.0.1:8080 <user>@<vmip>

where

  1. <user> is your SSH user
  2. <vmip> is your virtual machine's IP address (LAN)

Then access the dashboard from your workstation's browser at http://127.0.0.1:8080
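
(Putting the two steps together, a minimal session could look like this; the user name and VM address are made-up placeholders:)

    # on the CentOS guest: start the dashboard and leave it running
    minikube dashboard --port 8080 --url

    # on the workstation: open an SSH tunnel to the guest's loopback
    # (-N: no remote shell, just port forwarding)
    ssh -N -L 8080:127.0.0.1:8080 vikas@192.168.56.10

    # then browse to http://127.0.0.1:8080 on the workstation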