Closed: vcashadoop closed this issue 3 years ago
Hi @vcashadoop! Thanks for reporting this. Could you provide logs of minikube start -v=5 --alsologtostderr?
If you run minikube in a VM, you need to run the browser in the VM.
minikube start -v=5 --alsologtostderr
Hi @ilya-zuyev , please find the logs below.

minikube start -v=5 --alsologtostderr
I0224 23:13:16.123627 169126 out.go:229] Setting OutFile to fd 1 ...
I0224 23:13:16.127685 169126 out.go:276] TERM=xterm,COLORTERM=, which probably does not support color
I0224 23:13:16.127710 169126 out.go:242] Setting ErrFile to fd 2...
I0224 23:13:16.127720 169126 out.go:276] TERM=xterm,COLORTERM=, which probably does not support color
I0224 23:13:16.128021 169126 root.go:291] Updating PATH: /home/vikas/.minikube/bin
W0224 23:13:16.128647 169126 root.go:266] Error reading config file at /home/vikas/.minikube/config/config.json: open /home/vikas/.minikube/config/config.json: no such file or directory
I0224 23:13:16.129827 169126 out.go:236] Setting JSON to false
I0224 23:13:16.148798 169126 start.go:106] hostinfo: {"hostname":"localhost.localdomain","uptime":31571,"bootTime":1614157025,"procs":119,"os":"linux","platform":"centos","platformFamily":"rhel","platformVersion":"7.9.2009","kernelVersion":"3.10.0-1062.el7.x86_64","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"35ce9ced-c0c6-a746-91db-a2196f30bf3f"}
I0224 23:13:16.148994 169126 start.go:116] virtualization:
I0224 23:13:16.170594 169126 out.go:119] * minikube v1.17.1 on Centos 7.9.2009
I0224 23:13:25.402728 169126 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0224 23:13:25.613158 169126 main.go:119] libmachine: Using SSH client type: native
I0224 23:13:25.613673 169126 main.go:119] libmachine: &{{{
if ! grep -xq '.*\sminikube' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
else
echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
fi
fi
I0224 23:13:25.960660 169126 main.go:119] libmachine: SSH cmd err, output:
I0224 23:13:28.332462 169126 ubuntu.go:71] root file system type: overlay
I0224 23:13:28.333275 169126 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
I0224 23:13:28.333675 169126 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0224 23:13:28.537872 169126 main.go:119] libmachine: Using SSH client type: native
I0224 23:13:28.538140 169126 main.go:119] libmachine: &{{{
[Service]
Type=notify
Restart=on-failure
StartLimitBurst=3
StartLimitIntervalSec=60
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0224 23:13:28.802579 169126 main.go:119] libmachine: SSH cmd err, output:
[Service]
Type=notify
Restart=on-failure
StartLimitBurst=3
StartLimitIntervalSec=60
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
I0224 23:13:28.802756 169126 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0224 23:13:28.985582 169126 main.go:119] libmachine: Using SSH client type: native
I0224 23:13:28.986773 169126 main.go:119] libmachine: &{{{
-- /stdout --
I0224 23:13:32.124956 169126 docker.go:326] Images already preloaded, skipping extraction
I0224 23:13:32.125063 169126 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0224 23:13:32.290710 169126 docker.go:389] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
gcr.io/k8s-minikube/storage-provisioner:v4
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
kubernetesui/dashboard:v2.0.0-beta8
kubernetesui/metrics-scraper:v1.0.1
k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
-- /stdout --
I0224 23:13:32.290757 169126 cache_images.go:73] Images are preloaded, skipping loading
I0224 23:13:32.290852 169126 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0224 23:13:32.763309 169126 cni.go:74] Creating CNI manager for ""
I0224 23:13:32.763354 169126 cni.go:139] CNI unnecessary in this configuration, recommending no CNI
I0224 23:13:32.763873 169126 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0224 23:13:32.763929 169126 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0224 23:13:32.764148 169126 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 192.168.49.2:10249
I0224 23:13:32.764875 169126 kubeadm.go:868] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0224 23:13:32.765049 169126 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0224 23:13:32.810154 169126 binaries.go:44] Found k8s binaries, skipping transfer
I0224 23:13:32.810258 169126 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0224 23:13:32.859826 169126 ssh_runner.go:310] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0224 23:13:32.949692 169126 ssh_runner.go:310] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0224 23:13:33.019709 169126 ssh_runner.go:310] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1845 bytes)
I0224 23:13:33.099782 169126 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0224 23:13:33.111787 169126 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0224 23:13:33.164861 169126 certs.go:52] Setting up /home/vikas/.minikube/profiles/minikube for IP: 192.168.49.2
I0224 23:13:33.164970 169126 certs.go:171] skipping minikubeCA CA generation: /home/vikas/.minikube/ca.key
I0224 23:13:33.165007 169126 certs.go:171] skipping proxyClientCA CA generation: /home/vikas/.minikube/proxy-client-ca.key
I0224 23:13:33.165102 169126 certs.go:275] skipping minikube-user signed cert generation: /home/vikas/.minikube/profiles/minikube/client.key
I0224 23:13:33.165137 169126 certs.go:275] skipping minikube signed cert generation: /home/vikas/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0224 23:13:33.165175 169126 certs.go:275] skipping aggregator signed cert generation: /home/vikas/.minikube/profiles/minikube/proxy-client.key
I0224 23:13:33.165192 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0224 23:13:33.165217 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0224 23:13:33.165239 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0224 23:13:33.165261 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0224 23:13:33.165281 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0224 23:13:33.165301 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0224 23:13:33.165323 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0224 23:13:33.165346 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0224 23:13:33.165453 169126 certs.go:354] found cert: /home/vikas/.minikube/certs/home/vikas/.minikube/certs/ca-key.pem (1675 bytes)
I0224 23:13:33.165527 169126 certs.go:354] found cert: /home/vikas/.minikube/certs/home/vikas/.minikube/certs/ca.pem (1074 bytes)
I0224 23:13:33.165584 169126 certs.go:354] found cert: /home/vikas/.minikube/certs/home/vikas/.minikube/certs/cert.pem (1119 bytes)
I0224 23:13:33.165631 169126 certs.go:354] found cert: /home/vikas/.minikube/certs/home/vikas/.minikube/certs/key.pem (1675 bytes)
I0224 23:13:33.165682 169126 vm_assets.go:96] NewFileAsset: /home/vikas/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0224 23:13:33.167781 169126 ssh_runner.go:310] scp /home/vikas/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0224 23:13:33.256351 169126 ssh_runner.go:310] scp /home/vikas/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0224 23:13:33.347918 169126 ssh_runner.go:310] scp /home/vikas/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0224 23:13:33.448274 169126 ssh_runner.go:310] scp /home/vikas/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0224 23:13:33.540491 169126 ssh_runner.go:310] scp /home/vikas/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0224 23:13:33.630013 169126 ssh_runner.go:310] scp /home/vikas/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0224 23:13:33.734659 169126 ssh_runner.go:310] scp /home/vikas/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0224 23:13:33.823828 169126 ssh_runner.go:310] scp /home/vikas/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0224 23:13:33.909719 169126 ssh_runner.go:310] scp /home/vikas/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0224 23:13:33.998509 169126 ssh_runner.go:310] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0224 23:13:34.062767 169126 ssh_runner.go:149] Run: openssl version
I0224 23:13:34.096545 169126 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0224 23:13:34.142840 169126 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0224 23:13:34.163966 169126 certs.go:395] hashing: -rw-r--r--. 1 root root 1111 Feb 24 11:03 /usr/share/ca-certificates/minikubeCA.pem
I0224 23:13:34.164200 169126 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0224 23:13:34.191166 169126 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0224 23:13:34.229148 169126 kubeadm.go:370] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:
stderr:
I0224 23:13:34.493943 169126 kubeconfig.go:117] verify returned: extract IP: "minikube" does not appear in /home/vikas/.kube/config
I0224 23:13:34.494159 169126 kubeconfig.go:128] "minikube" context is missing from /home/vikas/.kube/config - will repair!
I0224 23:13:34.494784 169126 lock.go:36] WriteFile acquiring /home/vikas/.kube/config: {Name:mk242a5823dc193988354b0e26363bce478889ed Clock:{} Delay:500ms Timeout:1m0s Cancel:
stderr:
I0224 23:13:34.623080 169126 kubeadm.go:552] needs reconfigure: apiserver in state Stopped
I0224 23:13:34.623106 169126 kubeadm.go:991] stopping kube-system containers ...
I0224 23:13:34.623662 169126 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0224 23:13:34.832590 169126 docker.go:236] Stopping containers: [66292721bd2c ece46c590835 604359a418e2 930090cb4688 5ffc59139863 6032ddb7e2b0 f643034155a7 e4ea59006011 4b82e172e5f4 8d178266ab67 f20e668ed106 cf1aaf1cc89a f760758d5285 3e37a6b3450e 3e35acde36b4 b041d9dcbfd4 0badbb421a98 54a9a5b87ebd 476f3dda2d4a 4526c1ed80ec d004d0215daf d7b13c6ebadd 2d9c9b51cf83 23f54c6534af f12bd6b4f4e3 93a80bce96f4 8e4ec63cf46d 31ccc04e2d06 47a9815a012f a0829d2572d1]
I0224 23:13:34.832915 169126 ssh_runner.go:149] Run: docker stop 66292721bd2c ece46c590835 604359a418e2 930090cb4688 5ffc59139863 6032ddb7e2b0 f643034155a7 e4ea59006011 4b82e172e5f4 8d178266ab67 f20e668ed106 cf1aaf1cc89a f760758d5285 3e37a6b3450e 3e35acde36b4 b041d9dcbfd4 0badbb421a98 54a9a5b87ebd 476f3dda2d4a 4526c1ed80ec d004d0215daf d7b13c6ebadd 2d9c9b51cf83 23f54c6534af f12bd6b4f4e3 93a80bce96f4 8e4ec63cf46d 31ccc04e2d06 47a9815a012f a0829d2572d1
I0224 23:13:35.051594 169126 ssh_runner.go:149] Run: sudo systemctl stop kubelet
I0224 23:13:35.120156 169126 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0224 23:13:35.159759 169126 kubeadm.go:152] found existing configuration files:
-rw-------. 1 root root 5615 Feb 24 11:03 /etc/kubernetes/admin.conf
-rw-------. 1 root root 5632 Feb 24 17:09 /etc/kubernetes/controller-manager.conf
-rw-------. 1 root root 1971 Feb 24 11:04 /etc/kubernetes/kubelet.conf
-rw-------. 1 root root 5576 Feb 24 17:09 /etc/kubernetes/scheduler.conf
I0224 23:13:35.159908 169126 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0224 23:13:35.210600 169126 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0224 23:13:35.242311 169126 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0224 23:13:35.280108 169126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0224 23:13:35.280804 169126 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0224 23:13:35.325827 169126 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0224 23:13:35.368405 169126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0224 23:13:35.368556 169126 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0224 23:13:35.410669 169126 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0224 23:13:35.463181 169126 kubeadm.go:649] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0224 23:13:35.463233 169126 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0224 23:13:36.184966 169126 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0224 23:13:38.868854 169126 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.683828615s)
I0224 23:13:38.868903 169126 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0224 23:13:40.063812 169126 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml": (1.194878654s)
I0224 23:13:40.063860 169126 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0224 23:13:40.931915 169126 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0224 23:13:41.841483 169126 api_server.go:48] waiting for apiserver process to appear ...
I0224 23:13:41.841767 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:13:42.408556 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:13:42.908502 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:13:43.409066 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:13:43.907989 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:13:44.406999 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:13:44.909648 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:13:45.409717 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:13:45.908312 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:13:46.418204 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:13:46.908965 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:13:47.408964 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:13:47.911390 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:13:48.416421 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:13:48.908362 169126 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0224 23:13:49.178642 169126 api_server.go:68] duration metric: took 7.337159482s to wait for apiserver process to appear ...
I0224 23:13:49.178695 169126 api_server.go:84] waiting for apiserver healthz status ...
I0224 23:13:49.178728 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0224 23:13:49.179078 169126 api_server.go:231] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
I0224 23:13:49.903926 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0224 23:14:10.463104 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0224 23:14:10.463171 169126 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} - I0224 23:14:10.682030 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0224 23:14:10.757672 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0224 23:14:10.757736 169126 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0224 23:14:11.182588 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0224 23:14:11.250222 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0224 23:14:11.250334 169126 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0224 23:14:11.684585 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0224 23:14:11.768235 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0224 23:14:11.768407 169126 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0224 23:14:12.179515 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0224 23:14:12.244436 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0224 23:14:12.244515 169126 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0224 23:14:12.684032 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0224 23:14:12.758601 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0224 23:14:12.758964 169126 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0224 23:14:13.181824 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0224 23:14:13.250536 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0224 23:14:13.251908 169126 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0224 23:14:13.682292 169126 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0224 23:14:13.756090 169126 api_server.go:241] https://192.168.49.2:8443/healthz returned 200:
ok
I0224 23:14:13.811387 169126 api_server.go:137] control plane version: v1.20.2
I0224 23:14:13.811452 169126 api_server.go:127] duration metric: took 24.632740302s to wait for apiserver health ...
I0224 23:14:13.811499 169126 cni.go:74] Creating CNI manager for ""
I0224 23:14:13.811512 169126 cni.go:139] CNI unnecessary in this configuration, recommending no CNI
I0224 23:14:13.811527 169126 system_pods.go:41] waiting for kube-system pods to appear ...
I0224 23:14:13.841432 169126 system_pods.go:57] 8 kube-system pods found
I0224 23:14:13.841504 169126 system_pods.go:59] "coredns-74ff55c5b-k9d8b" [a3254a74-dc53-44b9-82e2-73e2b1f4b125] Running
I0224 23:14:13.841527 169126 system_pods.go:59] "etcd-minikube" [fea0c7c6-3b55-465d-b31b-70a5cca5c3ef] Running
I0224 23:14:13.841545 169126 system_pods.go:59] "kube-apiserver-minikube" [c4b2bbe1-2591-43ca-9499-d50c16366cc3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0224 23:14:13.841561 169126 system_pods.go:59] "kube-controller-manager-minikube" [7497e591-8765-4d2a-aa63-19c5889fede7] Running
I0224 23:14:13.841572 169126 system_pods.go:59] "kube-proxy-4rhg5" [c9342259-9481-4ee7-9c1d-9cd4fb5c3a28] Running
I0224 23:14:13.841583 169126 system_pods.go:59] "kube-scheduler-minikube" [8cd967d5-a160-4079-8a0e-e3accead6426] Running
I0224 23:14:13.841593 169126 system_pods.go:59] "kubernetes-dashboard-6ff6454fdc-f85c8" [e22fea73-3de8-4ad1-92ad-643c4b8bf0c6] Running
I0224 23:14:13.841608 169126 system_pods.go:59] "storage-provisioner" [896d9968-fe67-49c4-aef1-933c8c8a90df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0224 23:14:13.841623 169126 system_pods.go:72] duration metric: took 30.086471ms to wait for pod list to return data ...
I0224 23:14:13.841640 169126 node_conditions.go:101] verifying NodePressure condition ...
I0224 23:14:13.869866 169126 node_conditions.go:121] node storage ephemeral capacity is 47940740Ki
I0224 23:14:13.869897 169126 node_conditions.go:122] node cpu capacity is 2
I0224 23:14:13.869911 169126 node_conditions.go:104] duration metric: took 28.2629ms to run NodePressure ...
I0224 23:14:13.869942 169126 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0224 23:14:17.446831 169126 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (3.576862186s)
I0224 23:14:17.446886 169126 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0224 23:14:17.582365 169126 ops.go:34] apiserver oom_adj: -16
I0224 23:14:17.582396 169126 kubeadm.go:577] restartCluster took 43.131356776s
I0224 23:14:17.582412 169126 kubeadm.go:372] StartCluster complete in 43.353276926s
I0224 23:14:17.582447 169126 settings.go:142] acquiring lock: {Name:mkcd10eec52e3f27fe3df2047d7cce087c547982 Clock:{} Delay:500ms Timeout:1m0s Cancel:
If you run minikube in a VM, you need to run the browser in the VM.
Can you please guide me how to do this? Because I have other products installed in the VM and can be accessed in the local browser.
Just to add additional details, I did the below setting in the Oracle VM also.

$ minikube ip
192.168.49.2

$ minikube dashboard --url=true
If you run minikube in a VM, you need to run the browser in the VM.
Can you please guide me how to do this? Because I have other products installed in the VM and can be accessed in the local browser.
You say that you are running CentOS 7, so I think it comes with Firefox ?
yum install firefox
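
For example, a minimal flow inside the VM could look like this (just a sketch; the <dashboard-url> placeholder stands for whatever URL minikube prints):

$ sudo yum install -y firefox
# terminal 1: start the dashboard proxy; it stays in the foreground and prints a URL
$ minikube dashboard --url
# terminal 2: open the printed URL in Firefox running inside the VM
$ firefox "<dashboard-url>"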
1. Installed Oracle VM on my laptop, and have installed minikube in CentOS 7 running in the VM
Normally we just let minikube handle the VM, and run with normal browser.
But you should be able to reach it over the forwarded port (127.0.0.1:8002)
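
As a sketch of what that could look like with VirtualBox NAT forwarding (assumptions: the VM is named "centos7" in VirtualBox, port 8002 is free, and the dashboard addon created the usual kubernetes-dashboard namespace and service; verify with kubectl get svc -A):

# on the host, with the VM powered off (use "VBoxManage controlvm centos7 natpf1 ..." while it runs):
$ VBoxManage modifyvm "centos7" --natpf1 "dashboard,tcp,,8002,,8002"
# inside the VM: expose the dashboard on all interfaces via kubectl proxy;
# --accept-hosts is needed, otherwise the proxy rejects non-localhost Host headers
$ kubectl proxy --address 0.0.0.0 --port 8002 --accept-hosts '.*'
# then browse on the host to:
# http://127.0.0.1:8002/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/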
I'm using a non-GUI setup due to some restrictions. Could you guide me on how to do this without a GUI?
@vcashadoop without a GUI you could use a CLI tool like https://github.com/derailed/k9s or just good old curl (though that would be harder to read).
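
For the curl route, a rough sketch (the kubernetes-dashboard namespace and service names below are what the minikube dashboard addon normally creates; verify with kubectl get svc -A):

# on the VM: start an API proxy (binds to 127.0.0.1:8001 by default)
$ kubectl proxy &
# fetch the dashboard page through the API server's service proxy
$ curl http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
# or skip the dashboard entirely and query the API directly, e.g. list kube-system pods
$ curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/pods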
Does that answer your question? If not, please reopen this.
Hi, I am very new to minikube. I am using a Linux server on my Oracle VM and have installed minikube. Right now, I am experiencing the same problem: I am not able to open the minikube dashboard on my local machine, not even with the URL I am given. It's not opening automatically either. Is there any solution to this?
$ minikube dashboard --port 8080 --url
$ ssh -L 8080:127.0.0.1:8080 <user>@<vmip>
where <user> is your login on the VM and <vmip> is the VM's IP address.
Then access the dashboard from your workstation through a browser at http://127.0.0.1:8080
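
Putting the two together, a minimal session might look like this (each command in its own terminal):

# terminal 1, inside the VM: pin the dashboard to a known port and keep it running
$ minikube dashboard --port 8080 --url
# terminal 2, on the workstation: hold the tunnel open (-N means no remote shell)
$ ssh -N -L 8080:127.0.0.1:8080 <user>@<vmip>
# terminal 3, on the workstation: verify the tunnel before opening the browser
$ curl -I http://127.0.0.1:8080/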
Steps to reproduce the issue:
1. Installed Oracle VM on my laptop, and have installed minikube in CentOS 7 running in the VM
When I try to access it with the above URL, like http://127.0.0.1:8001 or port 80, I am not able to open it in my local browser.
Please let me know how to access this in my Windows browser.