kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

cannot access application on VPN with docker driver #8592

Closed l0n3star closed 4 years ago

l0n3star commented 4 years ago

Steps to reproduce the issue:

macOS 10.15.5
Docker 19.03.8
kubectl 1.16.6-beta.0
minikube 1.11.0

I started minikube with:
minikube start --driver=docker

I deployed this deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-pod
        image: nigelpoulton/k8sbook:latest
        ports:
        - containerPort: 8080

I deployed this service:

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-world
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
    protocol: TCP
  selector:
    app: hello-world
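Both manifests were applied with kubectl; a minimal sketch of the apply-and-verify steps (the filenames hello-deploy.yml and hello-svc.yml are assumptions, not from the report):

```shell
# Apply the Deployment and Service manifests (filenames are assumed)
kubectl apply -f hello-deploy.yml
kubectl apply -f hello-svc.yml

# Block until all 10 replicas are rolled out and available
kubectl rollout status deployment/hello-deploy

# Confirm the NodePort mapping; PORT(S) should read 8080:30001/TCP
kubectl get svc hello-svc
```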

The app is a trivial nodejs web app that runs on port 8080:

// Sample node.js web app for Pluralsight Docker CI course
// For demonstration purposes only
'use strict';

var express = require('express'),
    app = express();

app.set('views', 'views');
app.set('view engine', 'pug');

app.get('/', function(req, res) {
  res.render('home', {});
});

app.listen(8080);

module.exports.getApp = app;

The app is running:

k8s kubectl exec -it hello-deploy-79f969cff6-4d4cf sh
sh-4.2# 
sh-4.2# curl localhost:8080
<html><head><title>ACG loves K8S</title><link rel="stylesheet" href="http://netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css"/></head><body><div class="container"><div class="jumbotron"><h1>The Kubernetes Book!!!</h1><p></p><p> <a class="btn btn-primary" href="https://acloud.guru/learn/kubernetes-deep-dive">What about a video course as well!</a></p><p></p></div></div></body></html>sh-4.2#

But if I curl from macOS, it doesn't work:

 ➜ k8s minikube ip
127.0.0.1

➜ k8s curl 127.0.0.1:30001
curl: (7) Failed to connect to localhost port 30001: Connection refused

If I switch the driver to hyperkit it works fine, but then I have to disconnect from the VPN. As I'm working from home due to COVID-19, this is very inconvenient.
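For reference, with the docker driver on macOS the node's ports live inside the minikube container, so NodePort services are normally reached through a minikube-managed tunnel rather than `$(minikube ip):<nodePort>`. A sketch of that approach (output shape is an assumption):

```shell
# Open a tunnel to the NodePort service and print a reachable URL;
# the command keeps running while the tunnel is open
minikube service hello-svc --url

# In another terminal, curl the URL it printed, e.g.
# curl http://127.0.0.1:<ephemeral-port>
```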

Full output of failed command:

curl 127.0.0.1:30001
curl: (7) Failed to connect to localhost port 30001: Connection refused

Full output of minikube start command used, if not already included:

➜ k8s minikube start --driver=docker
😄  minikube v1.11.0 on Darwin 10.15.5
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=8100MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

❗  /usr/local/bin/kubectl is version 1.16.6-beta.0, which may be incompatible with Kubernetes 1.18.3.
💡  You can also use 'minikube kubectl -- get pods' to invoke a matching version

Optional: Full output of minikube logs command:

k8s minikube logs
==> Docker <==
-- Logs begin at Sun 2020-06-28 19:38:40 UTC, end at Sun 2020-06-28 20:10:44 UTC. --
Jun 28 19:38:40 minikube systemd[1]: Starting Docker Application Container Engine...
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.337431400Z" level=info msg="Starting up"
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.340977700Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.341081300Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.341115700Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.341154600Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.341553300Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000487030, CONNECTING" module=grpc
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.341712300Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.343984000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000487030, READY" module=grpc
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.346671500Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.346721600Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.346748200Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.346772900Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.346844500Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007223f0, CONNECTING" module=grpc
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.346851000Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.347380100Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007223f0, READY" module=grpc
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.369715400Z" level=info msg="Loading containers: start."
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.452280600Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.505904900Z" level=info msg="Loading containers: done."
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.634060800Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.634919000Z" level=info msg="Daemon has completed initialization"
Jun 28 19:38:40 minikube systemd[1]: Started Docker Application Container Engine.
Jun 28 19:38:40 minikube dockerd[115]: time="2020-06-28T19:38:40.682334400Z" level=info msg="API listen on /run/docker.sock"
Jun 28 19:38:51 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Jun 28 19:38:51 minikube systemd[1]: Stopping Docker Application Container Engine...
Jun 28 19:38:51 minikube dockerd[115]: time="2020-06-28T19:38:51.407371000Z" level=info msg="Processing signal 'terminated'"
Jun 28 19:38:51 minikube dockerd[115]: time="2020-06-28T19:38:51.408520900Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jun 28 19:38:51 minikube dockerd[115]: time="2020-06-28T19:38:51.409180600Z" level=info msg="Daemon shutdown complete"
Jun 28 19:38:51 minikube systemd[1]: docker.service: Succeeded.
Jun 28 19:38:51 minikube systemd[1]: Stopped Docker Application Container Engine.
Jun 28 19:38:51 minikube systemd[1]: Starting Docker Application Container Engine...
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.488069400Z" level=info msg="Starting up"
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.490968100Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.491019700Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.491047900Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.491088600Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.491186700Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000770ce0, CONNECTING" module=grpc
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.491232000Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.491884200Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000770ce0, READY" module=grpc
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.493071100Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.493098100Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.493118800Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.493142200Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.493197700Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00089e160, CONNECTING" module=grpc
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.493681600Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00089e160, READY" module=grpc
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.497039200Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.505999600Z" level=info msg="Loading containers: start."
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.614193100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.659988400Z" level=info msg="Loading containers: done."
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.684169200Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.684261900Z" level=info msg="Daemon has completed initialization"
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.708604700Z" level=info msg="API listen on /var/run/docker.sock"
Jun 28 19:38:51 minikube systemd[1]: Started Docker Application Container Engine.
Jun 28 19:38:51 minikube dockerd[354]: time="2020-06-28T19:38:51.708936400Z" level=info msg="API listen on [::]:2376"

==> container status <==
CONTAINER           IMAGE                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID
3c43e0e7dd3b5       nigelpoulton/k8sbook@sha256:ec91619aa6cbe340636909e2b15fa08d230f6cd4ea43b270dfdeb5cb1a899af1   26 minutes ago      Running             hello-pod                 0                   a8e9171a22c4b
914e2c17d1951       nigelpoulton/k8sbook@sha256:ec91619aa6cbe340636909e2b15fa08d230f6cd4ea43b270dfdeb5cb1a899af1   26 minutes ago      Running             hello-pod                 0                   92c76235bb6aa
ccc1e09023d93       nigelpoulton/k8sbook@sha256:ec91619aa6cbe340636909e2b15fa08d230f6cd4ea43b270dfdeb5cb1a899af1   26 minutes ago      Running             hello-pod                 0                   a10a7102f0eca
f208a9704d60f       nigelpoulton/k8sbook@sha256:ec91619aa6cbe340636909e2b15fa08d230f6cd4ea43b270dfdeb5cb1a899af1   26 minutes ago      Running             hello-pod                 0                   2895352b5447b
5d424b1bf026c       nigelpoulton/k8sbook@sha256:ec91619aa6cbe340636909e2b15fa08d230f6cd4ea43b270dfdeb5cb1a899af1   26 minutes ago      Running             hello-pod                 0                   180e089fb58ee
df9adcdbea38f       nigelpoulton/k8sbook@sha256:ec91619aa6cbe340636909e2b15fa08d230f6cd4ea43b270dfdeb5cb1a899af1   26 minutes ago      Running             hello-pod                 0                   d5eecceaf498e
26dabe2035d8a       nigelpoulton/k8sbook@sha256:ec91619aa6cbe340636909e2b15fa08d230f6cd4ea43b270dfdeb5cb1a899af1   27 minutes ago      Running             hello-pod                 0                   669994359b4cb
808dbc32c68cb       nigelpoulton/k8sbook@sha256:ec91619aa6cbe340636909e2b15fa08d230f6cd4ea43b270dfdeb5cb1a899af1   27 minutes ago      Running             hello-pod                 0                   8e8211c3029e4
ef5c1e943117e       nigelpoulton/k8sbook@sha256:ec91619aa6cbe340636909e2b15fa08d230f6cd4ea43b270dfdeb5cb1a899af1   27 minutes ago      Running             hello-pod                 0                   931a7f30054a6
076b157bc70dc       nigelpoulton/k8sbook@sha256:ec91619aa6cbe340636909e2b15fa08d230f6cd4ea43b270dfdeb5cb1a899af1   27 minutes ago      Running             hello-pod                 0                   cfb4a294af9da
f8e5c71b9b611       67da37a9a360e                                                                                  31 minutes ago      Running             coredns                   0                   13a1914e0dc47
9e2c6f4b8557d       67da37a9a360e                                                                                  31 minutes ago      Running             coredns                   0                   a7ce5d9d6b1eb
1a3522fb1ac20       4689081edb103                                                                                  31 minutes ago      Running             storage-provisioner       0                   30fe0afd3e869
811572a73c585       3439b7546f29b                                                                                  31 minutes ago      Running             kube-proxy                0                   742b0e2c53fd6
18482ad7eb8cd       7e28efa976bd1                                                                                  31 minutes ago      Running             kube-apiserver            0                   6e224927ad646
07614bd6f90d8       76216c34ed0c7                                                                                  31 minutes ago      Running             kube-scheduler            0                   bb93a3209b579
ae89868fa628d       303ce5db0e90d                                                                                  31 minutes ago      Running             etcd                      0                   29ce800c4a2db
8cd212a13654c       da26705ccb4b5                                                                                  31 minutes ago      Running             kube-controller-manager   0                   ae519f7f51c13

==> coredns [9e2c6f4b8557] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> coredns [f8e5c71b9b61] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=57e2f55f47effe9ce396cea42a1e0eb4f611ebbd
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_06_28T12_39_10_0700
                    minikube.k8s.io/version=v1.11.0
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 28 Jun 2020 19:39:07 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Sun, 28 Jun 2020 20:10:39 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sun, 28 Jun 2020 20:09:16 +0000   Sun, 28 Jun 2020 19:39:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 28 Jun 2020 20:09:16 +0000   Sun, 28 Jun 2020 19:39:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 28 Jun 2020 20:09:16 +0000   Sun, 28 Jun 2020 19:39:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sun, 28 Jun 2020 20:09:16 +0000   Sun, 28 Jun 2020 19:39:20 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.17.0.3
  Hostname:    minikube
Capacity:
  cpu:                6
  ephemeral-storage:  61255492Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             10209432Ki
  pods:               110
Allocatable:
  cpu:                6
  ephemeral-storage:  61255492Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             10209432Ki
  pods:               110
System Info:
  Machine ID:                 3a9ad1473b564e22b4af22d5056666fb
  System UUID:                48adc106-6e1e-4103-a57c-ef4b7ddab92b
  Boot ID:                    277b7a66-d68a-4b1f-9090-beb2a57fa9d8
  Kernel Version:             4.19.76-linuxkit
  OS Image:                   Ubuntu 19.10
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.2
  Kubelet Version:            v1.18.3
  Kube-Proxy Version:         v1.18.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (18 in total)
  Namespace                   Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                ------------  ----------  ---------------  -------------  ---
  default                     hello-deploy-79f969cff6-4d4cf       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
  default                     hello-deploy-79f969cff6-6dbln       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
  default                     hello-deploy-79f969cff6-8s6kd       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
  default                     hello-deploy-79f969cff6-czc97       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
  default                     hello-deploy-79f969cff6-j4zkx       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
  default                     hello-deploy-79f969cff6-jdmlc       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
  default                     hello-deploy-79f969cff6-mdmcj       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
  default                     hello-deploy-79f969cff6-p7zxn       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
  default                     hello-deploy-79f969cff6-rll7k       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
  default                     hello-deploy-79f969cff6-tj28f       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
  kube-system                 coredns-66bff467f8-qnnb4            100m (1%)     0 (0%)      70Mi (0%)        170Mi (1%)     31m
  kube-system                 coredns-66bff467f8-zshdm            100m (1%)     0 (0%)      70Mi (0%)        170Mi (1%)     31m
  kube-system                 etcd-minikube                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m
  kube-system                 kube-apiserver-minikube             250m (4%)     0 (0%)      0 (0%)           0 (0%)         31m
  kube-system                 kube-controller-manager-minikube    200m (3%)     0 (0%)      0 (0%)           0 (0%)         31m
  kube-system                 kube-proxy-b9jwg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m
  kube-system                 kube-scheduler-minikube             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31m
  kube-system                 storage-provisioner                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (12%)  0 (0%)
  memory             140Mi (1%)  340Mi (3%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age   From                  Message
  ----    ------                   ----  ----                  -------
  Normal  Starting                 31m   kubelet, minikube     Starting kubelet.
  Normal  NodeHasSufficientMemory  31m   kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    31m   kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     31m   kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeNotReady             31m   kubelet, minikube     Node minikube status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  31m   kubelet, minikube     Updated Node Allocatable limit across pods
  Normal  NodeReady                31m   kubelet, minikube     Node minikube status is now: NodeReady
  Normal  Starting                 31m   kube-proxy, minikube  Starting kube-proxy.

==> dmesg <==
[Jun24 08:32] virtio-pci 0000:00:01.0: can't derive routing for PCI INT A
[  +0.000808] virtio-pci 0000:00:01.0: PCI INT A: no GSI
[  +0.001620] virtio-pci 0000:00:02.0: can't derive routing for PCI INT A
[  +0.000812] virtio-pci 0000:00:02.0: PCI INT A: no GSI
[  +0.002631] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
[  +0.000935] virtio-pci 0000:00:07.0: PCI INT A: no GSI
[  +0.051137] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[  +0.605710] i8042: Can't read CTR while initializing i8042
[  +0.000855] i8042: probe of i8042 failed with error -5
[  +0.009383] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[  +0.002194] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[  +0.186140] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[  +0.019987] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[  +3.603906] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[  +0.074050] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[Jun24 10:57] Hangcheck: hangcheck value past margin!
[  +0.009447] clocksource: timekeeping watchdog on CPU2: Marking clocksource 'tsc' as unstable because the skew is too large:
[  +0.009208] clocksource:                       'hpet' wd_now: 79babccb wd_last: 789d2054 mask: ffffffff
[  +0.017005] clocksource:                       'tsc' cs_now: 14c2b2c836b0 cs_last: 3c90f194498 mask: ffffffffffffffff
[  +0.018045] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
[Jun24 11:01] Hangcheck: hangcheck value past margin!
[Jun24 21:17] hrtimer: interrupt took 4454600 ns

==> etcd [ae89868fa628] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-06-28 19:39:03.167291 I | etcdmain: etcd Version: 3.4.3
2020-06-28 19:39:03.167345 I | etcdmain: Git SHA: 3cf2f69b5
2020-06-28 19:39:03.167371 I | etcdmain: Go Version: go1.12.12
2020-06-28 19:39:03.167479 I | etcdmain: Go OS/Arch: linux/amd64
2020-06-28 19:39:03.167587 I | etcdmain: setting maximum number of CPUs to 6, total number of available CPUs is 6
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-06-28 19:39:03.167780 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-06-28 19:39:03.168601 I | embed: name = minikube
2020-06-28 19:39:03.168651 I | embed: data dir = /var/lib/minikube/etcd
2020-06-28 19:39:03.168667 I | embed: member dir = /var/lib/minikube/etcd/member
2020-06-28 19:39:03.168683 I | embed: heartbeat = 100ms
2020-06-28 19:39:03.168711 I | embed: election = 1000ms
2020-06-28 19:39:03.168757 I | embed: snapshot count = 10000
2020-06-28 19:39:03.168839 I | embed: advertise client URLs = https://172.17.0.3:2379
2020-06-28 19:39:03.177112 I | etcdserver: starting member b273bc7741bcb020 in cluster 86482fea2286a1d2
raft2020/06/28 19:39:03 INFO: b273bc7741bcb020 switched to configuration voters=()
raft2020/06/28 19:39:03 INFO: b273bc7741bcb020 became follower at term 0
raft2020/06/28 19:39:03 INFO: newRaft b273bc7741bcb020 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/06/28 19:39:03 INFO: b273bc7741bcb020 became follower at term 1
raft2020/06/28 19:39:03 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
2020-06-28 19:39:03.254344 W | auth: simple token is not cryptographically signed
2020-06-28 19:39:03.261575 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-06-28 19:39:03.262935 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/06/28 19:39:03 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
2020-06-28 19:39:03.263328 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2
2020-06-28 19:39:03.264313 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-06-28 19:39:03.264465 I | embed: listening for metrics on http://127.0.0.1:2381
2020-06-28 19:39:03.264779 I | embed: listening for peers on 172.17.0.3:2380
raft2020/06/28 19:39:03 INFO: b273bc7741bcb020 is starting a new election at term 1
raft2020/06/28 19:39:03 INFO: b273bc7741bcb020 became candidate at term 2
raft2020/06/28 19:39:03 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 2
raft2020/06/28 19:39:03 INFO: b273bc7741bcb020 became leader at term 2
raft2020/06/28 19:39:03 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 2
2020-06-28 19:39:03.982667 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2
2020-06-28 19:39:03.983130 I | embed: ready to serve client requests
2020-06-28 19:39:03.983523 I | etcdserver: setting up the initial cluster version to 3.4
2020-06-28 19:39:03.984268 I | embed: ready to serve client requests
2020-06-28 19:39:03.986635 N | etcdserver/membership: set the initial cluster version to 3.4
2020-06-28 19:39:03.986722 I | etcdserver/api: enabled capabilities for version 3.4
2020-06-28 19:39:03.989567 I | embed: serving client requests on 127.0.0.1:2379
2020-06-28 19:39:03.989670 I | embed: serving client requests on 172.17.0.3:2379
2020-06-28 19:43:42.410917 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:481" took too long (479.3567ms) to execute
2020-06-28 19:43:42.412167 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:603" took too long (418.2401ms) to execute
2020-06-28 19:49:03.316601 I | mvcc: store.index: compact 1199
2020-06-28 19:49:03.333093 I | mvcc: finished scheduled compaction at 1199 (took 15.7783ms)
2020-06-28 19:54:02.976367 I | mvcc: store.index: compact 1855
2020-06-28 19:54:02.989458 I | mvcc: finished scheduled compaction at 1855 (took 12.4645ms)
2020-06-28 19:59:02.637397 I | mvcc: store.index: compact 2513
2020-06-28 19:59:02.651843 I | mvcc: finished scheduled compaction at 2513 (took 13.7956ms)
2020-06-28 20:04:02.297888 I | mvcc: store.index: compact 3167
2020-06-28 20:04:02.311461 I | mvcc: finished scheduled compaction at 3167 (took 12.8426ms)
2020-06-28 20:09:01.958306 I | mvcc: store.index: compact 3823
2020-06-28 20:09:01.972438 I | mvcc: finished scheduled compaction at 3823 (took 13.4786ms)

==> kernel <==
 20:10:45 up 4 days, 11:38,  0 users,  load average: 0.38, 0.39, 0.37
Linux minikube 4.19.76-linuxkit #1 SMP Tue May 26 11:42:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kube-apiserver [18482ad7eb8c] <==
W0628 19:39:05.430881       1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0628 19:39:05.450478       1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0628 19:39:05.457037       1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0628 19:39:05.484146       1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0628 19:39:05.519816       1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0628 19:39:05.519861       1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0628 19:39:05.537541       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0628 19:39:05.537625       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0628 19:39:05.539385       1 client.go:361] parsed scheme: "endpoint"
I0628 19:39:05.539432       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I0628 19:39:05.546679       1 client.go:361] parsed scheme: "endpoint"
I0628 19:39:05.546736       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I0628 19:39:07.275043       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0628 19:39:07.275121       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0628 19:39:07.275770       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0628 19:39:07.275981       1 secure_serving.go:178] Serving securely on [::]:8443
I0628 19:39:07.276090       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0628 19:39:07.277055       1 crd_finalizer.go:266] Starting CRDFinalizer
I0628 19:39:07.278744       1 controller.go:81] Starting OpenAPI AggregationController
I0628 19:39:07.279119       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0628 19:39:07.279162       1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0628 19:39:07.279193       1 available_controller.go:387] Starting AvailableConditionController
I0628 19:39:07.279210       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0628 19:39:07.279274       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0628 19:39:07.279358       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0628 19:39:07.279773       1 autoregister_controller.go:141] Starting autoregister controller
I0628 19:39:07.279810       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0628 19:39:07.287903       1 controller.go:86] Starting OpenAPI controller
I0628 19:39:07.288028       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0628 19:39:07.288132       1 naming_controller.go:291] Starting NamingConditionController
I0628 19:39:07.288325       1 establishing_controller.go:76] Starting EstablishingController
I0628 19:39:07.288509       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0628 19:39:07.288603       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0628 19:39:07.298372       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0628 19:39:07.298530       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0628 19:39:07.298994       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0628 19:39:07.299033       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0628 19:39:07.299047       1 shared_informer.go:230] Caches are synced for crd-autoregister 
E0628 19:39:07.303327       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.3, ResourceVersion: 0, AdditionalErrorMsg: 
I0628 19:39:07.383651       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0628 19:39:07.383717       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
I0628 19:39:07.383736       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0628 19:39:07.383888       1 cache.go:39] Caches are synced for autoregister controller
I0628 19:39:08.275564       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0628 19:39:08.275707       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0628 19:39:08.282847       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0628 19:39:08.288223       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0628 19:39:08.288270       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0628 19:39:08.760931       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0628 19:39:08.806376       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0628 19:39:08.901557       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.3]
I0628 19:39:08.903141       1 controller.go:606] quota admission added evaluator for: endpoints
I0628 19:39:08.909084       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0628 19:39:10.117757       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0628 19:39:10.129335       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0628 19:39:10.355540       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0628 19:39:10.645337       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0628 19:39:27.377543       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0628 19:39:27.927538       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0628 19:55:06.265036       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted

==> kube-controller-manager [8cd212a13654] <==
I0628 19:39:27.285126       1 shared_informer.go:230] Caches are synced for token_cleaner 
I0628 19:39:27.287333       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0628 19:39:27.304263       1 shared_informer.go:230] Caches are synced for job 
I0628 19:39:27.311951       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
I0628 19:39:27.320044       1 shared_informer.go:230] Caches are synced for HPA 
I0628 19:39:27.320547       1 shared_informer.go:230] Caches are synced for disruption 
I0628 19:39:27.320725       1 disruption.go:339] Sending events to api server.
I0628 19:39:27.321615       1 shared_informer.go:230] Caches are synced for ReplicaSet 
I0628 19:39:27.322013       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
I0628 19:39:27.321542       1 shared_informer.go:230] Caches are synced for ReplicationController 
W0628 19:39:27.336893       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0628 19:39:27.340015       1 shared_informer.go:230] Caches are synced for GC 
I0628 19:39:27.368246       1 shared_informer.go:230] Caches are synced for TTL 
I0628 19:39:27.372986       1 shared_informer.go:230] Caches are synced for deployment 
I0628 19:39:27.380753       1 shared_informer.go:230] Caches are synced for node 
I0628 19:39:27.381044       1 range_allocator.go:172] Starting range CIDR allocator
I0628 19:39:27.381247       1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
I0628 19:39:27.381457       1 shared_informer.go:230] Caches are synced for cidrallocator 
I0628 19:39:27.380709       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"20268831-52fa-4bac-b78b-2fdd1de04883", APIVersion:"apps/v1", ResourceVersion:"179", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
I0628 19:39:27.386052       1 shared_informer.go:230] Caches are synced for endpoint_slice 
I0628 19:39:27.390261       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"8774ce06-7d21-4964-853d-7d027cd89b4e", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-qnnb4
I0628 19:39:27.390659       1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
I0628 19:39:27.410917       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"8774ce06-7d21-4964-853d-7d027cd89b4e", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-zshdm
I0628 19:39:27.470813       1 shared_informer.go:230] Caches are synced for endpoint 
I0628 19:39:27.521692       1 shared_informer.go:230] Caches are synced for PV protection 
I0628 19:39:27.655932       1 shared_informer.go:230] Caches are synced for PVC protection 
I0628 19:39:27.670029       1 shared_informer.go:230] Caches are synced for service account 
I0628 19:39:27.676564       1 shared_informer.go:230] Caches are synced for namespace 
I0628 19:39:27.724265       1 shared_informer.go:230] Caches are synced for stateful set 
I0628 19:39:27.730273       1 shared_informer.go:230] Caches are synced for attach detach 
I0628 19:39:27.749914       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
I0628 19:39:27.819805       1 shared_informer.go:230] Caches are synced for persistent volume 
I0628 19:39:27.869960       1 shared_informer.go:230] Caches are synced for expand 
I0628 19:39:27.870999       1 shared_informer.go:230] Caches are synced for taint 
I0628 19:39:27.871161       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
I0628 19:39:27.871597       1 taint_manager.go:187] Starting NoExecuteTaintManager
W0628 19:39:27.871941       1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0628 19:39:27.872261       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"7aa5fd15-eb88-4d50-88d4-84eee5ba31b6", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0628 19:39:27.872586       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
I0628 19:39:27.903217       1 shared_informer.go:230] Caches are synced for resource quota 
I0628 19:39:27.921346       1 shared_informer.go:230] Caches are synced for daemon sets 
I0628 19:39:27.933891       1 shared_informer.go:230] Caches are synced for garbage collector 
I0628 19:39:27.933982       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0628 19:39:27.936290       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"c09ea5b4-c433-417a-87b3-11c6dd56dcf9", APIVersion:"apps/v1", ResourceVersion:"184", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-b9jwg
I0628 19:39:27.939068       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
I0628 19:39:27.946908       1 request.go:621] Throttling request took 1.0444919s, request: GET:https://control-plane.minikube.internal:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
I0628 19:39:27.987726       1 shared_informer.go:230] Caches are synced for garbage collector 
I0628 19:39:28.548311       1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0628 19:39:28.548369       1 shared_informer.go:230] Caches are synced for resource quota 
I0628 19:43:18.194846       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-deploy", UID:"1d86ee7f-3695-41f6-9116-2aa3390e28e9", APIVersion:"apps/v1", ResourceVersion:"947", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-deploy-79f969cff6 to 10
I0628 19:43:18.204686       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-deploy-79f969cff6", UID:"ebef270b-0864-4eb9-a0cc-eb72f9228607", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-deploy-79f969cff6-rll7k
I0628 19:43:18.212835       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-deploy-79f969cff6", UID:"ebef270b-0864-4eb9-a0cc-eb72f9228607", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-deploy-79f969cff6-4d4cf
I0628 19:43:18.216466       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-deploy-79f969cff6", UID:"ebef270b-0864-4eb9-a0cc-eb72f9228607", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-deploy-79f969cff6-mdmcj
I0628 19:43:18.224139       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-deploy-79f969cff6", UID:"ebef270b-0864-4eb9-a0cc-eb72f9228607", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-deploy-79f969cff6-tj28f
I0628 19:43:18.240403       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-deploy-79f969cff6", UID:"ebef270b-0864-4eb9-a0cc-eb72f9228607", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-deploy-79f969cff6-6dbln
I0628 19:43:18.240468       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-deploy-79f969cff6", UID:"ebef270b-0864-4eb9-a0cc-eb72f9228607", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-deploy-79f969cff6-p7zxn
I0628 19:43:18.245037       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-deploy-79f969cff6", UID:"ebef270b-0864-4eb9-a0cc-eb72f9228607", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-deploy-79f969cff6-8s6kd
I0628 19:43:18.258972       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-deploy-79f969cff6", UID:"ebef270b-0864-4eb9-a0cc-eb72f9228607", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-deploy-79f969cff6-j4zkx
I0628 19:43:18.259314       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-deploy-79f969cff6", UID:"ebef270b-0864-4eb9-a0cc-eb72f9228607", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-deploy-79f969cff6-czc97
I0628 19:43:18.259937       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-deploy-79f969cff6", UID:"ebef270b-0864-4eb9-a0cc-eb72f9228607", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-deploy-79f969cff6-jdmlc

==> kube-proxy [811572a73c58] <==
W0628 19:39:28.641894       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0628 19:39:28.648380       1 node.go:136] Successfully retrieved node IP: 172.17.0.3
I0628 19:39:28.648491       1 server_others.go:186] Using iptables Proxier.
I0628 19:39:28.648871       1 server.go:583] Version: v1.18.3
I0628 19:39:28.649354       1 conntrack.go:52] Setting nf_conntrack_max to 196608
I0628 19:39:28.649691       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0628 19:39:28.649991       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0628 19:39:28.650335       1 config.go:315] Starting service config controller
I0628 19:39:28.650376       1 shared_informer.go:223] Waiting for caches to sync for service config
I0628 19:39:28.650793       1 config.go:133] Starting endpoints config controller
I0628 19:39:28.650841       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0628 19:39:28.751121       1 shared_informer.go:230] Caches are synced for service config 
I0628 19:39:28.751368       1 shared_informer.go:230] Caches are synced for endpoints config 

==> kube-scheduler [07614bd6f90d] <==
I0628 19:39:03.265850       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0628 19:39:03.265950       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0628 19:39:03.988569       1 serving.go:313] Generated self-signed cert in-memory
W0628 19:39:07.356828       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0628 19:39:07.356884       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0628 19:39:07.356911       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0628 19:39:07.356929       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0628 19:39:07.370340       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0628 19:39:07.370495       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0628 19:39:07.372164       1 authorization.go:47] Authorization is disabled
W0628 19:39:07.372207       1 authentication.go:40] Authentication is disabled
I0628 19:39:07.372236       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0628 19:39:07.373593       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0628 19:39:07.373659       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0628 19:39:07.374782       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0628 19:39:07.375247       1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0628 19:39:07.376052       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0628 19:39:07.380079       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0628 19:39:07.380086       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0628 19:39:07.380374       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0628 19:39:07.380761       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0628 19:39:07.382043       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0628 19:39:07.382056       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0628 19:39:07.381699       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0628 19:39:07.383205       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0628 19:39:08.213871       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0628 19:39:08.426825       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0628 19:39:08.451217       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0628 19:39:08.463578       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0628 19:39:08.573134       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0628 19:39:08.590273       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0628 19:39:08.682781       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0628 19:39:10.674062       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0628 19:39:11.475853       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I0628 19:39:11.488040       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Sun 2020-06-28 19:38:40 UTC, end at Sun 2020-06-28 20:10:45 UTC. --
Jun 28 19:43:18 minikube kubelet[2189]: I0628 19:43:18.372327    2189 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-vn54p" (UniqueName: "kubernetes.io/secret/8135b56e-f409-49c0-9ec0-5185bd075024-default-token-vn54p") pod "hello-deploy-79f969cff6-6dbln" (UID: "8135b56e-f409-49c0-9ec0-5185bd075024")
Jun 28 19:43:18 minikube kubelet[2189]: I0628 19:43:18.372389    2189 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-vn54p" (UniqueName: "kubernetes.io/secret/630b4668-310c-4350-9ff8-da43678b380f-default-token-vn54p") pod "hello-deploy-79f969cff6-p7zxn" (UID: "630b4668-310c-4350-9ff8-da43678b380f")
Jun 28 19:43:18 minikube kubelet[2189]: I0628 19:43:18.372608    2189 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-vn54p" (UniqueName: "kubernetes.io/secret/f429e3b2-2b21-423d-9286-adbe6ae4d830-default-token-vn54p") pod "hello-deploy-79f969cff6-8s6kd" (UID: "f429e3b2-2b21-423d-9286-adbe6ae4d830")
Jun 28 19:43:18 minikube kubelet[2189]: I0628 19:43:18.372670    2189 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-vn54p" (UniqueName: "kubernetes.io/secret/8ebbd4b1-5f5a-4b11-b941-8d06f27a2af6-default-token-vn54p") pod "hello-deploy-79f969cff6-rll7k" (UID: "8ebbd4b1-5f5a-4b11-b941-8d06f27a2af6")
Jun 28 19:43:18 minikube kubelet[2189]: I0628 19:43:18.372720    2189 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-vn54p" (UniqueName: "kubernetes.io/secret/3d3b7a0d-352c-4f5c-aff7-3d54bd9811e9-default-token-vn54p") pod "hello-deploy-79f969cff6-tj28f" (UID: "3d3b7a0d-352c-4f5c-aff7-3d54bd9811e9")
Jun 28 19:43:18 minikube kubelet[2189]: I0628 19:43:18.464896    2189 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jun 28 19:43:18 minikube kubelet[2189]: I0628 19:43:18.467178    2189 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jun 28 19:43:18 minikube kubelet[2189]: I0628 19:43:18.468889    2189 topology_manager.go:233] [topologymanager] Topology Admit Handler
Jun 28 19:43:18 minikube kubelet[2189]: I0628 19:43:18.578444    2189 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-vn54p" (UniqueName: "kubernetes.io/secret/4e0cfb16-ca50-4ebb-84d6-6a1ed5207b51-default-token-vn54p") pod "hello-deploy-79f969cff6-czc97" (UID: "4e0cfb16-ca50-4ebb-84d6-6a1ed5207b51")
Jun 28 19:43:18 minikube kubelet[2189]: I0628 19:43:18.578515    2189 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-vn54p" (UniqueName: "kubernetes.io/secret/65ce05c7-c5ae-4fa5-9b84-9aef728456b1-default-token-vn54p") pod "hello-deploy-79f969cff6-j4zkx" (UID: "65ce05c7-c5ae-4fa5-9b84-9aef728456b1")
Jun 28 19:43:18 minikube kubelet[2189]: I0628 19:43:18.578558    2189 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-vn54p" (UniqueName: "kubernetes.io/secret/5aafc612-1b8d-4eb8-a92f-422c4197048f-default-token-vn54p") pod "hello-deploy-79f969cff6-jdmlc" (UID: "5aafc612-1b8d-4eb8-a92f-422c4197048f")
Jun 28 19:43:20 minikube kubelet[2189]: W0628 19:43:20.058011    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-p7zxn through plugin: invalid network status for
Jun 28 19:43:20 minikube kubelet[2189]: W0628 19:43:20.459995    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-tj28f through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.136702    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-6dbln through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.151610    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-8s6kd through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.241500    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-czc97 through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.363037    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-j4zkx through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.363037    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-jdmlc through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.501430    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-mdmcj through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.515957    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-4d4cf through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.534049    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-rll7k through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.534691    2189 pod_container_deletor.go:77] Container "a8e9171a22c4bf7786ffdab5c1d22ea2246e4f4ed3316d7ba57f1531feb1b0b2" not found in pod's containers
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.537963    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-mdmcj through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.539696    2189 pod_container_deletor.go:77] Container "a10a7102f0ecae8c0477e7a10a1b06cc0861cfd795c4c1fc47ffeadc5be9dca7" not found in pod's containers
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.542645    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-p7zxn through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.544901    2189 pod_container_deletor.go:77] Container "cfb4a294af9da01e8cdb5ecb158fe41ec6c57dacf53996ce2001e3605743c48b" not found in pod's containers
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.553173    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-6dbln through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.555223    2189 pod_container_deletor.go:77] Container "8e8211c3029e46aaae56fed17ccb9f8529eca4b777adeb711b1d182ce3997309" not found in pod's containers
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.557453    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-jdmlc through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.558850    2189 pod_container_deletor.go:77] Container "2895352b5447bcc9b4f15253d5ce45003af71579a080982f53babf2e4f750b78" not found in pod's containers
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.561247    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-j4zkx through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.563016    2189 pod_container_deletor.go:77] Container "180e089fb58ee6a50f138e291e251bdcea89cb6e68fe4db683e176f900e46cca" not found in pod's containers
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.565418    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-czc97 through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.567213    2189 pod_container_deletor.go:77] Container "d5eecceaf498e824c99735dcb60523eb478e9a62cbf65c7d92bb6fc40a9d70fb" not found in pod's containers
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.569450    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-tj28f through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.571502    2189 pod_container_deletor.go:77] Container "931a7f30054a6d5e0691dd539dc571d0e98b22ae0cc2d181cc8a128e1c1ccd83" not found in pod's containers
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.573813    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-8s6kd through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.575380    2189 pod_container_deletor.go:77] Container "669994359b4cb74c05261ae5805c512475f62edd4faecb2b771c9011e7e8effd" not found in pod's containers
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.578093    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-4d4cf through plugin: invalid network status for
Jun 28 19:43:21 minikube kubelet[2189]: W0628 19:43:21.579581    2189 pod_container_deletor.go:77] Container "92c76235bb6aabf1eb908526bbc3789c0b80f4938c6de36deff2f80d2f57b49a" not found in pod's containers
Jun 28 19:43:22 minikube kubelet[2189]: W0628 19:43:22.596451    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-6dbln through plugin: invalid network status for
Jun 28 19:43:22 minikube kubelet[2189]: W0628 19:43:22.602726    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-rll7k through plugin: invalid network status for
Jun 28 19:43:22 minikube kubelet[2189]: W0628 19:43:22.607315    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-4d4cf through plugin: invalid network status for
Jun 28 19:43:22 minikube kubelet[2189]: W0628 19:43:22.612018    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-j4zkx through plugin: invalid network status for
Jun 28 19:43:22 minikube kubelet[2189]: W0628 19:43:22.617099    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-tj28f through plugin: invalid network status for
Jun 28 19:43:22 minikube kubelet[2189]: W0628 19:43:22.621581    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-mdmcj through plugin: invalid network status for
Jun 28 19:43:22 minikube kubelet[2189]: W0628 19:43:22.627090    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-8s6kd through plugin: invalid network status for
Jun 28 19:43:22 minikube kubelet[2189]: W0628 19:43:22.631888    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-jdmlc through plugin: invalid network status for
Jun 28 19:43:22 minikube kubelet[2189]: W0628 19:43:22.637124    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-p7zxn through plugin: invalid network status for
Jun 28 19:43:22 minikube kubelet[2189]: W0628 19:43:22.642548    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-czc97 through plugin: invalid network status for
Jun 28 19:43:42 minikube kubelet[2189]: W0628 19:43:42.830527    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-p7zxn through plugin: invalid network status for
Jun 28 19:43:42 minikube kubelet[2189]: W0628 19:43:42.838568    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-tj28f through plugin: invalid network status for
Jun 28 19:43:43 minikube kubelet[2189]: W0628 19:43:43.859982    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-6dbln through plugin: invalid network status for
Jun 28 19:43:44 minikube kubelet[2189]: W0628 19:43:44.845911    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-8s6kd through plugin: invalid network status for
Jun 28 19:43:45 minikube kubelet[2189]: W0628 19:43:45.871809    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-czc97 through plugin: invalid network status for
Jun 28 19:43:46 minikube kubelet[2189]: W0628 19:43:46.892943    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-j4zkx through plugin: invalid network status for
Jun 28 19:43:47 minikube kubelet[2189]: W0628 19:43:47.914338    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-jdmlc through plugin: invalid network status for
Jun 28 19:43:48 minikube kubelet[2189]: W0628 19:43:48.935719    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-mdmcj through plugin: invalid network status for
Jun 28 19:43:49 minikube kubelet[2189]: W0628 19:43:49.957677    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-4d4cf through plugin: invalid network status for
Jun 28 19:43:50 minikube kubelet[2189]: W0628 19:43:50.982694    2189 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-deploy-79f969cff6-rll7k through plugin: invalid network status for

==> storage-provisioner [1a3522fb1ac2] <==
afbjorklund commented 4 years ago

This is normal with the docker driver: you will have to use "minikube tunnel" or "kubectl port-forward" to access the application.

With other drivers you get a real minikube IP instead of the bogus 127.0.0.1 (localhost), which only works for some ports.
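
A minimal sketch of the two workarounds mentioned above, assuming the `hello-svc` service defined earlier in this issue (port names and numbers taken from that manifest):

```shell
# Option 1: forward a local port to the service.
# This runs in the foreground; leave it running in one terminal.
kubectl port-forward service/hello-svc 8080:8080

# Then, from another terminal on the host:
curl http://localhost:8080

# Option 2: let minikube set up the route and print a reachable URL
# (with the docker driver this starts a tunnel for the NodePort):
minikube service hello-svc --url
```

Either approach avoids needing the node IP to be routable from the macOS host, which is the underlying limitation of the docker driver here.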

l0n3star commented 4 years ago

I ran minikube tunnel, but I'm still not sure how to get the NodePort to work. This is my service:

k8s kubectl get svc            
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
hello-svc    NodePort    10.110.6.68   <none>        8080:30001/TCP   70m
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP          4h31m

I tried curl 10.110.6.68:30001 but got no response. A ping to that IP also times out:

k8s ping 10.110.6.68           
PING 10.110.6.68 (10.110.6.68): 56 data bytes
Request timeout for icmp_seq 0

Or maybe this only works for a LoadBalancer service?
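
For context on why that curl fails: the ClusterIP (10.110.6.68) answers only on the *service* port (8080), and only from inside the cluster; the NodePort (30001) is bound on the node's IP, not on the ClusterIP. A rough sketch of how to verify this from inside the minikube node (assumes the service and ClusterIP shown above):

```shell
# From inside the cluster, the service answers on ClusterIP:servicePort,
# not on ClusterIP:nodePort:
minikube ssh -- curl -s http://10.110.6.68:8080

# The NodePort is bound on the node's own IP; with the docker driver that
# address is only reachable from inside the minikube container:
minikube ssh -- curl -s "http://$(minikube ip):30001"
```

From the macOS host itself, neither address is routable with the docker driver, which is why `kubectl port-forward` or `minikube service` is needed.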

l0n3star commented 4 years ago

Switching to docker desktop with k8s.