kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Add documentation for pulling images from private registries #6354

Closed · dkorittki closed this issue 4 years ago

dkorittki commented 4 years ago

minikube can't pull an image from a private registry, even though the Docker daemon inside the VM has a working config with valid auth information; the pull fails with "access denied". Running minikube ssh 'docker pull my-registry.foo.bar/my/private/image:latest' by hand works as expected.
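
For context, the per-pod mechanism the upstream Kubernetes docs describe for private registries is an imagePullSecrets reference rather than a docker login on the node. A minimal sketch, reusing the redacted registry and credentials from this report (the secret name regcred is arbitrary):

kubectl create secret docker-registry regcred \
  --docker-server=my-registry.foo.bar \
  --docker-username=my-user \
  --docker-password=my-pass

# then reference the secret in the pod spec:
spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - name: uses-private-image
      image: my-registry.foo.bar/my/private/image:latest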

The exact command to reproduce the issue:

minikube start
minikube ssh 'docker login my-registry.foo.bar -u my-user -p my-pass'

cat << EOF > /tmp/testpod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: my-registry.foo.bar/my/private/image:latest
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
EOF

kubectl apply -f /tmp/testpod.yaml
kubectl describe pod private-image-test-1 | grep Failed

The full output of the command that failed:

 Warning  Failed     9s    kubelet, minikube  Failed to pull image "my-registry.foo.bar/my/private/image:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden
 Warning  Failed     9s    kubelet, minikube  Error: ErrImagePull
 Warning  Failed     8s    kubelet, minikube  Error: ImagePullBackOff
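
The mismatch above is consistent with where the credentials land: minikube ssh logs in as the unprivileged docker user, so docker login writes to that user's ~/.docker/config.json, while the kubelet runs as root and searches its own locations (per the Kubernetes docs, e.g. {--root-dir:-/var/lib/kubelet}/config.json or the kubelet process's ${HOME}/.docker/config.json). A node-level workaround sketch, assuming the default kubelet root dir inside the minikube VM:

# copy the docker user's registry credentials to a path the kubelet searches
minikube ssh 'sudo cp /home/docker/.docker/config.json /var/lib/kubelet/config.json'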

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Mon 2020-01-20 15:11:21 UTC, end at Mon 2020-01-20 15:23:37 UTC. --
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.814529881Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.815344990Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.815379345Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.815392572Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.815403769Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.829672312Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.829768798Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.829807998Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.829842938Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.829876959Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.829910240Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.830099304Z" level=info msg="Loading containers: start."
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.897188584Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.941207328Z" level=info msg="Loading containers: done."
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.956170476Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.956412153Z" level=info msg="Daemon has completed initialization"
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.971006970Z" level=info msg="API listen on /var/run/docker.sock"
Jan 20 15:11:26 minikube systemd[1]: Started Docker Application Container Engine.
Jan 20 15:11:26 minikube dockerd[1975]: time="2020-01-20T15:11:26.971285508Z" level=info msg="API listen on [::]:2376"
Jan 20 15:12:06 minikube dockerd[1975]: time="2020-01-20T15:12:06.576201352Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a664b46e773c6a31634006b9530bd5e6d39855a134959ad34a62e917d898b2f3/shim.sock" debug=false pid=3551
Jan 20 15:12:06 minikube dockerd[1975]: time="2020-01-20T15:12:06.586068936Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/03f1e0d2204588719c158d2604666ee36bfeacc8b39127b8ddc22f4884736bb7/shim.sock" debug=false pid=3558
Jan 20 15:12:06 minikube dockerd[1975]: time="2020-01-20T15:12:06.587451141Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/202f80155496494d854ebab1dfbeac8cde7659cdcff305ed7d7b4169ee15fc8c/shim.sock" debug=false pid=3563
Jan 20 15:12:06 minikube dockerd[1975]: time="2020-01-20T15:12:06.666570793Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/142f79615f9f3cb1c929249d75bd4b18b4b71c59bf36b67b21d78da382e14124/shim.sock" debug=false pid=3632
Jan 20 15:12:06 minikube dockerd[1975]: time="2020-01-20T15:12:06.669606262Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/25824f22fde8106a3b04f38329d6e575f0d3cbbece7e153c32b727a6d5b3bd86/shim.sock" debug=false pid=3633
Jan 20 15:12:06 minikube dockerd[1975]: time="2020-01-20T15:12:06.930503948Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a4bada8e796eff0bf84c8f9c491fd4a07f0f3a3be589ac52e527ddad4a0f29be/shim.sock" debug=false pid=3796
Jan 20 15:12:06 minikube dockerd[1975]: time="2020-01-20T15:12:06.935903139Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/451333d678ac2eb3ac944421e3e9ea5ec04cdc51b3c9c8481a9e94bb091e6feb/shim.sock" debug=false pid=3801
Jan 20 15:12:07 minikube dockerd[1975]: time="2020-01-20T15:12:07.025304215Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b3057224a92746fe6ba7e8bc5c22b669334cc060960d779f7089da18cfe78c26/shim.sock" debug=false pid=3849
Jan 20 15:12:07 minikube dockerd[1975]: time="2020-01-20T15:12:07.034690275Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2db22c8ff41f9636e8e3cf703d96bf4580b625ff4c079f3b5a17b417df270250/shim.sock" debug=false pid=3863
Jan 20 15:12:07 minikube dockerd[1975]: time="2020-01-20T15:12:07.060221269Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b055f13137f039b66b868e236ae208544e048c819ccb411fc5604667aea2f8e0/shim.sock" debug=false pid=3891
Jan 20 15:12:23 minikube dockerd[1975]: time="2020-01-20T15:12:23.282572724Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1e34df9d8169548d16c6975ca62f43e0c21fc3b79dd5401d1d45eb1b8ce93015/shim.sock" debug=false pid=4797
Jan 20 15:12:23 minikube dockerd[1975]: time="2020-01-20T15:12:23.466857490Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ddcf35e89f56896cf0f2131c733aee3b8a2c649ebe3e01a01c9f695328a2931d/shim.sock" debug=false pid=4846
Jan 20 15:12:26 minikube dockerd[1975]: time="2020-01-20T15:12:26.912745897Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ff9c00fbf0427549e9840ff6a8be4969ba2ff46db18086ff3b688d1fce51f6af/shim.sock" debug=false pid=4987
Jan 20 15:12:27 minikube dockerd[1975]: time="2020-01-20T15:12:27.145820996Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/adf289b71ec9875b5c1c9d4b42e5a98b69c758e8917b78e9f34bb171cc2ea19a/shim.sock" debug=false pid=5050
Jan 20 15:12:27 minikube dockerd[1975]: time="2020-01-20T15:12:27.957878324Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2dcfd1424eebe97d50dc0b9673aa7b29c4b6e23936e7ac5ade6067d17f2a14c5/shim.sock" debug=false pid=5148
Jan 20 15:12:27 minikube dockerd[1975]: time="2020-01-20T15:12:27.975116981Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/39e9dc90946d4bfc742161fd79eb1a812381481bde4a89cf84b450961c111ad7/shim.sock" debug=false pid=5157
Jan 20 15:12:28 minikube dockerd[1975]: time="2020-01-20T15:12:28.137194082Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c08b445dd4cd7da13413ed1093a9053e5d8dec605ad231082adcaa3c3bb9d371/shim.sock" debug=false pid=5220
Jan 20 15:12:28 minikube dockerd[1975]: time="2020-01-20T15:12:28.196729879Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ee721e22a85d9706d9b5cc7ae08edce2a5cff2b1c045542577d4fbdff7a8326a/shim.sock" debug=false pid=5257
Jan 20 15:12:28 minikube dockerd[1975]: time="2020-01-20T15:12:28.415183133Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6423e2ae2b217abce634a03e99542ee0b2b0be43f7f6cd2e7a7dedf506c87ec5/shim.sock" debug=false pid=5352
Jan 20 15:12:28 minikube dockerd[1975]: time="2020-01-20T15:12:28.507279752Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dd775fa4cdb7cd3a3ebd2774342c444d514c8948051f9d0823cdda6fbd536e63/shim.sock" debug=false pid=5386
Jan 20 15:12:28 minikube dockerd[1975]: time="2020-01-20T15:12:28.783358692Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4811991f83aeca0ad3a3d8d2c7e593288749b2ede1efdcf23c0fbb332ebceff2/shim.sock" debug=false pid=5484
Jan 20 15:12:29 minikube dockerd[1975]: time="2020-01-20T15:12:29.010337496Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/98948c288d15a4a1a45d2a57409d7e577e1b948254478038ad6617c96aa9c9e5/shim.sock" debug=false pid=5559
Jan 20 15:21:08 minikube dockerd[1975]: time="2020-01-20T15:21:08.625881642Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0fcbcb66a6786f4e9edf0612356e985c44979046239a020fbe0e372b34723a50/shim.sock" debug=false pid=10916
Jan 20 15:22:01 minikube dockerd[1975]: time="2020-01-20T15:22:01.125210855Z" level=info msg="shim reaped" id=0fcbcb66a6786f4e9edf0612356e985c44979046239a020fbe0e372b34723a50
Jan 20 15:22:01 minikube dockerd[1975]: time="2020-01-20T15:22:01.135474331Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 20 15:22:08 minikube dockerd[1975]: time="2020-01-20T15:22:08.174143493Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0829df2af62249fa9ccae4a0695ec611cd7017e19c0d916e91869db80240d55a/shim.sock" debug=false pid=11637
Jan 20 15:22:08 minikube dockerd[1975]: time="2020-01-20T15:22:08.460732384Z" level=info msg="Attempting next endpoint for pull after error: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden"
Jan 20 15:22:08 minikube dockerd[1975]: time="2020-01-20T15:22:08.461486369Z" level=error msg="Handler for POST /images/create returned error: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden"
Jan 20 15:22:21 minikube dockerd[1975]: time="2020-01-20T15:22:21.123445321Z" level=info msg="Attempting next endpoint for pull after error: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden"
Jan 20 15:22:21 minikube dockerd[1975]: time="2020-01-20T15:22:21.123601897Z" level=error msg="Handler for POST /images/create returned error: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden"
Jan 20 15:22:49 minikube dockerd[1975]: time="2020-01-20T15:22:49.138629242Z" level=info msg="Attempting next endpoint for pull after error: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden"
Jan 20 15:22:49 minikube dockerd[1975]: time="2020-01-20T15:22:49.139344136Z" level=error msg="Handler for POST /images/create returned error: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden"

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
98948c288d15a       3b08661dc379d       11 minutes ago      Running             dashboard-metrics-scraper   0                   c08b445dd4cd7
4811991f83aec       eb51a35975256       11 minutes ago      Running             kubernetes-dashboard        0                   ee721e22a85d9
dd775fa4cdb7c       70f311871ae12       11 minutes ago      Running             coredns                     0                   2dcfd1424eebe
6423e2ae2b217       4689081edb103       11 minutes ago      Running             storage-provisioner         0                   39e9dc90946d4
adf289b71ec98       70f311871ae12       11 minutes ago      Running             coredns                     0                   ff9c00fbf0427
ddcf35e89f568       7d54289267dc5       11 minutes ago      Running             kube-proxy                  0                   1e34df9d81695
b055f13137f03       78c190f736b11       11 minutes ago      Running             kube-scheduler              0                   142f79615f9f3
2db22c8ff41f9       303ce5db0e90d       11 minutes ago      Running             etcd                        0                   25824f22fde81
b3057224a9274       bd12a212f9dcb       11 minutes ago      Running             kube-addon-manager          0                   03f1e0d220458
451333d678ac2       0cae8d5cc64c7       11 minutes ago      Running             kube-apiserver              0                   202f801554964
a4bada8e796ef       5eb3b74868724       11 minutes ago      Running             kube-controller-manager     0                   a664b46e773c6

==> coredns ["adf289b71ec9"] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> coredns ["dd775fa4cdb7"] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> dmesg <==
[Jan20 15:11] ERROR: earlyprintk= earlyser already used
[  +0.000000] You have booted with nomodeset. This means your GPU drivers are DISABLED
[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[  +0.005182] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xC0, should be 0x1D (20180810/tbprint-177)
[ +17.491158] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[  +0.006986] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +2.063770] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[  +0.007918] systemd-fstab-generator[1096]: Ignoring "noauto" for root device
[  +0.001273] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[  +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  +0.884643] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +0.536653] vboxguest: loading out-of-tree module taints kernel.
[  +0.003131] vboxguest: PCI device not found, probably running on physical hardware.
[  +1.411168] systemd-fstab-generator[1883]: Ignoring "noauto" for root device
[ +23.938914] systemd-fstab-generator[2717]: Ignoring "noauto" for root device
[  +6.294363] systemd-fstab-generator[3032]: Ignoring "noauto" for root device
[Jan20 15:12] kauditd_printk_skb: 68 callbacks suppressed
[  +8.756901] systemd-fstab-generator[4434]: Ignoring "noauto" for root device
[  +9.194225] kauditd_printk_skb: 29 callbacks suppressed
[  +6.538641] kauditd_printk_skb: 50 callbacks suppressed
[Jan20 15:13] NFSD: Unable to end grace period: -110
[Jan20 15:21] kauditd_printk_skb: 14 callbacks suppressed

==> kernel <==
 15:23:37 up 12 min,  0 users,  load average: 1.39, 0.92, 0.57
Linux minikube 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.7"

==> kube-addon-manager ["b3057224a927"] <==
error: no objects passed to apply
error: no objects passed to apply
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-20T15:23:26+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-20T15:23:27+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-20T15:23:32+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-20T15:23:32+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-20T15:23:36+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-20T15:23:37+00:00 ==
INFO: == Reconciling with deprecated label ==

==> kube-apiserver ["451333d678ac"] <==
W0120 15:12:09.527349       1 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0120 15:12:09.533841       1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0120 15:12:09.545993       1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0120 15:12:09.548187       1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0120 15:12:09.557695       1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0120 15:12:09.575009       1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0120 15:12:09.575047       1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0120 15:12:09.583137       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0120 15:12:09.583150       1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0120 15:12:09.584607       1 client.go:361] parsed scheme: "endpoint"
I0120 15:12:09.584647       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0120 15:12:09.591796       1 client.go:361] parsed scheme: "endpoint"
I0120 15:12:09.591834       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0120 15:12:11.401551       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0120 15:12:11.401656       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0120 15:12:11.401990       1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0120 15:12:11.402159       1 secure_serving.go:178] Serving securely on [::]:8443
I0120 15:12:11.402482       1 available_controller.go:386] Starting AvailableConditionController
I0120 15:12:11.402544       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0120 15:12:11.402818       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0120 15:12:11.403160       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0120 15:12:11.403222       1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I0120 15:12:11.403363       1 autoregister_controller.go:140] Starting autoregister controller
I0120 15:12:11.403421       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0120 15:12:11.404351       1 crd_finalizer.go:263] Starting CRDFinalizer
I0120 15:12:11.404472       1 controller.go:81] Starting OpenAPI AggregationController
I0120 15:12:11.404578       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0120 15:12:11.404693       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0120 15:12:11.405100       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0120 15:12:11.405143       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I0120 15:12:11.408336       1 controller.go:85] Starting OpenAPI controller
I0120 15:12:11.415117       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0120 15:12:11.415190       1 naming_controller.go:288] Starting NamingConditionController
I0120 15:12:11.415219       1 establishing_controller.go:73] Starting EstablishingController
I0120 15:12:11.415227       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0120 15:12:11.415285       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0120 15:12:11.408694       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0120 15:12:11.415684       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
E0120 15:12:11.480498       1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.64.8, ResourceVersion: 0, AdditionalErrorMsg:
I0120 15:12:11.506681       1 cache.go:39] Caches are synced for autoregister controller
I0120 15:12:11.516490       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0120 15:12:11.603113       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0120 15:12:11.603535       1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller
I0120 15:12:11.608221       1 shared_informer.go:204] Caches are synced for crd-autoregister
I0120 15:12:12.401454       1 controller.go:107] OpenAPI AggregationController: Processing item
I0120 15:12:12.401489       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0120 15:12:12.401583       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0120 15:12:12.409704       1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I0120 15:12:12.415723       1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I0120 15:12:12.415757       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I0120 15:12:12.865075       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0120 15:12:12.920919       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0120 15:12:12.996258       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.64.8]
I0120 15:12:12.996949       1 controller.go:606] quota admission added evaluator for: endpoints
I0120 15:12:13.621343       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0120 15:12:14.665753       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0120 15:12:14.679945       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0120 15:12:14.854682       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0120 15:12:22.002797       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0120 15:12:22.420385       1 controller.go:606] quota admission added evaluator for: replicasets.apps

==> kube-controller-manager ["a4bada8e796e"] <==
I0120 15:12:21.744445       1 controllermanager.go:533] Started "garbagecollector"
I0120 15:12:21.744958       1 graph_builder.go:282] GraphBuilder running
I0120 15:12:21.768652       1 controllermanager.go:533] Started "disruption"
I0120 15:12:21.768923       1 disruption.go:330] Starting disruption controller
I0120 15:12:21.768951       1 shared_informer.go:197] Waiting for caches to sync for disruption
I0120 15:12:21.780818       1 node_lifecycle_controller.go:388] Sending events to api server.
I0120 15:12:21.781157       1 node_lifecycle_controller.go:423] Controller is using taint based evictions.
I0120 15:12:21.781242       1 taint_manager.go:162] Sending events to api server.
I0120 15:12:21.781338       1 node_lifecycle_controller.go:520] Controller will reconcile labels.
I0120 15:12:21.781512       1 controllermanager.go:533] Started "nodelifecycle"
I0120 15:12:21.782133       1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0120 15:12:21.782754       1 node_lifecycle_controller.go:554] Starting node controller
I0120 15:12:21.784022       1 shared_informer.go:197] Waiting for caches to sync for taint
W0120 15:12:21.814294       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0120 15:12:21.832775       1 shared_informer.go:204] Caches are synced for certificate-csrapproving
I0120 15:12:21.834536       1 shared_informer.go:204] Caches are synced for PV protection
I0120 15:12:21.834936       1 shared_informer.go:204] Caches are synced for bootstrap_signer
I0120 15:12:21.882982       1 shared_informer.go:204] Caches are synced for certificate-csrsigning
I0120 15:12:21.885499       1 shared_informer.go:204] Caches are synced for TTL
I0120 15:12:21.888709       1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I0120 15:12:21.891596       1 shared_informer.go:204] Caches are synced for service account
I0120 15:12:21.901723       1 shared_informer.go:204] Caches are synced for namespace
E0120 15:12:21.922902       1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0120 15:12:21.991929       1 shared_informer.go:204] Caches are synced for GC
I0120 15:12:21.998667       1 shared_informer.go:204] Caches are synced for daemon sets
I0120 15:12:22.013070       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"56f0ad1d-4152-46a4-91b9-6fa81acbb891", APIVersion:"apps/v1", ResourceVersion:"186", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-bfzf5
E0120 15:12:22.031413       1 daemon_controller.go:290] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"56f0ad1d-4152-46a4-91b9-6fa81acbb891", ResourceVersion:"186", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715129934, loc:(*time.Location)(0x6b951c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0015a7520), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0016291c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0015a7540), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0015a7560), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.17.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0015a75a0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001460460), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00162d688), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0016cb5c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000e1a8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00162d6c8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0120 15:12:22.032464       1 shared_informer.go:204] Caches are synced for job
I0120 15:12:22.032507       1 shared_informer.go:204] Caches are synced for HPA
I0120 15:12:22.071405       1 shared_informer.go:204] Caches are synced for ReplicationController
I0120 15:12:22.087162       1 shared_informer.go:204] Caches are synced for taint
I0120 15:12:22.087274       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0120 15:12:22.088719       1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0120 15:12:22.088802       1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0120 15:12:22.088889       1 taint_manager.go:186] Starting NoExecuteTaintManager
I0120 15:12:22.091581       1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"16b09aea-e01f-4d3b-be7f-d3822cde659d", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0120 15:12:22.134710       1 shared_informer.go:204] Caches are synced for endpoint
I0120 15:12:22.284742       1 shared_informer.go:204] Caches are synced for expand
I0120 15:12:22.311066       1 shared_informer.go:204] Caches are synced for stateful set
I0120 15:12:22.333001       1 shared_informer.go:204] Caches are synced for PVC protection
I0120 15:12:22.334725       1 shared_informer.go:204] Caches are synced for persistent volume
I0120 15:12:22.338363       1 shared_informer.go:204] Caches are synced for attach detach
I0120 15:12:22.344736       1 shared_informer.go:204] Caches are synced for garbage collector
I0120 15:12:22.344884       1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0120 15:12:22.370369       1 shared_informer.go:204] Caches are synced for disruption
I0120 15:12:22.370404       1 disruption.go:338] Sending events to api server.
I0120 15:12:22.381294       1 shared_informer.go:204] Caches are synced for resource quota
I0120 15:12:22.382332       1 shared_informer.go:204] Caches are synced for resource quota
I0120 15:12:22.418249       1 shared_informer.go:204] Caches are synced for deployment
I0120 15:12:22.428097       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"c275d175-7a3e-46cc-97cf-7c6d08370790", APIVersion:"apps/v1", ResourceVersion:"181", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 2
I0120 15:12:22.431097       1 shared_informer.go:204] Caches are synced for ReplicaSet
I0120 15:12:22.442612       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"6c275dce-4e6d-4738-a01b-d452c7161198", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-qgtdx
I0120 15:12:22.457981       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"6c275dce-4e6d-4738-a01b-d452c7161198", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-t6fgm
I0120 15:12:23.237044       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0120 15:12:23.237399       1 shared_informer.go:204] Caches are synced for garbage collector
I0120 15:12:27.089085       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0120 15:12:27.720506       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"9329b75a-a461-4f5a-a514-0d5e82857ee2", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-7b64584c5c to 1
I0120 15:12:27.722460       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-7b64584c5c", UID:"7a574ec0-c03f-4d51-8c3f-028af2f28941", APIVersion:"apps/v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-7b64584c5c-q8rvq
I0120 15:12:27.750319       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"82bcd3f3-8601-4927-b8e0-0e83c0a739c1", APIVersion:"apps/v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-79d9cd965 to 1
I0120 15:12:27.757679       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-79d9cd965", UID:"2d74266d-ecbf-4d87-862d-22ff243adbf2", APIVersion:"apps/v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-79d9cd965-7gtc7

==> kube-proxy ["ddcf35e89f56"] <==
W0120 15:12:23.732108       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I0120 15:12:23.746330       1 node.go:135] Successfully retrieved node IP: 192.168.64.8
I0120 15:12:23.746509       1 server_others.go:145] Using iptables Proxier.
W0120 15:12:23.747182       1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0120 15:12:23.747460       1 server.go:571] Version: v1.17.0
I0120 15:12:23.749092       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0120 15:12:23.749265       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0120 15:12:23.749631       1 conntrack.go:83] Setting conntrack hashsize to 32768
I0120 15:12:23.754284       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0120 15:12:23.754476       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0120 15:12:23.755647       1 config.go:313] Starting service config controller
I0120 15:12:23.755938       1 shared_informer.go:197] Waiting for caches to sync for service config
I0120 15:12:23.760505       1 config.go:131] Starting endpoints config controller
I0120 15:12:23.761258       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0120 15:12:23.856511       1 shared_informer.go:204] Caches are synced for service config
I0120 15:12:23.862448       1 shared_informer.go:204] Caches are synced for endpoints config

==> kube-scheduler ["b055f13137f0"] <==
I0120 15:12:07.984275       1 serving.go:312] Generated self-signed cert in-memory
W0120 15:12:08.579578       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0120 15:12:08.579764       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0120 15:12:11.471454       1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0120 15:12:11.471490       1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0120 15:12:11.471498       1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
W0120 15:12:11.471502       1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
W0120 15:12:11.510333       1 authorization.go:47] Authorization is disabled
W0120 15:12:11.510421       1 authentication.go:92] Authentication is disabled
I0120 15:12:11.510482       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0120 15:12:11.512361       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0120 15:12:11.512648       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0120 15:12:11.512693       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0120 15:12:11.514407       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0120 15:12:11.520645       1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0120 15:12:11.521069       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0120 15:12:11.521288       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0120 15:12:11.522015       1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0120 15:12:11.524946       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0120 15:12:11.525249       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0120 15:12:11.525457       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0120 15:12:11.525631       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0120 15:12:11.525846       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0120 15:12:11.526055       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0120 15:12:11.526226       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0120 15:12:11.526837       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0120 15:12:12.522016       1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0120 15:12:12.527922       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0120 15:12:12.528720       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0120 15:12:12.533312       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0120 15:12:12.533598       1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0120 15:12:12.534998       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0120 15:12:12.535410       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0120 15:12:12.536968       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0120 15:12:12.537198       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0120 15:12:12.539524       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0120 15:12:12.540468       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0120 15:12:12.542130       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0120 15:12:13.614072       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I0120 15:12:13.614651       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0120 15:12:13.623814       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E0120 15:12:22.455368       1 factory.go:494] pod is already present in the activeQ
E0120 15:12:22.490838       1 factory.go:494] pod is already present in the activeQ
E0120 15:12:23.183434       1 factory.go:494] pod is already present in the activeQ

==> kubelet <==
-- Logs begin at Mon 2020-01-20 15:11:21 UTC, end at Mon 2020-01-20 15:23:38 UTC. --
Jan 20 15:12:27 minikube kubelet[4443]: I0120 15:12:27.686898    4443 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-jbvdz" (UniqueName: "kubernetes.io/secret/340d8750-f7e8-4502-b8b1-a649ecd368d6-storage-provisioner-token-jbvdz") pod "storage-provisioner" (UID: "340d8750-f7e8-4502-b8b1-a649ecd368d6")
Jan 20 15:12:27 minikube kubelet[4443]: I0120 15:12:27.686918    4443 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-rc6kl" (UniqueName: "kubernetes.io/secret/bf852af9-9067-4c8b-82fc-b8b5ef8a15e7-coredns-token-rc6kl") pod "coredns-6955765f44-t6fgm" (UID: "bf852af9-9067-4c8b-82fc-b8b5ef8a15e7")
Jan 20 15:12:27 minikube kubelet[4443]: I0120 15:12:27.887796    4443 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/3aaffa61-8ce1-4bb5-a13b-c366ac2b2cf4-tmp-volume") pod "dashboard-metrics-scraper-7b64584c5c-q8rvq" (UID: "3aaffa61-8ce1-4bb5-a13b-c366ac2b2cf4")
Jan 20 15:12:27 minikube kubelet[4443]: I0120 15:12:27.887850    4443 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-dssxq" (UniqueName: "kubernetes.io/secret/56ace6d7-4c78-494a-8a68-32ec225f82a0-kubernetes-dashboard-token-dssxq") pod "kubernetes-dashboard-79d9cd965-7gtc7" (UID: "56ace6d7-4c78-494a-8a68-32ec225f82a0")
Jan 20 15:12:27 minikube kubelet[4443]: I0120 15:12:27.887872    4443 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/56ace6d7-4c78-494a-8a68-32ec225f82a0-tmp-volume") pod "kubernetes-dashboard-79d9cd965-7gtc7" (UID: "56ace6d7-4c78-494a-8a68-32ec225f82a0")
Jan 20 15:12:27 minikube kubelet[4443]: I0120 15:12:27.887888    4443 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-dssxq" (UniqueName: "kubernetes.io/secret/3aaffa61-8ce1-4bb5-a13b-c366ac2b2cf4-kubernetes-dashboard-token-dssxq") pod "dashboard-metrics-scraper-7b64584c5c-q8rvq" (UID: "3aaffa61-8ce1-4bb5-a13b-c366ac2b2cf4")
Jan 20 15:12:28 minikube kubelet[4443]: W0120 15:12:28.397578    4443 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-t6fgm through plugin: invalid network status for
Jan 20 15:12:28 minikube kubelet[4443]: W0120 15:12:28.731201    4443 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-7gtc7 through plugin: invalid network status for
Jan 20 15:12:28 minikube kubelet[4443]: W0120 15:12:28.915395    4443 pod_container_deletor.go:75] Container "c08b445dd4cd7da13413ed1093a9053e5d8dec605ad231082adcaa3c3bb9d371" not found in pod's containers
Jan 20 15:12:28 minikube kubelet[4443]: W0120 15:12:28.915844    4443 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-q8rvq through plugin: invalid network status for
Jan 20 15:12:28 minikube kubelet[4443]: W0120 15:12:28.917258    4443 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-qgtdx through plugin: invalid network status for
Jan 20 15:12:28 minikube kubelet[4443]: W0120 15:12:28.935552    4443 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-t6fgm through plugin: invalid network status for
Jan 20 15:12:28 minikube kubelet[4443]: W0120 15:12:28.951896    4443 pod_container_deletor.go:75] Container "2dcfd1424eebe97d50dc0b9673aa7b29c4b6e23936e7ac5ade6067d17f2a14c5" not found in pod's containers
Jan 20 15:12:28 minikube kubelet[4443]: W0120 15:12:28.958396    4443 pod_container_deletor.go:75] Container "39e9dc90946d4bfc742161fd79eb1a812381481bde4a89cf84b450961c111ad7" not found in pod's containers
Jan 20 15:12:28 minikube kubelet[4443]: W0120 15:12:28.967224    4443 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-7gtc7 through plugin: invalid network status for
Jan 20 15:12:29 minikube kubelet[4443]: W0120 15:12:29.003207    4443 pod_container_deletor.go:75] Container "ee721e22a85d9706d9b5cc7ae08edce2a5cff2b1c045542577d4fbdff7a8326a" not found in pod's containers
Jan 20 15:12:30 minikube kubelet[4443]: W0120 15:12:30.017616    4443 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-7gtc7 through plugin: invalid network status for
Jan 20 15:12:30 minikube kubelet[4443]: W0120 15:12:30.024708    4443 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-q8rvq through plugin: invalid network status for
Jan 20 15:12:30 minikube kubelet[4443]: W0120 15:12:30.028982    4443 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-t6fgm through plugin: invalid network status for
Jan 20 15:21:07 minikube kubelet[4443]: E0120 15:21:07.293062    4443 reflector.go:156] object-"default"/"default-token-x5xjt": Failed to list *v1.Secret: secrets "default-token-x5xjt" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "minikube" and this object
Jan 20 15:21:07 minikube kubelet[4443]: I0120 15:21:07.303963    4443 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-x5xjt" (UniqueName: "kubernetes.io/secret/25a0ba8b-5d19-4d62-bcc5-5661bde6ef18-default-token-x5xjt") pod "private-image-test-1" (UID: "25a0ba8b-5d19-4d62-bcc5-5661bde6ef18")
Jan 20 15:21:08 minikube kubelet[4443]: W0120 15:21:08.850638    4443 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/private-image-test-1 through plugin: invalid network status for
Jan 20 15:21:09 minikube kubelet[4443]: E0120 15:21:09.168860    4443 remote_image.go:113] PullImage "my-registry.foo.bar/path/to/image:latest" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/: dial tcp: lookup my-registry.foo.bar on 192.168.64.1:53: no such host
Jan 20 15:21:09 minikube kubelet[4443]: E0120 15:21:09.168994    4443 kuberuntime_image.go:50] Pull image "my-registry.foo.bar/path/to/image:latest" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/: dial tcp: lookup my-registry.foo.bar on 192.168.64.1:53: no such host
Jan 20 15:21:09 minikube kubelet[4443]: E0120 15:21:09.169086    4443 kuberuntime_manager.go:803] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/: dial tcp: lookup my-registry.foo.bar on 192.168.64.1:53: no such host
Jan 20 15:21:09 minikube kubelet[4443]: E0120 15:21:09.169162    4443 pod_workers.go:191] Error syncing pod 25a0ba8b-5d19-4d62-bcc5-5661bde6ef18 ("private-image-test-1_default(25a0ba8b-5d19-4d62-bcc5-5661bde6ef18)"), skipping: failed to "StartContainer" for "uses-private-image" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/: dial tcp: lookup my-registry.foo.bar on 192.168.64.1:53: no such host"
Jan 20 15:21:09 minikube kubelet[4443]: W0120 15:21:09.465129    4443 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/private-image-test-1 through plugin: invalid network status for
Jan 20 15:21:09 minikube kubelet[4443]: E0120 15:21:09.468828    4443 pod_workers.go:191] Error syncing pod 25a0ba8b-5d19-4d62-bcc5-5661bde6ef18 ("private-image-test-1_default(25a0ba8b-5d19-4d62-bcc5-5661bde6ef18)"), skipping: failed to "StartContainer" for "uses-private-image" with ImagePullBackOff: "Back-off pulling image \"my-registry.foo.bar/path/to/image\""
Jan 20 15:21:23 minikube kubelet[4443]: E0120 15:21:23.047503    4443 remote_image.go:113] PullImage "my-registry.foo.bar/path/to/image:latest" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/: dial tcp: lookup my-registry.foo.bar on 192.168.64.1:53: no such host
Jan 20 15:21:23 minikube kubelet[4443]: E0120 15:21:23.047936    4443 kuberuntime_image.go:50] Pull image "my-registry.foo.bar/path/to/image:latest" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/: dial tcp: lookup my-registry.foo.bar on 192.168.64.1:53: no such host
Jan 20 15:21:23 minikube kubelet[4443]: E0120 15:21:23.048143    4443 kuberuntime_manager.go:803] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/: dial tcp: lookup my-registry.foo.bar on 192.168.64.1:53: no such host
Jan 20 15:21:23 minikube kubelet[4443]: E0120 15:21:23.048253    4443 pod_workers.go:191] Error syncing pod 25a0ba8b-5d19-4d62-bcc5-5661bde6ef18 ("private-image-test-1_default(25a0ba8b-5d19-4d62-bcc5-5661bde6ef18)"), skipping: failed to "StartContainer" for "uses-private-image" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/: dial tcp: lookup my-registry.foo.bar on 192.168.64.1:53: no such host"
Jan 20 15:21:34 minikube kubelet[4443]: E0120 15:21:34.037683    4443 pod_workers.go:191] Error syncing pod 25a0ba8b-5d19-4d62-bcc5-5661bde6ef18 ("private-image-test-1_default(25a0ba8b-5d19-4d62-bcc5-5661bde6ef18)"), skipping: failed to "StartContainer" for "uses-private-image" with ImagePullBackOff: "Back-off pulling image \"my-registry.foo.bar/path/to/image\""
Jan 20 15:21:47 minikube kubelet[4443]: E0120 15:21:47.052791    4443 remote_image.go:113] PullImage "my-registry.foo.bar/path/to/image:latest" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/: dial tcp: lookup my-registry.foo.bar on 192.168.64.1:53: no such host
Jan 20 15:21:47 minikube kubelet[4443]: E0120 15:21:47.053376    4443 kuberuntime_image.go:50] Pull image "my-registry.foo.bar/path/to/image:latest" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/: dial tcp: lookup my-registry.foo.bar on 192.168.64.1:53: no such host
Jan 20 15:21:47 minikube kubelet[4443]: E0120 15:21:47.053486    4443 kuberuntime_manager.go:803] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/: dial tcp: lookup my-registry.foo.bar on 192.168.64.1:53: no such host
Jan 20 15:21:47 minikube kubelet[4443]: E0120 15:21:47.053561    4443 pod_workers.go:191] Error syncing pod 25a0ba8b-5d19-4d62-bcc5-5661bde6ef18 ("private-image-test-1_default(25a0ba8b-5d19-4d62-bcc5-5661bde6ef18)"), skipping: failed to "StartContainer" for "uses-private-image" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/: dial tcp: lookup my-registry.foo.bar on 192.168.64.1:53: no such host"
Jan 20 15:21:59 minikube kubelet[4443]: I0120 15:21:59.469716    4443 reconciler.go:183] operationExecutor.UnmountVolume started for volume "default-token-x5xjt" (UniqueName: "kubernetes.io/secret/25a0ba8b-5d19-4d62-bcc5-5661bde6ef18-default-token-x5xjt") pod "25a0ba8b-5d19-4d62-bcc5-5661bde6ef18" (UID: "25a0ba8b-5d19-4d62-bcc5-5661bde6ef18")
Jan 20 15:21:59 minikube kubelet[4443]: I0120 15:21:59.480534    4443 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a0ba8b-5d19-4d62-bcc5-5661bde6ef18-default-token-x5xjt" (OuterVolumeSpecName: "default-token-x5xjt") pod "25a0ba8b-5d19-4d62-bcc5-5661bde6ef18" (UID: "25a0ba8b-5d19-4d62-bcc5-5661bde6ef18"). InnerVolumeSpecName "default-token-x5xjt". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 15:21:59 minikube kubelet[4443]: I0120 15:21:59.572291    4443 reconciler.go:303] Volume detached for volume "default-token-x5xjt" (UniqueName: "kubernetes.io/secret/25a0ba8b-5d19-4d62-bcc5-5661bde6ef18-default-token-x5xjt") on node "minikube" DevicePath ""
Jan 20 15:22:07 minikube kubelet[4443]: I0120 15:22:07.838430    4443 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-x5xjt" (UniqueName: "kubernetes.io/secret/d28007d7-db5c-4924-b3aa-52e875b49f93-default-token-x5xjt") pod "private-image-test-1" (UID: "d28007d7-db5c-4924-b3aa-52e875b49f93")
Jan 20 15:22:08 minikube kubelet[4443]: W0120 15:22:08.382784    4443 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/private-image-test-1 through plugin: invalid network status for
Jan 20 15:22:08 minikube kubelet[4443]: E0120 15:22:08.462611    4443 remote_image.go:113] PullImage "my-registry.foo.bar/my/private/image:master" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden
Jan 20 15:22:08 minikube kubelet[4443]: E0120 15:22:08.462661    4443 kuberuntime_image.go:50] Pull image "my-registry.foo.bar/my/private/image:master" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden
Jan 20 15:22:08 minikube kubelet[4443]: E0120 15:22:08.462757    4443 kuberuntime_manager.go:803] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden
Jan 20 15:22:08 minikube kubelet[4443]: E0120 15:22:08.462793    4443 pod_workers.go:191] Error syncing pod d28007d7-db5c-4924-b3aa-52e875b49f93 ("private-image-test-1_default(d28007d7-db5c-4924-b3aa-52e875b49f93)"), skipping: failed to "StartContainer" for "uses-private-image" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden"
Jan 20 15:22:09 minikube kubelet[4443]: W0120 15:22:09.122406    4443 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/private-image-test-1 through plugin: invalid network status for
Jan 20 15:22:09 minikube kubelet[4443]: E0120 15:22:09.126846    4443 pod_workers.go:191] Error syncing pod d28007d7-db5c-4924-b3aa-52e875b49f93 ("private-image-test-1_default(d28007d7-db5c-4924-b3aa-52e875b49f93)"), skipping: failed to "StartContainer" for "uses-private-image" with ImagePullBackOff: "Back-off pulling image \"my-registry.foo.bar/my/private/image:master\""
Jan 20 15:22:21 minikube kubelet[4443]: E0120 15:22:21.124064    4443 remote_image.go:113] PullImage "my-registry.foo.bar/my/private/image:master" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden
Jan 20 15:22:21 minikube kubelet[4443]: E0120 15:22:21.124166    4443 kuberuntime_image.go:50] Pull image "my-registry.foo.bar/my/private/image:master" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden
Jan 20 15:22:21 minikube kubelet[4443]: E0120 15:22:21.124215    4443 kuberuntime_manager.go:803] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden
Jan 20 15:22:21 minikube kubelet[4443]: E0120 15:22:21.124239    4443 pod_workers.go:191] Error syncing pod d28007d7-db5c-4924-b3aa-52e875b49f93 ("private-image-test-1_default(d28007d7-db5c-4924-b3aa-52e875b49f93)"), skipping: failed to "StartContainer" for "uses-private-image" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden"
Jan 20 15:22:36 minikube kubelet[4443]: E0120 15:22:36.039137    4443 pod_workers.go:191] Error syncing pod d28007d7-db5c-4924-b3aa-52e875b49f93 ("private-image-test-1_default(d28007d7-db5c-4924-b3aa-52e875b49f93)"), skipping: failed to "StartContainer" for "uses-private-image" with ImagePullBackOff: "Back-off pulling image \"my-registry.foo.bar/my/private/image:master\""
Jan 20 15:22:49 minikube kubelet[4443]: E0120 15:22:49.140331    4443 remote_image.go:113] PullImage "my-registry.foo.bar/my/private/image:master" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden
Jan 20 15:22:49 minikube kubelet[4443]: E0120 15:22:49.140481    4443 kuberuntime_image.go:50] Pull image "my-registry.foo.bar/my/private/image:master" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden
Jan 20 15:22:49 minikube kubelet[4443]: E0120 15:22:49.140641    4443 kuberuntime_manager.go:803] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden
Jan 20 15:22:49 minikube kubelet[4443]: E0120 15:22:49.140748    4443 pod_workers.go:191] Error syncing pod d28007d7-db5c-4924-b3aa-52e875b49f93 ("private-image-test-1_default(d28007d7-db5c-4924-b3aa-52e875b49f93)"), skipping: failed to "StartContainer" for "uses-private-image" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://my-registry.foo.bar/v2/my/private/image/manifests/master: denied: access forbidden"
Jan 20 15:23:00 minikube kubelet[4443]: E0120 15:23:00.040331    4443 pod_workers.go:191] Error syncing pod d28007d7-db5c-4924-b3aa-52e875b49f93 ("private-image-test-1_default(d28007d7-db5c-4924-b3aa-52e875b49f93)"), skipping: failed to "StartContainer" for "uses-private-image" with ImagePullBackOff: "Back-off pulling image \"my-registry.foo.bar/my/private/image:master\""
Jan 20 15:23:15 minikube kubelet[4443]: E0120 15:23:15.042404    4443 pod_workers.go:191] Error syncing pod d28007d7-db5c-4924-b3aa-52e875b49f93 ("private-image-test-1_default(d28007d7-db5c-4924-b3aa-52e875b49f93)"), skipping: failed to "StartContainer" for "uses-private-image" with ImagePullBackOff: "Back-off pulling image \"my-registry.foo.bar/my/private/image:master\""
Jan 20 15:23:27 minikube kubelet[4443]: E0120 15:23:27.042485    4443 pod_workers.go:191] Error syncing pod d28007d7-db5c-4924-b3aa-52e875b49f93 ("private-image-test-1_default(d28007d7-db5c-4924-b3aa-52e875b49f93)"), skipping: failed to "StartContainer" for "uses-private-image" with ImagePullBackOff: "Back-off pulling image \"my-registry.foo.bar/my/private/image:master\""

==> kubernetes-dashboard ["4811991f83ae"] <==
2020/01/20 15:12:29 Using namespace: kubernetes-dashboard
2020/01/20 15:12:29 Starting overwatch
2020/01/20 15:12:29 Using in-cluster config to connect to apiserver
2020/01/20 15:12:29 Using secret token for csrf signing
2020/01/20 15:12:29 Initializing csrf token from kubernetes-dashboard-csrf secret
2020/01/20 15:12:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2020/01/20 15:12:29 Successful initial request to the apiserver, version: v1.17.0
2020/01/20 15:12:29 Generating JWE encryption key
2020/01/20 15:12:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2020/01/20 15:12:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2020/01/20 15:12:29 Initializing JWE encryption key from synchronized object
2020/01/20 15:12:29 Creating in-cluster Sidecar client
2020/01/20 15:12:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2020/01/20 15:12:29 Serving insecurely on HTTP port: 9090
2020/01/20 15:12:59 Successful request to sidecar

==> storage-provisioner ["6423e2ae2b21"] <==

The operating system version: macOS 10.15.1. minikube version: v1.6.2, running with the hyperkit driver.

medyagh commented 4 years ago

So to clarify: you can pull the image with docker pull, but not when you apply it through a YAML file? Do you mind trying to create a Kubernetes secret that holds the Docker username and password?

as seen here https://gist.github.com/srounce/4a0338b26df815e966174228753ef61e

partial code:


# Create a docker-registry secret from a GCR JSON service-account key.
# $SECRET_NAME and $CONFIG_PATH come from the surrounding gist script;
# ${@:3} passes through the script's arguments from the third one onward.
kubectl create secret docker-registry $SECRET_NAME \
  --docker-server "https://gcr.io" \
  --docker-username _json_key \
  --docker-email not@val.id \
  --docker-password="`cat $CONFIG_PATH`" ${@:3}
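
For a registry that authenticates with a plain username and password (like the my-registry.foo.bar placeholder in this issue), the equivalent would be roughly the following — the secret name my-registry-secret is just an illustrative choice:

kubectl create secret docker-registry my-registry-secret \
  --docker-server=my-registry.foo.bar \
  --docker-username=my-user \
  --docker-password=my-pass \
  --docker-email=not@val.id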

Regardless of whether this solution helps, we need better documentation with an example of using a private registry in minikube on our website. I would be happy to review any PR with good examples of the different ways of using minikube with a private registry.

dkorittki commented 4 years ago

So to clarify: you can pull the image with docker pull, but not when you apply it through a YAML file?

Yes, exactly. The idea is to authenticate the whole node (in this case the minikube VM) against the private registry, so that you don't need to define imagePullSecrets on pods. But after some more playing around and rereading this documentation I eventually got it to work.

The problem was that minikube ssh 'docker login ...' places the resulting config.json in /home/docker/.docker, which the kubelet does not read when pulling images. Moving that file to one of the paths specified in the mentioned documentation solved the problem.
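
For anyone hitting the same wall, a minimal sketch of that fix, assuming the kubelet runs with its default --root-dir of /var/lib/kubelet (one of the credential search paths listed in the Kubernetes docs):

# inside the minikube VM (via `minikube ssh`)
docker login my-registry.foo.bar -u my-user -p my-pass
# copy the credentials to a path the kubelet actually reads
sudo cp /home/docker/.docker/config.json /var/lib/kubelet/config.json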

Do you mind trying to create a Kubernetes secret that holds the Docker username and password?

Tested this one too, and it worked right out of the box when referencing the secret via imagePullSecrets on the pod.
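
For reference, the only change needed in the test pod is an imagePullSecrets entry pointing at that secret; a minimal sketch, assuming the secret was created under the (hypothetical) name my-registry-secret:

spec:
  imagePullSecrets:
    - name: my-registry-secret
  containers:
    - name: uses-private-image
      image: my-registry.foo.bar/my/private/image:latest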

I would be happy to review any PR with good examples of the different ways of using minikube with a private registry.

Do you think an article under site/content/en/docs/Tutorials would be sufficient? If so, I would gladly help out there.

priyawadhwa commented 4 years ago

@dkorittki I'm glad this is working for you now! Since pulling from private registries is a pretty important and common use case, I think an article in Core Tasks titled "Pulling Images from Private Registries" would be great. Thank you for your help!

serhatcetinkaya commented 4 years ago

Hello, do you think this is enough, or are more details needed about how to use imagePullSecrets? If so, I would be happy to help :)

priyawadhwa commented 4 years ago

Hey @serhatcetinkaya thanks for pointing me to the docs! They actually look pretty comprehensive to me, so I'll go ahead and close this issue.

However, we're working on fixing up our documentation this week. If you'd be interested in helping out, here's a list of issues we need help with that you can take a look at:

https://github.com/kubernetes/minikube/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22+label%3A%22kind%2Fdocumentation%22+

hanweisen commented 2 years ago

Moving that file to one of the paths specified in the mentioned documentation solved the problem.

That documentation no longer lists any concrete paths; see https://stackoverflow.com/questions/60661249/kubernetes-ignores-config-json-while-doing-docker-pull for the relevant locations.
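
For context, the search order the kubelet has historically used for Docker credentials looks roughly like the list below; it is version-dependent, so treat it as an assumption and check the linked answer for your release:

{--root-dir:-/var/lib/kubelet}/config.json
{cwd of kubelet}/config.json
${HOME}/.docker/config.json
/.docker/config.json

plus the legacy .dockercfg equivalents of each path.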