kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Minikube pull image authentication fail #9161

Closed AndrewPanov closed 3 years ago

AndrewPanov commented 4 years ago

Steps to reproduce the issue:

  1. Deploy a Nexus 3 container (sonatype/nexus3:3.25.0 image)

  2. Set up a Nexus 3 Docker registry repository

  3. Apply an HTTP connector (port 8083 in my case) (screenshot)

  4. Push a JAR there (my ci-cd project and so on) (screenshot)

  5. Start minikube with minikube start

  6. Add 'nexus3:8083' to InsecureRegistry in the minikube machine config (~/.minikube/machines/minikube/config.json), like this:

    "HostOptions": {
        "Driver": "",
        "Memory": 0,
        "Disk": 0,
        "EngineOptions": {
            "ArbitraryFlags": null,
            "Dns": null,
            "GraphDir": "",
            "Env": [],
            "Ipv6": false,
            "InsecureRegistry": [
                "10.96.0.0/12",
                "nexus3:8083"
            ],
            "Labels": null,
            "LogLevel": "",
            "StorageDriver": "",
            "SelinuxEnabled": false,
            "TlsVerify": false,
            "RegistryMirror": null,
            "InstallURL": "https://get.docker.com"
        },

    minikube start --insecure-registry didn't work for me :(

  7. Apply the minikube configuration by running minikube start again

  8. Connect the minikube container to the Docker network that contains the Nexus container; in my case docker network connect docker_default minikube

  9. Run docker exec -it minikube bash

  10. Log in to the registry with docker login nexus3 -u {user} -p {pass}

ALL SETUP IS FINISHED
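The setup steps above can be sketched as a short script. Note that, per the minikube documentation, --insecure-registry is only honored the first time a cluster is created, which may be why the flag appeared not to work against an existing cluster; the network and registry names below are taken from the steps above, and {user}/{pass} are placeholders.

```shell
# Recreate the cluster so --insecure-registry takes effect
# (the flag is only honored on first creation of the machine).
minikube delete
minikube start --insecure-registry="nexus3:8083"

# Attach the minikube node container to the network where Nexus runs,
# then log in to the registry from inside it.
docker network connect docker_default minikube
docker exec -it minikube docker login nexus3:8083 -u {user} -p {pass}
```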

  1. Apply the deployment with kubectl apply -f {path}/deployment.yaml. The deployment.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ci-cd-deployment
      labels:
        app: ci-cd
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ci-cd
      template:
        metadata:
          labels:
            app: ci-cd
        spec:
          containers:
            - name: ci-cd
              image: nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT
              ports:
                - containerPort: 8090
  2. Run minikube logs -f

  3. Read the failure message:

    Sep 02 17:09:19 minikube kubelet[5467]: E0902 17:09:19.917581    5467 remote_image.go:113] PullImage "nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials
  4. Inside the minikube container, run cat ~/.docker/config.json:

    {
        "auths": {
            "nexus3:8083": {
                "auth": *credentials in base64*
            }
        },
        "HttpHeaders": {
            "User-Agent": "Docker-Client/19.03.8 (linux)"
        }
    }

    And the credentials are the same ones I passed to the docker login command.

  5. Remove all deployments with kubectl delete deployments --all

  6. Inside the minikube container, run docker pull nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT

  7. See the pull start successfully

  8. Fine. Pull the whole image

  9. Re-apply the deployment from the host with kubectl apply -f {path}/deployment.yaml

  10. See successful logs, or run kubectl get deployments
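A likely explanation for the "no basic auth credentials" error above: the kubelet's pull request does not pick up an interactive docker login inside the node; Kubernetes normally expects registry credentials to be supplied as an imagePullSecret referenced from the pod spec. For reference, the "auth" value in config.json is just base64 of "user:password" — a minimal check:

```shell
# The "auth" field in ~/.docker/config.json is base64("user:password");
# e.g. for the placeholder credentials user:pass:
printf '%s' 'user:pass' | base64
# -> dXNlcjpwYXNz
```

A secret carrying the same credentials can be created and referenced like this (regcred is an arbitrary name; {user}/{pass} are placeholders):

    kubectl create secret docker-registry regcred \
      --docker-server=nexus3:8083 \
      --docker-username={user} --docker-password={pass}

and in the deployment's pod spec:

    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        ...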

Full output of minikube start command used, if not already included:

    šŸ˜„  minikube v1.12.2 on Darwin 10.15.5
    āœØ  Using the docker driver based on user configuration
    šŸ‘  Starting control plane node minikube in cluster minikube
    šŸ”„  Creating docker container (CPUs=2, Memory=4000MB) ...
    ā—  This container is having trouble accessing https://k8s.gcr.io
    šŸ’”  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
    šŸ³  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
    šŸ”Ž  Verifying Kubernetes components...
    šŸŒŸ  Enabled addons: default-storageclass, storage-provisioner
    šŸ„  Done! kubectl is now configured to use "minikube"

ā— /usr/local/bin/kubectl is version 1.16.6-beta.0, which may be incompatible with Kubernetes 1.18.3. šŸ’” You can also use 'minikube kubectl -- get pods' to invoke a matching version

Optional: Full output of minikube logs command: Logs are in details

==> Docker <== -- Logs begin at Wed 2020-09-02 17:02:23 UTC, end at Wed 2020-09-02 17:55:03 UTC. -- Sep 02 17:04:00 minikube dockerd[3873]: time="2020-09-02T17:04:00.968126300Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:01 minikube dockerd[3873]: time="2020-09-02T17:04:01.023658900Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:01 minikube dockerd[3873]: time="2020-09-02T17:04:01.123523200Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:01 minikube dockerd[3873]: time="2020-09-02T17:04:01.171281800Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:01 minikube dockerd[3873]: time="2020-09-02T17:04:01.193728300Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:01 minikube dockerd[3873]: time="2020-09-02T17:04:01.364892100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:01 minikube dockerd[3873]: time="2020-09-02T17:04:01.464766700Z" level=info msg="Removing stale sandbox 1fc614cd0f5684319d84757b97ad48c0beb4d25c4f4d6b0a87012cec72b89021 (2b3b7f3a871164a054cc8ff0a5476015478bf4187e9372f0aafe3491271ab26f)" Sep 02 17:04:01 minikube dockerd[3873]: time="2020-09-02T17:04:01.467912400Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b996549dd3d703e734614b19a100532c2cf99eefb12260667aa1c7b51bd0d349 fe728669f428920f891ed722d6be8e4845372a688b0533b385b41041964d3acd], retrying...." 
Sep 02 17:04:01 minikube dockerd[3873]: time="2020-09-02T17:04:01.679202500Z" level=info msg="Removing stale sandbox 2a4a7c685848d103b8389567ebf55cd7350c83a0a290f7050eb9fce1e8513982 (72524574fcedb045c71dc0c144c72a4b1917744903051d455c1d12c8d670b32f)" Sep 02 17:04:01 minikube dockerd[3873]: time="2020-09-02T17:04:01.681218100Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b996549dd3d703e734614b19a100532c2cf99eefb12260667aa1c7b51bd0d349 09b8e77721fa7782646bfbe43620c76f5fd7eb1dda08408ea58fc0819846100f], retrying...." Sep 02 17:04:01 minikube dockerd[3873]: time="2020-09-02T17:04:01.861294800Z" level=info msg="Removing stale sandbox 5861bd70b8e7da6ecee1502ae7702f70fcca0fd8e56684f76948400da6aac402 (2de8cbc225fc9dbb1cf0670ec52a3967fcc721ec7244766a45b483aef7969658)" Sep 02 17:04:01 minikube dockerd[3873]: time="2020-09-02T17:04:01.863294300Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b996549dd3d703e734614b19a100532c2cf99eefb12260667aa1c7b51bd0d349 649dedb9a42b0332780bfed92d370f712b0c12a4937dcee2cc74f73cb799dcd4], retrying...." Sep 02 17:04:02 minikube dockerd[3873]: time="2020-09-02T17:04:02.083520500Z" level=info msg="Removing stale sandbox c16553f09c00fefd1613556f237295fc56e5c91529e446c9f9d852dda1e0433f (6b0f5c402cb203764506f1766b4552b0dc2f3757168bfa3bca455d4a9f1dd2ad)" Sep 02 17:04:02 minikube dockerd[3873]: time="2020-09-02T17:04:02.085642300Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b996549dd3d703e734614b19a100532c2cf99eefb12260667aa1c7b51bd0d349 a716c8ae204f517d5af1c68bda68a0cb53bb4a12970112977aa28b3d3b5cad0d], retrying...." 
Sep 02 17:04:02 minikube dockerd[3873]: time="2020-09-02T17:04:02.290281700Z" level=info msg="Removing stale sandbox f0df332f85cfa562cf58125d744806a5a45ccc1a0fad4a4c01a8f882cfa36f45 (a3bec5c68afe735a76b4464ee04a46be9ffe224749970176d4955650fcb5a857)" Sep 02 17:04:02 minikube dockerd[3873]: time="2020-09-02T17:04:02.292785600Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b996549dd3d703e734614b19a100532c2cf99eefb12260667aa1c7b51bd0d349 8feb3c65e14deaaf01685f92d0a4d72a2ed63752cb9a93d3e63e184a8e21b96f], retrying...." Sep 02 17:04:02 minikube dockerd[3873]: time="2020-09-02T17:04:02.305877900Z" level=info msg="There are old running containers, the network config will not take affect" Sep 02 17:04:02 minikube dockerd[3873]: time="2020-09-02T17:04:02.325390700Z" level=info msg="Loading containers: done." Sep 02 17:04:02 minikube dockerd[3873]: time="2020-09-02T17:04:02.373961100Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8 Sep 02 17:04:02 minikube dockerd[3873]: time="2020-09-02T17:04:02.374823100Z" level=info msg="Daemon has completed initialization" Sep 02 17:04:02 minikube dockerd[3873]: time="2020-09-02T17:04:02.507272500Z" level=info msg="API listen on /var/run/docker.sock" Sep 02 17:04:02 minikube systemd[1]: Started Docker Application Container Engine. 
Sep 02 17:04:02 minikube dockerd[3873]: time="2020-09-02T17:04:02.507969000Z" level=info msg="API listen on [::]:2376" Sep 02 17:04:03 minikube dockerd[3873]: time="2020-09-02T17:04:03.730361500Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:07 minikube dockerd[3873]: time="2020-09-02T17:04:07.619403600Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:07 minikube dockerd[3873]: time="2020-09-02T17:04:07.691362900Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:07 minikube dockerd[3873]: time="2020-09-02T17:04:07.699603300Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:07 minikube dockerd[3873]: time="2020-09-02T17:04:07.782391800Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:07 minikube dockerd[3873]: time="2020-09-02T17:04:07.826400500Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:07 minikube dockerd[3873]: time="2020-09-02T17:04:07.903198600Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:07 minikube dockerd[3873]: time="2020-09-02T17:04:07.906843600Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:07 minikube dockerd[3873]: time="2020-09-02T17:04:07.915482100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:08 minikube dockerd[3873]: time="2020-09-02T17:04:08.766956000Z" level=info msg="ignoring event" module=libcontainerd namespace=moby 
topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:04:17 minikube dockerd[3873]: time="2020-09-02T17:04:17.250862300Z" level=info msg="Container 50623430be46ca82bf1a0dac02753ac52d2d7d7f62d2d5619e425b10a5ced1ad failed to exit within 10 seconds of signal 15 - using the force" Sep 02 17:04:17 minikube dockerd[3873]: time="2020-09-02T17:04:17.383089500Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:06:28 minikube dockerd[3873]: time="2020-09-02T17:06:28.439241700Z" level=info msg="Error logging in to v2 endpoint, trying next endpoint: Get https://nexus3:8083/v2/: http: server gave HTTP response to HTTPS client" Sep 02 17:07:14 minikube dockerd[3873]: time="2020-09-02T17:07:14.473085800Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" Sep 02 17:07:14 minikube dockerd[3873]: time="2020-09-02T17:07:14.473339600Z" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]" Sep 02 17:07:20 minikube dockerd[3873]: time="2020-09-02T17:07:20.239938700Z" level=warning msg="Error getting v2 registry: Get https://nexus3:8083/v2/: http: server gave HTTP response to HTTPS client" Sep 02 17:07:20 minikube dockerd[3873]: time="2020-09-02T17:07:20.240047400Z" level=info msg="Attempting next endpoint for pull after error: Get https://nexus3:8083/v2/: http: server gave HTTP response to HTTPS client" Sep 02 17:07:20 minikube dockerd[3873]: time="2020-09-02T17:07:20.252536100Z" level=info msg="Attempting next endpoint for pull after error: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials" Sep 02 17:07:20 minikube dockerd[3873]: time="2020-09-02T17:07:20.252918600Z" level=error msg="Handler for POST /images/create returned error: Get 
http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials" Sep 02 17:07:37 minikube dockerd[3873]: time="2020-09-02T17:07:37.009767700Z" level=warning msg="Error getting v2 registry: Get https://nexus3:8083/v2/: http: server gave HTTP response to HTTPS client" Sep 02 17:07:37 minikube dockerd[3873]: time="2020-09-02T17:07:37.010797600Z" level=info msg="Attempting next endpoint for pull after error: Get https://nexus3:8083/v2/: http: server gave HTTP response to HTTPS client" Sep 02 17:07:37 minikube dockerd[3873]: time="2020-09-02T17:07:37.038130100Z" level=info msg="Attempting next endpoint for pull after error: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials" Sep 02 17:07:37 minikube dockerd[3873]: time="2020-09-02T17:07:37.038207800Z" level=error msg="Handler for POST /images/create returned error: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials" Sep 02 17:08:18 minikube dockerd[3873]: time="2020-09-02T17:08:18.332677800Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:08:30 minikube dockerd[3873]: time="2020-09-02T17:08:30.587279800Z" level=warning msg="Error getting v2 registry: Get https://nexus3:8083/v2/: http: server gave HTTP response to HTTPS client" Sep 02 17:08:30 minikube dockerd[3873]: time="2020-09-02T17:08:30.587476600Z" level=info msg="Attempting next endpoint for pull after error: Get https://nexus3:8083/v2/: http: server gave HTTP response to HTTPS client" Sep 02 17:08:30 minikube dockerd[3873]: time="2020-09-02T17:08:30.608319900Z" level=info msg="Attempting next endpoint for pull after error: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials" Sep 02 17:08:30 minikube dockerd[3873]: time="2020-09-02T17:08:30.608910300Z" level=error msg="Handler for POST /images/create returned error: Get 
http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials" Sep 02 17:09:19 minikube dockerd[3873]: time="2020-09-02T17:09:19.903810900Z" level=warning msg="Error getting v2 registry: Get https://nexus3:8083/v2/: http: server gave HTTP response to HTTPS client" Sep 02 17:09:19 minikube dockerd[3873]: time="2020-09-02T17:09:19.904366900Z" level=info msg="Attempting next endpoint for pull after error: Get https://nexus3:8083/v2/: http: server gave HTTP response to HTTPS client" Sep 02 17:09:19 minikube dockerd[3873]: time="2020-09-02T17:09:19.915846200Z" level=info msg="Attempting next endpoint for pull after error: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials" Sep 02 17:09:19 minikube dockerd[3873]: time="2020-09-02T17:09:19.916474600Z" level=error msg="Handler for POST /images/create returned error: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials" Sep 02 17:10:10 minikube dockerd[3873]: time="2020-09-02T17:10:10.019917300Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Sep 02 17:10:29 minikube dockerd[3873]: time="2020-09-02T17:10:29.482727800Z" level=warning msg="Error getting v2 registry: Get https://nexus3:8083/v2/: http: server gave HTTP response to HTTPS client" Sep 02 17:10:29 minikube dockerd[3873]: time="2020-09-02T17:10:29.483031500Z" level=info msg="Attempting next endpoint for pull after error: Get https://nexus3:8083/v2/: http: server gave HTTP response to HTTPS client" Sep 02 17:11:37 minikube dockerd[3873]: time="2020-09-02T17:11:37.466890000Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. 
Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" Sep 02 17:11:37 minikube dockerd[3873]: time="2020-09-02T17:11:37.467725800Z" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]" ==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 55c653536c3bd 38eb961a95ba6 43 minutes ago Running ci-cd 0 ad3f9329ccc67 332f8cce1a9a2 9c3ca9f065bb1 46 minutes ago Running storage-provisioner 4 5b8b25ab1ca95 6959df28cb96c 9c3ca9f065bb1 50 minutes ago Exited storage-provisioner 3 5b8b25ab1ca95 216373be2c742 67da37a9a360e 50 minutes ago Running coredns 2 6ac2c751f67d2 34c06cf7fdee5 da26705ccb4b5 50 minutes ago Running kube-controller-manager 2 d2b4feb262031 e49f35ec797ce 7e28efa976bd1 50 minutes ago Running kube-apiserver 2 c26f918735ee3 8239d4161115e 303ce5db0e90d 50 minutes ago Running etcd 2 019c55c9e1227 0312798b59349 3439b7546f29b 50 minutes ago Running kube-proxy 2 cd7c56c878745 abab371696256 76216c34ed0c7 50 minutes ago Running kube-scheduler 2 46d50c62c7899 50623430be46c 67da37a9a360e 50 minutes ago Exited coredns 1 9dffe3d1f03d7 c80666a5197de da26705ccb4b5 50 minutes ago Exited kube-controller-manager 1 ee50bd4f99eb3 80a740a16f799 3439b7546f29b 51 minutes ago Exited kube-proxy 1 72524574fcedb 7ac3945e14eda 7e28efa976bd1 51 minutes ago Exited kube-apiserver 1 6b0f5c402cb20 6d6dd58214a92 303ce5db0e90d 51 minutes ago Exited etcd 1 a3bec5c68afe7 a6c2e21439ddf 76216c34ed0c7 51 minutes ago Exited kube-scheduler 1 2b3b7f3a87116 ==> coredns [216373be2c74] <== .:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.7 linux/amd64, go1.13.6, da7f65b ==> coredns [50623430be46] <== [INFO] SIGTERM: Shutting down servers then terminating .:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.7 linux/amd64, go1.13.6, da7f65b [INFO] plugin/health: Going into lameduck mode for 5s 
[ERROR] plugin/errors: 2 4001849959105167556.8591881993295046490. HINFO: dial udp 192.168.65.1:53: connect: network is unreachable E0902 17:04:04.899206 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused E0902 17:04:04.900959 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused E0902 17:04:04.900973 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused E0902 17:04:05.900574 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused E0902 17:04:05.909346 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused E0902 17:04:05.910541 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused E0902 17:04:06.901621 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused E0902 17:04:06.911503 1 reflector.go:153] 
pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused E0902 17:04:06.912013 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused ==> describe nodes <== Name: minikube Roles: master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=be7c19d391302656d27f1f213657d925c4e1cfc2 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_09_02T20_02_59_0700 minikube.k8s.io/version=v1.12.2 node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 02 Sep 2020 17:02:54 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Wed, 02 Sep 2020 17:55:04 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Wed, 02 Sep 2020 17:51:34 +0000 Wed, 02 Sep 2020 17:02:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 02 Sep 2020 17:51:34 +0000 Wed, 02 Sep 2020 17:02:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 02 Sep 2020 17:51:34 +0000 Wed, 02 Sep 2020 17:02:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 02 Sep 2020 17:51:34 +0000 Wed, 02 Sep 2020 17:03:11 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 172.17.0.3 Hostname: minikube Capacity: cpu: 4 
ephemeral-storage: 65792556Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 6093312Ki pods: 110 Allocatable: cpu: 4 ephemeral-storage: 65792556Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 6093312Ki pods: 110 System Info: Machine ID: 706dbba86b2548adbe8ac35506b6a528 System UUID: 8ddbfb16-e905-4486-8138-c4a41e017e74 Boot ID: d1eb709e-297c-4ba4-b1c4-eb02ae2f9248 Kernel Version: 4.19.76-linuxkit OS Image: Ubuntu 20.04 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://19.3.8 Kubelet Version: v1.18.3 Kube-Proxy Version: v1.18.3 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- default ci-cd-deployment-9bc4bffc9-tlpz8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m kube-system coredns-66bff467f8-9496b 100m (2%) 0 (0%) 70Mi (1%) 170Mi (2%) 52m kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52m kube-system kube-apiserver-minikube 250m (6%) 0 (0%) 0 (0%) 0 (0%) 52m kube-system kube-controller-manager-minikube 200m (5%) 0 (0%) 0 (0%) 0 (0%) 52m kube-system kube-proxy-dgxwp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52m kube-system kube-scheduler-minikube 100m (2%) 0 (0%) 0 (0%) 0 (0%) 52m kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 650m (16%) 0 (0%) memory 70Mi (1%) 170Mi (2%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 52m kubelet, minikube Starting kubelet. 
Normal NodeHasSufficientMemory 52m (x4 over 52m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 52m (x5 over 52m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 52m (x4 over 52m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 52m kubelet, minikube Updated Node Allocatable limit across pods Normal Starting 52m kubelet, minikube Starting kubelet. Normal NodeHasSufficientMemory 52m kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 52m kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 52m kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeNotReady 52m kubelet, minikube Node minikube status is now: NodeNotReady Normal NodeAllocatableEnforced 52m kubelet, minikube Updated Node Allocatable limit across pods Normal Starting 51m kube-proxy, minikube Starting kube-proxy. Normal NodeReady 51m kubelet, minikube Node minikube status is now: NodeReady Normal Starting 50m kubelet, minikube Starting kubelet. Normal NodeAllocatableEnforced 50m kubelet, minikube Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 50m (x8 over 50m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 50m (x8 over 50m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 50m (x7 over 50m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal Starting 50m kube-proxy, minikube Starting kube-proxy. ==> dmesg <== [ +0.006743] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
[ +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.003028] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.008001] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.000233] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.000002] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [Aug31 06:18] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.011545] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.000675] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. [ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
[ +0.013948] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
(the same upperdir/workdir warning pair repeats many times through [Aug31 06:26], [Aug31 09:27] and [Sep 2 17:03]; identical lines omitted)

==> etcd [6d6dd58214a9] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-09-02 17:03:56.303920 I | etcdmain: etcd Version: 3.4.3
2020-09-02 17:03:56.303966 I | etcdmain: Git SHA: 3cf2f69b5
2020-09-02 17:03:56.303983 I | etcdmain: Go Version: go1.12.12
2020-09-02 17:03:56.304215 I | etcdmain: Go OS/Arch: linux/amd64
2020-09-02 17:03:56.304229 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2020-09-02 17:03:56.304522 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2020-09-02 17:03:56.304603 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-09-02 17:03:56.308946 I | embed: name = minikube 2020-09-02 17:03:56.308996 I | embed: data dir = /var/lib/minikube/etcd 2020-09-02 17:03:56.309014 I | embed: member dir = /var/lib/minikube/etcd/member 2020-09-02 17:03:56.309026 I | embed: heartbeat = 100ms 2020-09-02 17:03:56.309043 I | embed: election = 1000ms 2020-09-02 17:03:56.309076 I | embed: snapshot count = 10000 2020-09-02 17:03:56.309384 I | embed: advertise client URLs = https://172.17.0.3:2379 2020-09-02 17:03:56.309404 I | embed: initial advertise peer URLs = https://172.17.0.3:2380 2020-09-02 17:03:56.309440 I | embed: initial cluster = 2020-09-02 17:03:56.322865 I | etcdserver: restarting member b273bc7741bcb020 in cluster 86482fea2286a1d2 at commit index 454 raft2020/09/02 17:03:56 INFO: b273bc7741bcb020 switched to configuration voters=() raft2020/09/02 17:03:56 INFO: b273bc7741bcb020 became follower at term 2 raft2020/09/02 17:03:56 INFO: newRaft b273bc7741bcb020 [peers: [], term: 2, commit: 454, applied: 0, lastindex: 454, lastterm: 2] 2020-09-02 17:03:56.329758 W | auth: simple token is not cryptographically signed 2020-09-02 17:03:56.331320 I | etcdserver: starting server... 
[version: 3.4.3, cluster version: to_be_decided] 2020-09-02 17:03:56.333853 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-09-02 17:03:56.333985 I | embed: listening for peers on 172.17.0.3:2380 2020-09-02 17:03:56.334084 I | embed: listening for metrics on http://127.0.0.1:2381 raft2020/09/02 17:03:56 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056) 2020-09-02 17:03:56.334658 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2 2020-09-02 17:03:56.334963 N | etcdserver/membership: set the initial cluster version to 3.4 2020-09-02 17:03:56.335027 I | etcdserver/api: enabled capabilities for version 3.4 raft2020/09/02 17:03:57 INFO: b273bc7741bcb020 is starting a new election at term 2 raft2020/09/02 17:03:57 INFO: b273bc7741bcb020 became candidate at term 3 raft2020/09/02 17:03:57 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 3 raft2020/09/02 17:03:57 INFO: b273bc7741bcb020 became leader at term 3 raft2020/09/02 17:03:57 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 3 2020-09-02 17:03:57.628098 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2 2020-09-02 17:03:57.628496 I | embed: ready to serve client requests 2020-09-02 17:03:57.628928 I | embed: ready to serve client requests 2020-09-02 17:03:57.632369 I | embed: serving client requests on 172.17.0.3:2379 2020-09-02 17:03:57.633371 I | embed: serving client requests on 127.0.0.1:2379 2020-09-02 17:03:59.884908 N | pkg/osutil: received terminated signal, shutting down... WARNING: 2020/09/02 17:03:59 grpc: addrConn.createTransport failed to connect to {172.17.0.3:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 172.17.0.3:2379: connect: connection refused". Reconnecting... WARNING: 2020/09/02 17:04:00 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... WARNING: 2020/09/02 17:04:00 grpc: addrConn.createTransport failed to connect to {172.17.0.3:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 172.17.0.3:2379: connect: connection refused". Reconnecting... 2020-09-02 17:04:00.982063 I | etcdserver: skipped leadership transfer for single voting member cluster ==> etcd [8239d4161115] <== raft2020/09/02 17:04:23 INFO: b273bc7741bcb020 became candidate at term 4 raft2020/09/02 17:04:23 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 4 raft2020/09/02 17:04:23 INFO: b273bc7741bcb020 became leader at term 4 raft2020/09/02 17:04:23 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 4 2020-09-02 17:04:23.486554 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2 2020-09-02 17:04:23.489105 I | embed: ready to serve client requests 2020-09-02 17:04:23.489360 I | embed: ready to serve client requests 2020-09-02 17:04:23.495282 I | embed: serving client requests on 127.0.0.1:2379 2020-09-02 17:04:23.531290 I | embed: serving client requests on 172.17.0.3:2379 2020-09-02 17:08:05.339940 W | etcdserver: failed to revoke 3020744fc6cbd44f ("etcdserver: request timed out") 2020-09-02 17:08:05.753103 W | etcdserver: request "header: lease_revoke:" with result "size:29" took too long (304.8715ms) to execute 2020-09-02 17:08:05.762669 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "error:context canceled" took too long (337.9412ms) to execute 2020-09-02 
17:08:05.765281 W | etcdserver: failed to revoke 3020744fc6cbd44f ("lease not found") 2020-09-02 17:08:05.765431 W | etcdserver: read-only range request "key:\"/registry/csidrivers\" range_end:\"/registry/csidrivert\" count_only:true " with result "range_response_count:0 size:5" took too long (361.6914ms) to execute WARNING: 2020/09/02 17:08:05 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing" 2020-09-02 17:08:05.788075 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (299.8679ms) to execute 2020-09-02 17:08:05.818102 W | etcdserver: read-only range request "key:\"/registry/controllerrevisions\" range_end:\"/registry/controllerrevisiont\" count_only:true " with result "range_response_count:0 size:7" took too long (361.8341ms) to execute 2020-09-02 17:08:05.819739 W | etcdserver: read-only range request "key:\"/registry/statefulsets\" range_end:\"/registry/statefulsett\" count_only:true " with result "range_response_count:0 size:5" took too long (415.7395ms) to execute 2020-09-02 17:08:05.836514 W | etcdserver: read-only range request "key:\"/registry/minions\" range_end:\"/registry/miniont\" count_only:true " with result "range_response_count:0 size:7" took too long (320.4464ms) to execute 2020-09-02 17:08:05.843320 W | etcdserver: read-only range request "key:\"/registry/podtemplates\" range_end:\"/registry/podtemplatet\" count_only:true " with result "range_response_count:0 size:5" took too long (421.776ms) to execute 2020-09-02 17:08:05.846104 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests\" range_end:\"/registry/certificatesigningrequestt\" count_only:true " with result "range_response_count:0 size:7" took too long (429.696ms) to execute 2020-09-02 17:08:05.847900 W | etcdserver: read-only range request "key:\"/registry/namespaces\" range_end:\"/registry/namespacet\" count_only:true " 
with result "range_response_count:0 size:7" took too long (431.6518ms) to execute 2020-09-02 17:08:06.249383 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitiont\" count_only:true " with result "range_response_count:0 size:5" took too long (357.4744ms) to execute 2020-09-02 17:08:06.399395 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses\" range_end:\"/registry/runtimeclasset\" count_only:true " with result "range_response_count:0 size:5" took too long (461.2844ms) to execute 2020-09-02 17:08:06.483235 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:7" took too long (637.6256ms) to execute 2020-09-02 17:08:06.513912 W | etcdserver: read-only range request "key:\"/registry/roles\" range_end:\"/registry/rolet\" count_only:true " with result "range_response_count:0 size:7" took too long (593.7629ms) to execute 2020-09-02 17:08:06.960713 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (885.2897ms) to execute 2020-09-02 17:08:07.082764 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " with result "range_response_count:0 size:5" took too long (1.0962495s) to execute 2020-09-02 17:08:09.123061 W | etcdserver: read-only range request "key:\"/registry/events\" range_end:\"/registry/eventt\" count_only:true " with result "range_response_count:0 size:7" took too long (106.9414ms) to execute 2020-09-02 17:08:09.137746 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:610" took too long (729.2231ms) to execute 2020-09-02 
17:08:11.921770 W | etcdserver: read-only range request "key:\"/registry/daemonsets\" range_end:\"/registry/daemonsett\" count_only:true " with result "range_response_count:0 size:7" took too long (152.2035ms) to execute 2020-09-02 17:08:12.830122 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:286" took too long (134.0629ms) to execute 2020-09-02 17:08:12.844500 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " with result "range_response_count:0 size:5" took too long (143.6754ms) to execute 2020-09-02 17:08:15.589724 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:610" took too long (182.2209ms) to execute 2020-09-02 17:08:17.682971 W | etcdserver: read-only range request "key:\"/registry/endpointslices/default/kubernetes\" " with result "range_response_count:1 size:482" took too long (152.8145ms) to execute 2020-09-02 17:08:17.696655 W | etcdserver: read-only range request "key:\"/registry/deployments/default/\" range_end:\"/registry/deployments/default0\" limit:500 " with result "range_response_count:1 size:2700" took too long (163.0964ms) to execute 2020-09-02 17:08:17.768233 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:16" took too long (117.246ms) to execute 2020-09-02 17:08:18.625643 W | etcdserver: read-only range request "key:\"/registry/configmaps\" range_end:\"/registry/configmapt\" count_only:true " with result "range_response_count:0 size:7" took too long (200.822ms) to execute 2020-09-02 17:08:19.763921 W | etcdserver: read-only range request "key:\"/registry/limitranges\" range_end:\"/registry/limitranget\" count_only:true " with result "range_response_count:0 size:5" took too long (120.1021ms) to execute 2020-09-02 17:08:21.926728 W | 
etcdserver: read-only range request "key:\"/registry/cronjobs\" range_end:\"/registry/cronjobt\" count_only:true " with result "range_response_count:0 size:5" took too long (409.7883ms) to execute 2020-09-02 17:08:23.239362 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (249.8528ms) to execute 2020-09-02 17:14:23.257824 I | mvcc: store.index: compact 737 2020-09-02 17:14:23.349709 I | mvcc: finished scheduled compaction at 737 (took 86.996ms) 2020-09-02 17:15:25.004037 W | etcdserver: read-only range request "key:\"/registry/masterleases/172.17.0.3\" " with result "range_response_count:1 size:129" took too long (140.8949ms) to execute 2020-09-02 17:19:22.932257 I | mvcc: store.index: compact 968 2020-09-02 17:19:22.990465 I | mvcc: finished scheduled compaction at 968 (took 55.8158ms) 2020-09-02 17:24:22.615998 I | mvcc: store.index: compact 1174 2020-09-02 17:24:22.661213 I | mvcc: finished scheduled compaction at 1174 (took 43.7604ms) 2020-09-02 17:29:22.316255 I | mvcc: store.index: compact 1382 2020-09-02 17:29:22.371961 I | mvcc: finished scheduled compaction at 1382 (took 54.2336ms) 2020-09-02 17:34:22.007576 I | mvcc: store.index: compact 1590 2020-09-02 17:34:22.052949 I | mvcc: finished scheduled compaction at 1590 (took 43.5823ms) 2020-09-02 17:39:21.654552 I | mvcc: store.index: compact 1798 2020-09-02 17:39:21.692731 I | mvcc: finished scheduled compaction at 1798 (took 35.9258ms) 2020-09-02 17:44:21.360959 I | mvcc: store.index: compact 2005 2020-09-02 17:44:21.425786 I | mvcc: finished scheduled compaction at 2005 (took 60.3353ms) 2020-09-02 17:49:21.078146 I | mvcc: store.index: compact 2212 2020-09-02 17:49:21.133293 I | mvcc: finished scheduled compaction at 2212 (took 51.2692ms) 2020-09-02 17:54:20.771660 I | mvcc: store.index: compact 2423 2020-09-02 17:54:20.815296 I | mvcc: finished scheduled compaction at 2423 (took 
41.2112ms) ==> kernel <== 17:55:07 up 4 days, 4:22, 0 users, load average: 2.88, 1.93, 2.22 Linux minikube 4.19.76-linuxkit #1 SMP Tue May 26 11:42:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04 LTS" ==> kube-apiserver [7ac3945e14ed] <== W0902 17:04:01.507488 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.507545 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.507618 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.509966 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.510456 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.512217 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.513489 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.513634 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.514079 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.514192 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.514528 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.569023 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.569080 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.569031 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
W0902 17:04:01.569172 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.569215 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.569270 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.569335 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.569369 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.569445 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.569452 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.569555 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.569601 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.569669 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.569757 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:01.572342 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.721575 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.767359 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.797616 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
W0902 17:04:02.809597 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.809858 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.864847 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.864985 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.865046 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.886421 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.888917 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.890299 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.894838 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.918926 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.923771 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.964929 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.965597 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.965628 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.965809 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
W0902 17:04:02.966862 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.994017 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:02.999545 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:03.002016 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:03.007183 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:03.016591 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:03.024585 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:03.024585 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:03.064923 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:03.065568 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:03.066479 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:03.066567 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:03.089334 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:03.108789 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0902 17:04:03.124644 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
W0902 17:04:03.128175 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...

==> kube-apiserver [e49f35ec797c] <==
I0902 17:04:29.979833 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0902 17:04:30.041043 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0902 17:04:32.242811 1 controller.go:606] quota admission added evaluator for: endpoints
I0902 17:04:35.099389 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0902 17:07:13.577096 1 controller.go:606] quota admission added evaluator for: replicasets.apps
E0902 17:08:05.632872 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I0902 17:08:05.725816 1 trace.go:116] Trace[918131012]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.3 (started: 2020-09-02 17:07:57.8494923 +0000 UTC m=+215.742183901) (total time: 7.8653242s):
Trace[918131012]: [7.8653242s] [7.68752s] END
I0902 17:08:06.029877 1 trace.go:116] Trace[1070622217]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-09-02 17:07:59.9573558 +0000 UTC m=+217.850055101) (total time: 6.0265953s):
Trace[1070622217]: [6.0085706s] [5.9967977s] About to write a response
I0902 17:08:09.116901 1 trace.go:116] Trace[797438297]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2020-09-02 17:08:03.7128415 +0000 UTC m=+221.605533801) (total time: 5.3901968s):
Trace[797438297]: [5.3901968s] [5.3901968s] END
I0902 17:08:09.144868 1 trace.go:116] Trace[1312619161]: "List" url:/apis/batch/v1/jobs,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/system:serviceaccount:kube-system:cronjob-controller,client:172.17.0.3 (started: 2020-09-02 17:08:03.4659863 +0000 UTC m=+221.363897501) (total time: 5.6692693s):
Trace[1312619161]: [205.5367ms] [205.5367ms] About to List from storage
Trace[1312619161]: [5.6466651s] [5.4411284s] Listing from storage done
I0902 17:08:09.959101 1 trace.go:116] Trace[1619797907]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-09-02 17:08:06.2808895 +0000 UTC m=+224.173584801) (total time: 3.6637335s):
Trace[1619797907]: [3.6276354s] [3.622123s] About to write a response
I0902 17:08:12.010563 1 trace.go:116] Trace[663405634]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-09-02 17:08:10.03505 +0000 UTC m=+227.927741501) (total time: 1.9977871s):
Trace[663405634]: [1.6784231s] [1.5903535s] Transaction prepared
Trace[663405634]: [1.977149s] [298.7259ms] Transaction committed
I0902 17:08:13.116228 1 trace.go:116] Trace[551019122]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-09-02 17:08:12.1934863 +0000 UTC m=+230.119976701) (total time: 906.7336ms):
Trace[551019122]: [888.2154ms] [886.3869ms] About to write a response
I0902 17:08:13.845587 1 trace.go:116] Trace[1533753292]: "List etcd3" key:/masterleases/,resourceVersion:0,limit:0,continue: (started: 2020-09-02 17:08:13.2917058 +0000 UTC m=+231.218155401) (total time: 536.8158ms):
Trace[1533753292]: [536.8158ms] [536.8158ms] END
I0902 17:08:14.557429 1 trace.go:116] Trace[2029248371]: "Get" url:/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-09-02 17:08:13.9356317 +0000 UTC m=+231.862082001) (total time: 510.3658ms):
Trace[2029248371]: [480.3156ms] [478.8831ms] About to write a response
I0902 17:08:15.862669 1 trace.go:116] Trace[1788503442]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-09-02 17:08:15.1382589 +0000 UTC m=+233.064709701) (total time: 712.2357ms):
Trace[1788503442]: [695.883ms] [694.1132ms] About to write a response
I0902 17:08:17.403372 1 trace.go:116] Trace[1960108080]: "List etcd3" key:/masterleases/,resourceVersion:0,limit:0,continue: (started: 2020-09-02 17:08:16.859146 +0000 UTC m=+234.785602101) (total time: 534.4428ms):
Trace[1960108080]: [534.4428ms] [534.4428ms] END
I0902 17:08:17.835295 1 trace.go:116] Trace[436362153]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:172.17.0.3 (started: 2020-09-02 17:08:14.5868834 +0000 UTC m=+232.513332301) (total time: 3.2071856s):
Trace[436362153]: [2.7830334s] [2.7830334s] About to convert to expected version
Trace[436362153]: [3.2044788s] [407.3595ms] Object stored in database
I0902 17:08:18.143649 1 trace.go:116] Trace[1593607177]: "List" url:/apis/apps/v1/namespaces/default/deployments,user-agent:kubectl/v1.16.6 (darwin/amd64) kubernetes/e7f962b,client:172.17.0.1 (started: 2020-09-02 17:08:17.4342229 +0000 UTC m=+235.360679701) (total time: 686.9485ms):
Trace[1593607177]: [405.4213ms] [404.7345ms] Listing from storage done
Trace[1593607177]: [686.9242ms] [281.5029ms] Writing http response done count:1
I0902 17:08:22.109884 1 trace.go:116] Trace[114308184]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2020-09-02 17:08:21.4932525 +0000 UTC m=+239.419700801) (total time: 611.7917ms):
Trace[114308184]: [611.7917ms] [611.7917ms] END
I0902 17:08:22.124070 1 trace.go:116] Trace[1415265799]: "List" url:/apis/batch/v1/jobs,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/system:serviceaccount:kube-system:cronjob-controller,client:172.17.0.3 (started: 2020-09-02 17:08:21.4336745 +0000 UTC m=+239.360151301) (total time: 689.4284ms):
Trace[1415265799]: [676.4684ms] [632.9205ms] Listing from storage done
I0902 17:08:23.443330 1 trace.go:116] Trace[2143159677]: "List etcd3" key:/cronjobs,resourceVersion:,limit:500,continue: (started: 2020-09-02 17:08:22.4693239 +0000 UTC m=+240.395772401) (total time: 964.2453ms):
Trace[2143159677]: [964.2453ms] [964.2453ms] END
I0902 17:08:23.454088 1 trace.go:116] Trace[984927100]: "List" url:/apis/batch/v1beta1/cronjobs,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/system:serviceaccount:kube-system:cronjob-controller,client:172.17.0.3 (started: 2020-09-02 17:08:22.4575961 +0000 UTC m=+240.384046001) (total time: 995.7432ms):
Trace[984927100]: [986.3251ms] [978.7079ms] Listing from storage done
I0902 17:08:24.402191 1 log.go:172] http: TLS handshake error from 172.17.0.3:39784: read tcp 172.17.0.3:8443->172.17.0.3:39784: read: connection reset by peer
I0902 17:08:25.430344 1 trace.go:116] Trace[1786737193]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:172.17.0.3 (started: 2020-09-02 17:08:20.1529446 +0000 UTC m=+238.079394301) (total time: 5.2695005s):
Trace[1786737193]: [4.3456661s] [4.3456661s] About to convert to expected version
Trace[1786737193]: [5.2669696s] [909.174ms] Object stored in database
I0902 17:08:27.743506 1 trace.go:116] Trace[2032440271]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:172.17.0.3 (started: 2020-09-02 17:08:26.41334 +0000 UTC m=+244.339789801) (total time: 1.3258286s):
Trace[2032440271]: [876.9795ms] [876.9795ms] About to convert to expected version
Trace[2032440271]: [1.3258286s] [442.206ms] END
I0902 17:08:28.513981 1 trace.go:116] Trace[1198043898]: "Patch" url:/api/v1/namespaces/kube-system/events/coredns-66bff467f8-9496b.1631067ead350b78,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:172.17.0.3 (started: 2020-09-02 17:08:27.1293055 +0000 UTC m=+245.055754801) (total time: 1.3521128s):
Trace[1198043898]: [997.1546ms] [997.1546ms] Recorded the audit event
Trace[1198043898]: [1.3363593s] [243.1657ms] Object stored in database
I0902 17:15:25.285940 1 trace.go:116] Trace[1734281338]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-09-02 17:15:24.699066 +0000 UTC m=+663.133348401) (total time: 583.8024ms):
Trace[1734281338]: [327.3573ms] [327.3573ms] initial value restored
Trace[1734281338]: [495.7676ms] [168.4103ms] Transaction prepared
W0902 17:25:12.866477 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
W0902 17:43:06.521509 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
W0902 17:49:41.761985 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted

==> kube-controller-manager [34c06cf7fdee] <==
I0902 17:04:33.739975 1 controllermanager.go:533] Started "endpoint"
I0902 17:04:33.740093 1 endpoints_controller.go:182] Starting endpoint controller
I0902 17:04:33.740534 1 shared_informer.go:223] Waiting for caches to sync for endpoint
I0902 17:04:33.890320 1 controllermanager.go:533] Started "serviceaccount"
I0902 17:04:33.890456 1 serviceaccounts_controller.go:117] Starting service account controller
I0902 17:04:33.891529 1 shared_informer.go:223] Waiting for caches to sync for service account
I0902 17:04:34.706278 1 garbagecollector.go:133] Starting garbage collector controller
I0902 17:04:34.706337 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0902 17:04:34.706378 1 graph_builder.go:282] GraphBuilder running
I0902 17:04:34.708269 1 controllermanager.go:533] Started "garbagecollector"
E0902 17:04:34.730096 1 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0902 17:04:34.730151 1 controllermanager.go:525] Skipping "service"
I0902 17:04:34.745015 1 controllermanager.go:533] Started "replicaset"
W0902 17:04:34.745152 1 controllermanager.go:525] Skipping "nodeipam"
I0902 17:04:34.745181 1 core.go:239] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0902 17:04:34.745319 1 controllermanager.go:525] Skipping "route"
I0902 17:04:34.746032 1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0902 17:04:34.753552 1 replica_set.go:181] Starting replicaset controller
I0902 17:04:34.766586 1 shared_informer.go:223] Waiting for caches to sync for ReplicaSet
W0902 17:04:34.787675 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0902 17:04:34.797296 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I0902 17:04:34.807640 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
I0902 17:04:34.839133 1 shared_informer.go:230] Caches are synced for job
I0902 17:04:34.839624 1 shared_informer.go:230] Caches are synced for persistent volume
I0902 17:04:34.841177 1 shared_informer.go:230] Caches are synced for HPA
I0902 17:04:34.843552 1 shared_informer.go:230] Caches are synced for taint
I0902 17:04:34.844105 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0902 17:04:34.846417 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"c9429fda-8aaf-4fd7-85d8-24170f5079b8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0902 17:04:34.846700 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
I0902 17:04:34.847235 1 shared_informer.go:230] Caches are synced for bootstrap_signer
I0902 17:04:34.848151 1 shared_informer.go:230] Caches are synced for ReplicationController
W0902 17:04:34.849526 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0902 17:04:34.850034 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal.
I0902 17:04:34.862092 1 shared_informer.go:230] Caches are synced for TTL
I0902 17:04:34.863165 1 shared_informer.go:230] Caches are synced for attach detach
I0902 17:04:34.865545 1 shared_informer.go:230] Caches are synced for PVC protection
I0902 17:04:34.874743 1 shared_informer.go:230] Caches are synced for expand
I0902 17:04:34.889703 1 shared_informer.go:230] Caches are synced for PV protection
I0902 17:04:34.891541 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
I0902 17:04:34.894796 1 shared_informer.go:230] Caches are synced for GC
I0902 17:04:35.087371 1 shared_informer.go:230] Caches are synced for endpoint_slice
I0902 17:04:35.141654 1 shared_informer.go:230] Caches are synced for endpoint
I0902 17:04:35.218743 1 shared_informer.go:230] Caches are synced for daemon sets
I0902 17:04:35.222015 1 shared_informer.go:230] Caches are synced for stateful set
I0902 17:04:35.244716 1 shared_informer.go:230] Caches are synced for deployment
I0902 17:04:35.264339 1 shared_informer.go:230] Caches are synced for disruption
I0902 17:04:35.264395 1 disruption.go:339] Sending events to api server.
I0902 17:04:35.267729 1 shared_informer.go:230] Caches are synced for ReplicaSet
I0902 17:04:35.336563 1 shared_informer.go:230] Caches are synced for resource quota
I0902 17:04:35.346452 1 shared_informer.go:230] Caches are synced for resource quota
I0902 17:04:35.385560 1 shared_informer.go:230] Caches are synced for namespace
I0902 17:04:35.391855 1 shared_informer.go:230] Caches are synced for service account
I0902 17:04:35.406876 1 shared_informer.go:230] Caches are synced for garbage collector
I0902 17:04:35.406975 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0902 17:04:36.192080 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0902 17:04:36.192141 1 shared_informer.go:230] Caches are synced for garbage collector
I0902 17:07:13.591999 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"ci-cd-deployment", UID:"ea14201c-5c21-42a2-a58a-96c7629eb662", APIVersion:"apps/v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ci-cd-deployment-9bc4bffc9 to 1
I0902 17:07:13.619425 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"ci-cd-deployment-9bc4bffc9", UID:"72f575c8-0b10-4208-8d06-05e92c38f5dd", APIVersion:"apps/v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ci-cd-deployment-9bc4bffc9-8tklv
I0902 17:11:36.866650 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"ci-cd-deployment", UID:"37875e1b-1ae0-449f-9eb2-8a0926acb854", APIVersion:"apps/v1", ResourceVersion:"840", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ci-cd-deployment-9bc4bffc9 to 1
I0902 17:11:36.912141 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"ci-cd-deployment-9bc4bffc9", UID:"d989e438-d738-44cf-8f8c-e705e8245d2e", APIVersion:"apps/v1", ResourceVersion:"841", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ci-cd-deployment-9bc4bffc9-tlpz8

==> kube-controller-manager [c80666a5197d] <==
I0902 17:04:05.278827 1 serving.go:313] Generated self-signed cert in-memory
I0902 17:04:05.998506 1 controllermanager.go:161] Version: v1.18.3
I0902 17:04:06.001020 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0902 17:04:06.001295 1 secure_serving.go:178] Serving securely on 127.0.0.1:10257
I0902 17:04:06.001790 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0902 17:04:06.001824 1 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0902 17:04:06.002021 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
W0902 17:04:06.007276 1 controllermanager.go:612] fetch api resource lists failed, use legacy client builder: Get https://control-plane.minikube.internal:8443/api/v1?timeout=32s: dial tcp 172.17.0.3:8443: connect: connection refused

==> kube-proxy [0312798b5934] <==
W0902 17:04:11.129585 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
E0902 17:04:11.167892 1 node.go:125] Failed to retrieve node info: Get https://control-plane.minikube.internal:8443/api/v1/nodes/minikube: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:12.334541 1 node.go:125] Failed to retrieve node info: Get https://control-plane.minikube.internal:8443/api/v1/nodes/minikube: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:14.350249 1 node.go:125] Failed to retrieve node info: Get https://control-plane.minikube.internal:8443/api/v1/nodes/minikube: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:18.489357 1 node.go:125] Failed to retrieve node info: Get https://control-plane.minikube.internal:8443/api/v1/nodes/minikube: dial tcp 172.17.0.3:8443: connect: connection refused
I0902 17:04:28.073899 1 node.go:136] Successfully retrieved node IP: 172.17.0.3
I0902 17:04:28.074012 1 server_others.go:186] Using iptables Proxier.
W0902 17:04:28.074797 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0902 17:04:28.075218 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I0902 17:04:28.078540 1 server.go:583] Version: v1.18.3
I0902 17:04:28.082333 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0902 17:04:28.084738 1 config.go:315] Starting service config controller
I0902 17:04:28.085683 1 config.go:133] Starting endpoints config controller
I0902 17:04:28.086823 1 shared_informer.go:223] Waiting for caches to sync for service config
I0902 17:04:28.086823 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0902 17:04:28.187535 1 shared_informer.go:230] Caches are synced for endpoints config
I0902 17:04:28.187614 1 shared_informer.go:230] Caches are synced for service config

==> kube-proxy [80a740a16f79] <==
W0902 17:03:58.661347 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy

==> kube-scheduler [a6c2e21439dd] <==
I0902 17:03:56.393207 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0902 17:03:56.393379 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0902 17:03:56.669328 1 serving.go:313] Generated self-signed cert in-memory
W0902 17:03:57.267564 1 authentication.go:297] Error looking up in-cluster authentication configuration: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.17.0.3:8443: connect: connection refused
W0902 17:03:57.267626 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0902 17:03:57.267647 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0902 17:03:57.281032 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0902 17:03:57.281086 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0902 17:03:57.285608 1 authorization.go:47] Authorization is disabled
W0902 17:03:57.285643 1 authentication.go:40] Authentication is disabled
I0902 17:03:57.285674 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0902 17:03:57.289496 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0902 17:03:57.289603 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0902 17:03:57.290952 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0902 17:03:57.291036 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0902 17:03:57.294616 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:03:57.294820 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:03:57.294622 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:03:57.295356 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:03:57.295561 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:03:57.296022 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:03:57.296306 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:03:57.296762 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:03:57.297361 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused

==> kube-scheduler [abab37169625] <==
I0902 17:04:11.164939 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0902 17:04:11.165177 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0902 17:04:11.894829 1 serving.go:313] Generated self-signed cert in-memory
W0902 17:04:12.418680 1 authentication.go:297] Error looking up in-cluster authentication configuration: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.17.0.3:8443: connect: connection refused
W0902 17:04:12.418819 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0902 17:04:12.430285 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0902 17:04:12.445426 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0902 17:04:12.445508 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0902 17:04:12.449183 1 authorization.go:47] Authorization is disabled
W0902 17:04:12.449214 1 authentication.go:40] Authentication is disabled
I0902 17:04:12.449237 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0902 17:04:12.451251 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0902 17:04:12.451341 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0902 17:04:12.452470 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0902 17:04:12.453864 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0902 17:04:12.454187 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:12.454452 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:12.454538 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:12.455064 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:12.455541 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:12.455858 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:12.456552 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:12.456552 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:12.457147 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:13.337543 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:13.498276 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:13.510671 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:13.647816 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:13.668281 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:13.679553 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:13.747732 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:14.043439 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:14.053569 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:15.280730 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:15.706622 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:15.747883 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:15.941157 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:16.239592 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:16.259261 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:16.555760 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:16.590003 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:17.006743 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:19.306032 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused
E0902 17:04:19.314516 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused E0902 17:04:19.718958 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused E0902 17:04:20.239406 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused E0902 17:04:20.532016 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused E0902 17:04:20.658102 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused E0902 17:04:21.285798 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused E0902 17:04:21.848806 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused E0902 17:04:22.369279 1 reflector.go:178] 
k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 172.17.0.3:8443: connect: connection refused E0902 17:04:27.897802 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0902 17:04:27.898090 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope I0902 17:04:33.753015 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0902 17:08:25.547612 1 log.go:172] http: TLS handshake error from 127.0.0.1:53224: EOF ==> kubelet <== -- Logs begin at Wed 2020-09-02 17:02:23 UTC, end at Wed 2020-09-02 17:55:10 UTC. 
-- Sep 02 17:04:28 minikube kubelet[5467]: I0902 17:04:28.013999 5467 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9735ed8f-4919-4234-afe6-3e4feeabce29-config-volume") pod "coredns-66bff467f8-9496b" (UID: "9735ed8f-4919-4234-afe6-3e4feeabce29") Sep 02 17:04:28 minikube kubelet[5467]: I0902 17:04:28.014049 5467 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-cq6vn" (UniqueName: "kubernetes.io/secret/9735ed8f-4919-4234-afe6-3e4feeabce29-coredns-token-cq6vn") pod "coredns-66bff467f8-9496b" (UID: "9735ed8f-4919-4234-afe6-3e4feeabce29") Sep 02 17:04:28 minikube kubelet[5467]: I0902 17:04:28.014106 5467 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-vczc4" (UniqueName: "kubernetes.io/secret/5bf96697-d684-4732-93b9-f6a9a4e7c2e1-storage-provisioner-token-vczc4") pod "storage-provisioner" (UID: "5bf96697-d684-4732-93b9-f6a9a4e7c2e1") Sep 02 17:04:28 minikube kubelet[5467]: I0902 17:04:28.014153 5467 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/7257247f-6486-4751-864e-41e96e77f17e-xtables-lock") pod "kube-proxy-dgxwp" (UID: "7257247f-6486-4751-864e-41e96e77f17e") Sep 02 17:04:28 minikube kubelet[5467]: I0902 17:04:28.014193 5467 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-g8ff6" (UniqueName: "kubernetes.io/secret/7257247f-6486-4751-864e-41e96e77f17e-kube-proxy-token-g8ff6") pod "kube-proxy-dgxwp" (UID: "7257247f-6486-4751-864e-41e96e77f17e") Sep 02 17:04:28 minikube kubelet[5467]: I0902 17:04:28.014234 5467 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/5bf96697-d684-4732-93b9-f6a9a4e7c2e1-tmp") pod "storage-provisioner" (UID: 
"5bf96697-d684-4732-93b9-f6a9a4e7c2e1") Sep 02 17:04:28 minikube kubelet[5467]: I0902 17:04:28.014274 5467 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/7257247f-6486-4751-864e-41e96e77f17e-kube-proxy") pod "kube-proxy-dgxwp" (UID: "7257247f-6486-4751-864e-41e96e77f17e") Sep 02 17:04:28 minikube kubelet[5467]: I0902 17:04:28.014297 5467 reconciler.go:157] Reconciler: start to sync state Sep 02 17:04:28 minikube kubelet[5467]: I0902 17:04:28.146723 5467 kubelet_node_status.go:112] Node minikube was previously registered Sep 02 17:04:28 minikube kubelet[5467]: I0902 17:04:28.146981 5467 kubelet_node_status.go:73] Successfully registered node minikube Sep 02 17:04:28 minikube kubelet[5467]: W0902 17:04:28.966079 5467 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-9496b through plugin: invalid network status for Sep 02 17:04:28 minikube kubelet[5467]: W0902 17:04:28.988337 5467 pod_container_deletor.go:77] Container "6ac2c751f67d220499522c78ffab220ffd65da82919af515ccbaa8bec3f557c5" not found in pod's containers Sep 02 17:04:29 minikube kubelet[5467]: E0902 17:04:29.164560 5467 secret.go:195] Couldn't get secret kube-system/storage-provisioner-token-vczc4: failed to sync secret cache: timed out waiting for the condition Sep 02 17:04:29 minikube kubelet[5467]: E0902 17:04:29.167608 5467 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/5bf96697-d684-4732-93b9-f6a9a4e7c2e1-storage-provisioner-token-vczc4 podName:5bf96697-d684-4732-93b9-f6a9a4e7c2e1 nodeName:}" failed. No retries permitted until 2020-09-02 17:04:29.6663842 +0000 UTC m=+11.430285801 (durationBeforeRetry 500ms). 
Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-vczc4\" (UniqueName: \"kubernetes.io/secret/5bf96697-d684-4732-93b9-f6a9a4e7c2e1-storage-provisioner-token-vczc4\") pod \"storage-provisioner\" (UID: \"5bf96697-d684-4732-93b9-f6a9a4e7c2e1\") : failed to sync secret cache: timed out waiting for the condition" Sep 02 17:04:30 minikube kubelet[5467]: W0902 17:04:30.019813 5467 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-9496b through plugin: invalid network status for Sep 02 17:04:30 minikube kubelet[5467]: I0902 17:04:30.071207 5467 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 45a557b038dd680dfd1a6379551d0562139f28ccf05d0aa039db26cd962dbd88 Sep 02 17:07:13 minikube kubelet[5467]: I0902 17:07:13.728965 5467 topology_manager.go:233] [topologymanager] Topology Admit Handler Sep 02 17:07:13 minikube kubelet[5467]: I0902 17:07:13.921659 5467 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-4xqjm" (UniqueName: "kubernetes.io/secret/1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6-default-token-4xqjm") pod "ci-cd-deployment-9bc4bffc9-8tklv" (UID: "1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6") Sep 02 17:07:20 minikube kubelet[5467]: W0902 17:07:20.186396 5467 pod_container_deletor.go:77] Container "c69fcb715dbb929ff8ae1871bef2d7accc4a3b58a369487d0b8be06aa26a70dd" not found in pod's containers Sep 02 17:07:20 minikube kubelet[5467]: W0902 17:07:20.196325 5467 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/ci-cd-deployment-9bc4bffc9-8tklv through plugin: invalid network status for Sep 02 17:07:20 minikube kubelet[5467]: E0902 17:07:20.256112 5467 remote_image.go:113] PullImage "nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get 
http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials Sep 02 17:07:20 minikube kubelet[5467]: E0902 17:07:20.256361 5467 kuberuntime_image.go:50] Pull image "nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT" failed: rpc error: code = Unknown desc = Error response from daemon: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials Sep 02 17:07:20 minikube kubelet[5467]: E0902 17:07:20.256889 5467 kuberuntime_manager.go:801] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials Sep 02 17:07:20 minikube kubelet[5467]: E0902 17:07:20.257706 5467 pod_workers.go:191] Error syncing pod 1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6 ("ci-cd-deployment-9bc4bffc9-8tklv_default(1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6)"), skipping: failed to "StartContainer" for "ci-cd" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials" Sep 02 17:07:21 minikube kubelet[5467]: W0902 17:07:21.207031 5467 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/ci-cd-deployment-9bc4bffc9-8tklv through plugin: invalid network status for Sep 02 17:07:21 minikube kubelet[5467]: E0902 17:07:21.216143 5467 pod_workers.go:191] Error syncing pod 1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6 ("ci-cd-deployment-9bc4bffc9-8tklv_default(1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6)"), skipping: failed to "StartContainer" for "ci-cd" with ImagePullBackOff: "Back-off pulling image \"nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT\"" Sep 02 17:07:22 minikube kubelet[5467]: E0902 17:07:22.235709 5467 pod_workers.go:191] Error syncing pod 1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6 ("ci-cd-deployment-9bc4bffc9-8tklv_default(1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6)"), skipping: 
failed to "StartContainer" for "ci-cd" with ImagePullBackOff: "Back-off pulling image \"nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT\"" Sep 02 17:07:37 minikube kubelet[5467]: E0902 17:07:37.039405 5467 remote_image.go:113] PullImage "nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials Sep 02 17:07:37 minikube kubelet[5467]: E0902 17:07:37.040300 5467 kuberuntime_image.go:50] Pull image "nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT" failed: rpc error: code = Unknown desc = Error response from daemon: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials Sep 02 17:07:37 minikube kubelet[5467]: E0902 17:07:37.040718 5467 kuberuntime_manager.go:801] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials Sep 02 17:07:37 minikube kubelet[5467]: E0902 17:07:37.040968 5467 pod_workers.go:191] Error syncing pod 1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6 ("ci-cd-deployment-9bc4bffc9-8tklv_default(1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6)"), skipping: failed to "StartContainer" for "ci-cd" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials" Sep 02 17:07:49 minikube kubelet[5467]: E0902 17:07:49.947018 5467 pod_workers.go:191] Error syncing pod 1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6 ("ci-cd-deployment-9bc4bffc9-8tklv_default(1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6)"), skipping: failed to "StartContainer" for "ci-cd" with ImagePullBackOff: "Back-off pulling image \"nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT\"" Sep 02 17:08:25 minikube kubelet[5467]: E0902 17:08:25.560366 5467 controller.go:178] failed to update node lease, 
error: Put https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers) Sep 02 17:08:27 minikube kubelet[5467]: I0902 17:08:27.053224 5467 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 45a557b038dd680dfd1a6379551d0562139f28ccf05d0aa039db26cd962dbd88 Sep 02 17:08:29 minikube kubelet[5467]: E0902 17:08:29.019668 5467 controller.go:178] failed to update node lease, error: Operation cannot be fulfilled on leases.coordination.k8s.io "minikube": the object has been modified; please apply your changes to the latest version and try again Sep 02 17:08:30 minikube kubelet[5467]: I0902 17:08:30.404429 5467 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6959df28cb96cc14e20023026a6058f832b12460441e88726a6dc4d77739946d Sep 02 17:08:30 minikube kubelet[5467]: I0902 17:08:30.406261 5467 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 45a557b038dd680dfd1a6379551d0562139f28ccf05d0aa039db26cd962dbd88 Sep 02 17:08:30 minikube kubelet[5467]: E0902 17:08:30.460323 5467 remote_runtime.go:295] ContainerStatus "45a557b038dd680dfd1a6379551d0562139f28ccf05d0aa039db26cd962dbd88" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 45a557b038dd680dfd1a6379551d0562139f28ccf05d0aa039db26cd962dbd88 Sep 02 17:08:30 minikube kubelet[5467]: E0902 17:08:30.669241 5467 remote_image.go:113] PullImage "nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials Sep 02 17:08:30 minikube kubelet[5467]: E0902 17:08:30.671215 5467 kuberuntime_image.go:50] Pull image "nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT" failed: rpc error: code = Unknown desc = Error response from daemon: Get 
http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials Sep 02 17:08:30 minikube kubelet[5467]: E0902 17:08:30.672936 5467 kuberuntime_manager.go:801] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials Sep 02 17:08:30 minikube kubelet[5467]: E0902 17:08:30.674156 5467 pod_workers.go:191] Error syncing pod 1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6 ("ci-cd-deployment-9bc4bffc9-8tklv_default(1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6)"), skipping: failed to "StartContainer" for "ci-cd" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials" Sep 02 17:08:41 minikube kubelet[5467]: E0902 17:08:41.893064 5467 pod_workers.go:191] Error syncing pod 1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6 ("ci-cd-deployment-9bc4bffc9-8tklv_default(1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6)"), skipping: failed to "StartContainer" for "ci-cd" with ImagePullBackOff: "Back-off pulling image \"nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT\"" Sep 02 17:08:53 minikube kubelet[5467]: E0902 17:08:53.947492 5467 pod_workers.go:191] Error syncing pod 1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6 ("ci-cd-deployment-9bc4bffc9-8tklv_default(1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6)"), skipping: failed to "StartContainer" for "ci-cd" with ImagePullBackOff: "Back-off pulling image \"nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT\"" Sep 02 17:09:07 minikube kubelet[5467]: E0902 17:09:07.964580 5467 pod_workers.go:191] Error syncing pod 1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6 ("ci-cd-deployment-9bc4bffc9-8tklv_default(1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6)"), skipping: failed to "StartContainer" for "ci-cd" with ImagePullBackOff: "Back-off pulling image \"nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT\"" Sep 02 17:09:19 minikube kubelet[5467]: E0902 
17:09:19.917581 5467 remote_image.go:113] PullImage "nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials Sep 02 17:09:19 minikube kubelet[5467]: E0902 17:09:19.917691 5467 kuberuntime_image.go:50] Pull image "nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT" failed: rpc error: code = Unknown desc = Error response from daemon: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials Sep 02 17:09:19 minikube kubelet[5467]: E0902 17:09:19.918023 5467 kuberuntime_manager.go:801] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials Sep 02 17:09:19 minikube kubelet[5467]: E0902 17:09:19.919084 5467 pod_workers.go:191] Error syncing pod 1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6 ("ci-cd-deployment-9bc4bffc9-8tklv_default(1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6)"), skipping: failed to "StartContainer" for "ci-cd" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get http://nexus3:8083/v2/com.panov/ci-cd/manifests/0.0.1-SNAPSHOT: no basic auth credentials" Sep 02 17:09:34 minikube kubelet[5467]: E0902 17:09:34.843101 5467 pod_workers.go:191] Error syncing pod 1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6 ("ci-cd-deployment-9bc4bffc9-8tklv_default(1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6)"), skipping: failed to "StartContainer" for "ci-cd" with ImagePullBackOff: "Back-off pulling image \"nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT\"" Sep 02 17:09:46 minikube kubelet[5467]: E0902 17:09:46.798481 5467 pod_workers.go:191] Error syncing pod 1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6 ("ci-cd-deployment-9bc4bffc9-8tklv_default(1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6)"), skipping: failed to "StartContainer" for "ci-cd" with ImagePullBackOff: 
"Back-off pulling image \"nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT\"" Sep 02 17:10:00 minikube kubelet[5467]: E0902 17:10:00.797876 5467 pod_workers.go:191] Error syncing pod 1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6 ("ci-cd-deployment-9bc4bffc9-8tklv_default(1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6)"), skipping: failed to "StartContainer" for "ci-cd" with ImagePullBackOff: "Back-off pulling image \"nexus3:8083/com.panov/ci-cd:0.0.1-SNAPSHOT\"" Sep 02 17:10:09 minikube kubelet[5467]: I0902 17:10:09.720818 5467 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-4xqjm" (UniqueName: "kubernetes.io/secret/1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6-default-token-4xqjm") pod "1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6" (UID: "1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6") Sep 02 17:10:09 minikube kubelet[5467]: I0902 17:10:09.750707 5467 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6-default-token-4xqjm" (OuterVolumeSpecName: "default-token-4xqjm") pod "1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6" (UID: "1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6"). InnerVolumeSpecName "default-token-4xqjm". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 02 17:10:09 minikube kubelet[5467]: I0902 17:10:09.821346 5467 reconciler.go:319] Volume detached for volume "default-token-4xqjm" (UniqueName: "kubernetes.io/secret/1c96a03e-aee3-4b3a-b151-e3e7e09ca3f6-default-token-4xqjm") on node "minikube" DevicePath "" Sep 02 17:11:37 minikube kubelet[5467]: I0902 17:11:37.006655 5467 topology_manager.go:233] [topologymanager] Topology Admit Handler Sep 02 17:11:37 minikube kubelet[5467]: I0902 17:11:37.080509 5467 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-4xqjm" (UniqueName: "kubernetes.io/secret/44ebef42-274a-4143-8548-a7e3df864719-default-token-4xqjm") pod "ci-cd-deployment-9bc4bffc9-tlpz8" (UID: "44ebef42-274a-4143-8548-a7e3df864719") Sep 02 17:11:38 minikube kubelet[5467]: W0902 17:11:38.229161 5467 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/ci-cd-deployment-9bc4bffc9-tlpz8 through plugin: invalid network status for Sep 02 17:11:38 minikube kubelet[5467]: W0902 17:11:38.239697 5467 pod_container_deletor.go:77] Container "ad3f9329ccc67ffe77540bba5756dcf7bb35df6c5a9ad7c9fee0abf64f91a6e1" not found in pod's containers Sep 02 17:11:39 minikube kubelet[5467]: W0902 17:11:39.270744 5467 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/ci-cd-deployment-9bc4bffc9-tlpz8 through plugin: invalid network status for ==> storage-provisioner [332f8cce1a9a] <== I0902 17:08:31.536092 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... 
I0902 17:08:48.999252 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0902 17:08:48.999174 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"492cc539-cd22-471d-8e95-4888aa446a8a", APIVersion:"v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_94656b2a-634f-479c-83ac-6fe4d67c69f2 became leader I0902 17:08:49.003656 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_94656b2a-634f-479c-83ac-6fe4d67c69f2! I0902 17:08:49.107510 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_94656b2a-634f-479c-83ac-6fe4d67c69f2! ==> storage-provisioner [6959df28cb96] <== I0902 17:04:31.275652 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0902 17:04:48.872098 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0902 17:04:48.878118 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_bc24e99b-5375-4824-8411-539520eb4d60! I0902 17:04:48.879781 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"492cc539-cd22-471d-8e95-4888aa446a8a", APIVersion:"v1", ResourceVersion:"505", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_bc24e99b-5375-4824-8411-539520eb4d60 became leader I0902 17:04:48.982843 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_bc24e99b-5375-4824-8411-539520eb4d60! 
I0902 17:08:05.018381 1 leaderelection.go:288] failed to renew lease kube-system/k8s.io-minikube-hostpath: failed to tryAcquireOrRenew context deadline exceeded I0902 17:08:05.253110 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"492cc539-cd22-471d-8e95-4888aa446a8a", APIVersion:"v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_bc24e99b-5375-4824-8411-539520eb4d60 stopped leading F0902 17:08:05.357664 1 controller.go:877] leaderelection lost
RA489 commented 4 years ago

/triage support

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

AndrewPanov commented 3 years ago

/remove-lifecycle stale

priyawadhwa commented 3 years ago

Hey @AndrewPanov, apologies for the delayed response here. Are you still seeing this issue with the most recent version of minikube, v1.17.0?

AndrewPanov commented 3 years ago

@priyawadhwa I can try to reproduce it with the most recent version and give you feedback; as of now I don't know.

AndrewPanov commented 3 years ago

@priyawadhwa I have managed to reproduce the same bug and behavior on minikube v1.17.1.

Karmavil commented 3 years ago

@AndrewPanov I'm trying to improve the description of an issue I opened yesterday (I'm reading the documentation right now). I'm not sure whether it's related to your problem, but it might be. You're getting this particular message: no basic auth credentials. In my limited experience that leaves two options: either your credentials are wrong, or your repository address is.

Let me know if it's related so I can link or mention this issue. Thanks.

priyawadhwa commented 3 years ago

Hey @AndrewPanov, have you tried using our registry-creds addon? Documentation for it can be found here and might help fix your auth failure: https://minikube.sigs.k8s.io/docs/handbook/registry/#using-a-private-registry

Karmavil commented 3 years ago

@AndrewPanov Sorry, I'm NOT trying to say that you made a mistake with your credentials; I'm trying to say that the auth itself may be the problem. Let me explain:

@priyawadhwa is there a standard for repository endpoints, so to speak? Because as you can see in the k8s docs, username, password, email, auth: they're all there, and it's not clear which params should be used. As you can see in my last reply, I've been having some trouble with the k8s documentation while pulling from a private Docker Hub repo.

I followed your suggestion, and this is my conclusion: minikube saves my .dockerconfigjson with a structure similar to what Andrew described in this issue (apparently he's able to log in with that structure to whatever nexus3 is).

This is the dpr-secret that minikube made for me: {"auths":{"https://index.docker.io/v1/":{"auth":"bXluYW1lOm15cGFzc3dvcmQK","email":"none"}}}. Expanded, with the auth value base64-decoded:

{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth":"myname:mypassword",
            "email":"none"
        }
    }
}
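A quick way to check what an auth entry really contains is to round-trip it through base64. One detail worth knowing: piping through plain `echo` appends a newline, which ends up inside the encoded credentials (the trailing `K` in `bXluYW1lOm15cGFzc3dvcmQK` is an encoded `\n`), while `printf '%s'` emits the bytes exactly. A small sketch using the placeholder credentials above:

```shell
# Encoding "myname:mypassword" with echo sneaks a trailing newline
# into the credentials:
echo 'myname:mypassword' | base64          # bXluYW1lOm15cGFzc3dvcmQK

# printf '%s' emits the bytes exactly, with no trailing newline:
printf '%s' 'myname:mypassword' | base64   # bXluYW1lOm15cGFzc3dvcmQ=

# Decode an existing auth value to inspect what is actually stored:
printf '%s' 'bXluYW1lOm15cGFzc3dvcmQK' | base64 -d | od -c
```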

That doesn't work for Docker Hub. And yes, I told my deployments to use imagePullSecrets. (BTW, the minikube documentation should mention whether that's necessary, and how to delete or override a previous configuration, because I tried re-configuring and resetting, choosing n for all options, and the previous configuration was still there; I had to disable the addon.)

However, if I simply create a Secret, set imagePullSecrets, and use the following structure, it works perfectly fine:

{
    "auths": {
        "https://index.docker.io/v1/": {
            "username":"myname",
            "password":"mypassword"
        }
    }
}
data:
    .dockerconfigjson: e2F1dGhzOiB7aHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvOiB1c2VybmFtZTpteW5hbWV9fSBwYXNzd29yZDpteXBhc3N3b3JkfX0K

You get banned after too many failed attempts, so that's pretty much all I can get for you, except for this last example:

{
    "auths": {
        "https://index.docker.io/v1/": {
            "username":"myname",
            "password":"mypassword"
        },
        "email": "none"
    }
}

This is the error: json: cannot unmarshal string into Go struct field DockerConfigJSON.auths of type credentialprovider.dockerConfigEntryWithAuth (likely because the "email": "none" key sits directly under auths, so the decoder tries to parse the string "none" as if it were a registry entry). I was curious, and that's why I asked about standard procedures for logging in with a token vs. username/password vs. email/password. If they have a type credentialprovider.dockerConfigEntryWithAuth, why can't I log in using auth? Anyway, I hope this information, not strictly related to this issue, helps someone dealing with these problems.

Karmavil commented 3 years ago

EDIT: here I am again with a new edit. As I said before in this reply, the format below works (userID:token or userID:password). I fixed this answer previously because I thought it had worked by mistake, but it turns out I made the mistake myself: I forgot to include the imagePullSecrets.

# config.json
{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth":"bXluYW1lOm15cGFzc3dvcmQK"
        }
    }
}
# secret
apiVersion: v1
data:
  .dockerconfigjson: ewogICAgImF1dGhzIjogewogICAgICAgICJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOiB7CiAgICAgICAgICAgICJhdXRoIjoiYlhsdVlXMWxPbTE1Y0dGemMzZHZjbVFLIgogICAgICAgIH0KICAgIH0KfQ==
kind: Secret
metadata:
  name: my-credential
  namespace: default
type: kubernetes.io/dockerconfigjson

# your pod, deployment, or statefulset
...
  containers:
    - image: dos:latest
      name: dbg
      ...
  imagePullSecrets:
    - name: my-credential
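The manual encoding above can also be scripted end-to-end. This is a minimal sketch, assuming the placeholder credentials myname/mypassword and the standard base64 tool: it builds the config.json document, then base64-encodes the whole thing to produce the value that goes under `data: .dockerconfigjson` in the Secret:

```shell
# 1. Encode "user:password" for the inner "auth" field (printf avoids a
#    stray trailing newline ending up inside the credentials):
auth=$(printf '%s:%s' myname mypassword | base64)

# 2. Assemble the config.json document around it:
config=$(printf '{"auths":{"https://index.docker.io/v1/":{"auth":"%s"}}}' "$auth")

# 3. Base64-encode the whole document; tr strips the line wrapping some
#    base64 implementations add, since the Secret value must be one line:
printf '%s' "$config" | base64 | tr -d '\n'
```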
AndrewPanov commented 3 years ago

@priyawadhwa OK, I'll try that and see what happens. @Karmavil tl;dr sorry: 1) everything is OK with the creds and the address; 2) all creds are stored as in step 13 (auths -> url -> auth -> base64-encoded); 3) maybe imagePullSecrets could help somehow, I didn't check it.

Karmavil commented 3 years ago

I know, sorry about that. It was my fault from the moment I posted code that no one asked for on an issue I didn't open. But I have issues downloading from a private repository like Docker Hub with minikube, as priyawadhwa suggested, and I don't want to duplicate an issue that I can easily work around. I don't need that functionality, to be honest, but it's really useful for keeping your credentials out of repositories. I'll keep an eye on this issue (as a user, of course). Feedback is welcome.