kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Gcp-auth addon seems to be overriding manually specified `GOOGLE_APPLICATION_CREDENTIALS` env var #11563

Closed: matthewmichihara closed this issue 3 years ago

matthewmichihara commented 3 years ago

Context: https://github.com/GoogleCloudPlatform/cloud-code-intellij/issues/2940

minikube 1.20.0
gcloud 343.0.0
Skaffold 1.25.0
macOS 11.4

Steps to reproduce the issue:

  1. Set up a basic hello world Cloud Run project that accesses a Secret Manager secret as described in https://github.com/GoogleCloudPlatform/cloud-code-intellij/issues/2940. Instead of using Cloud Code, you can just clone https://github.com/GoogleCloudPlatform/cloud-code-samples/tree/master/python/cloud-run-python-hello-world directly and make the same changes to the code to access a secret.
  2. In that project directory, use `gcloud alpha code export --service-account <some service account that has the Secret Manager Secret Accessor role on the secret>` to create Kubernetes manifests (`pods_and_services.yaml`) and a Skaffold configuration (`skaffold.yaml`).

pods_and_services.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: cloud-run-secrets
  name: cloud-run-secrets
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloud-run-secrets
  template:
    metadata:
      labels:
        app: cloud-run-secrets
    spec:
      containers:
      - env:
        - name: PORT
          value: '8080'
        - name: K_CONFIGURATION
          value: dev
        - name: K_REVISION
          value: dev-0001
        - name: K_SERVICE
          value: cloud-run-secrets
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/local_development_credential/local_development_service_account.json
        image: gcr.io/redmond-211121/cloud-run-secrets
        name: cloud-run-secrets-container
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /etc/local_development_credential
          name: local-development-credential
          readOnly: true
      terminationGracePeriodSeconds: 0
      volumes:
      - name: local-development-credential
        secret:
          secretName: local-development-credential
---
apiVersion: v1
kind: Service
metadata:
  name: cloud-run-secrets
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: cloud-run-secrets
  type: LoadBalancer
---
apiVersion: v1
data:
  local_development_service_account.json: ewogIC...
kind: Secret
metadata:
  name: local-development-credential
type: Opaque
```
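
As a sanity check, the following commands confirm that the credential wiring from this manifest is intact inside the running container (the deployment name and mount path are the ones from the YAML above):

```
# Verify the env var and the mounted key file inside the container
kubectl exec deploy/cloud-run-secrets -- printenv GOOGLE_APPLICATION_CREDENTIALS
kubectl exec deploy/cloud-run-secrets -- ls /etc/local_development_credential
```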

skaffold.yaml:

```yaml
apiVersion: skaffold/v2beta5
build:
  artifacts:
  - context: /Users/michihara/Code/cloud-run-secrets
    docker:
      dockerfile: Dockerfile
    image: gcr.io/redmond-211121/cloud-run-secrets
deploy:
  kubectl:
    manifests:
    - pods_and_services.yaml
kind: Config
```
  3. Note that in the generated Kubernetes manifest above, `gcloud code export` sets the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the key file of the passed-in service account.
  4. `minikube start`
  5. `minikube addons enable gcp-auth`
  6. `skaffold dev --port-forward=services` (a quick way to inspect the resulting pod env is shown after this list).
  7. Navigate to http://localhost:8080.
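
With the addon enabled, dumping the env actually applied to the pod should make the override visible; a minimal check, assuming the `app=cloud-run-secrets` label from the manifest above:

```
# Print the effective container env of the deployed pod; the
# GOOGLE_APPLICATION_CREDENTIALS value can be compared against the
# path specified in pods_and_services.yaml
kubectl get pod -l app=cloud-run-secrets \
  -o jsonpath='{.items[0].spec.containers[0].env}'
```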

Expected: A webpage that displays the secret value.

Actual: The webpage shows an error, with the following in the logs:

```
[cloud-run-secrets-container] [2021-06-02 15:49:06,963] ERROR in app: Exception on / [GET]
[cloud-run-secrets-container] Traceback (most recent call last):
[cloud-run-secrets-container]   File "/usr/local/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 67, in error_remapped_callable
[cloud-run-secrets-container]     return callable_(*args, **kwargs)
[cloud-run-secrets-container]   File "/usr/local/lib/python3.8/site-packages/grpc/_channel.py", line 946, in __call__
[cloud-run-secrets-container]     return _end_unary_response_blocking(state, call, False, None)
[cloud-run-secrets-container]   File "/usr/local/lib/python3.8/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
[cloud-run-secrets-container]     raise _InactiveRpcError(state)
[cloud-run-secrets-container] grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
[cloud-run-secrets-container]   status = StatusCode.PERMISSION_DENIED
[cloud-run-secrets-container]   details = "Permission 'secretmanager.versions.access' denied for resource 'projects/redmond-211121/secrets/june-2/versions/1' (or it may not exist)."
[cloud-run-secrets-container]   debug_error_string = "{"created":"@1622648946.962648298","description":"Error received from peer ipv4:172.217.7.10:443","file":"src/core/lib/surface/call.cc","file_line":1066,"grpc_message":"Permission 'secretmanager.versions.access' denied for resource 'projects/redmond-211121/secrets/june-2/versions/1' (or it may not exist).","grpc_status":7}"
[cloud-run-secrets-container] >
[cloud-run-secrets-container]
[cloud-run-secrets-container] The above exception was the direct cause of the following exception:
[cloud-run-secrets-container]
[cloud-run-secrets-container] Traceback (most recent call last):
[cloud-run-secrets-container]   File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2051, in wsgi_app
[cloud-run-secrets-container]     response = self.full_dispatch_request()
[cloud-run-secrets-container]   File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1501, in full_dispatch_request
[cloud-run-secrets-container]     rv = self.handle_user_exception(e)
[cloud-run-secrets-container]   File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1499, in full_dispatch_request
[cloud-run-secrets-container]     rv = self.dispatch_request()
[cloud-run-secrets-container]   File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1485, in dispatch_request
[cloud-run-secrets-container]     return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
[cloud-run-secrets-container]   File "app.py", line 26, in hello
[cloud-run-secrets-container]     response = client.access_secret_version(
[cloud-run-secrets-container]   File "/usr/local/lib/python3.8/site-packages/google/cloud/secretmanager_v1/services/secret_manager_service/client.py", line 1155, in access_secret_version
[cloud-run-secrets-container]     response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
[cloud-run-secrets-container]   File "/usr/local/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
[cloud-run-secrets-container]     return wrapped_func(*args, **kwargs)
[cloud-run-secrets-container]   File "/usr/local/lib/python3.8/site-packages/google/api_core/retry.py", line 285, in retry_wrapped_func
[cloud-run-secrets-container]     return retry_target(
[cloud-run-secrets-container]   File "/usr/local/lib/python3.8/site-packages/google/api_core/retry.py", line 188, in retry_target
[cloud-run-secrets-container]     return target()
[cloud-run-secrets-container]   File "/usr/local/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 69, in error_remapped_callable
[cloud-run-secrets-container]     six.raise_from(exceptions.from_grpc_error(exc), exc)
[cloud-run-secrets-container]   File "<string>", line 3, in raise_from
[cloud-run-secrets-container] google.api_core.exceptions.PermissionDenied: 403 Permission 'secretmanager.versions.access' denied for resource 'projects/redmond-211121/secrets/june-2/versions/1' (or it may not exist).
```

If I disable the gcp-auth addon, everything works as expected. It seems that the gcp-auth addon is overriding the credentials set in the Kubernetes manifest with the credentials from my local machine. If `GOOGLE_APPLICATION_CREDENTIALS` is explicitly specified in the pod spec, I don't think it should be overridden.
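
Concretely, the behavior difference looks like this (the same enable/disable cycle appears in the audit log below):

```
# With the addon enabled, secret access fails with the 403 above
minikube addons enable gcp-auth
skaffold dev --port-forward=services

# With the addon disabled, the credential from the manifest is used
# and the page renders the secret as expected
minikube addons disable gcp-auth
skaffold dev --port-forward=services
```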

Full output of the `minikube logs` command:

```
* 
* ==> Audit <==
* |------------|--------------------------------|----------|-----------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|------------|--------------------------------|----------|-----------|---------|-------------------------------|-------------------------------|
| start | | minikube | michihara | v1.20.0 | Wed, 02 Jun 2021 11:36:42 EDT | Wed, 02 Jun 2021 11:39:10 EDT |
| addons | enable gcp-auth | minikube | michihara | v1.20.0 | Wed, 02 Jun 2021 11:39:33 EDT | Wed, 02 Jun 2021 11:39:54 EDT |
| docker-env | --shell none -p minikube --user=skaffold | minikube | skaffold | v1.20.0 | Wed, 02 Jun 2021 11:40:25 EDT | Wed, 02 Jun 2021 11:40:26 EDT |
| docker-env | --shell none -p minikube --user=skaffold | minikube | skaffold | v1.20.0 | Wed, 02 Jun 2021 11:42:41 EDT | Wed, 02 Jun 2021 11:42:43 EDT |
| addons | list | minikube | michihara | v1.20.0 | Wed, 02 Jun 2021 11:44:27 EDT | Wed, 02 Jun 2021 11:44:28 EDT |
| addons | disable gcp-auth | minikube | michihara | v1.20.0 | Wed, 02 Jun 2021 11:44:32 EDT | Wed, 02 Jun 2021 11:44:39 EDT |
| docker-env | --shell none -p minikube --user=skaffold | minikube | skaffold | v1.20.0 | Wed, 02 Jun 2021 11:44:54 EDT | Wed, 02 Jun 2021 11:44:56 EDT |
| docker-env | --shell none -p minikube --user=skaffold | minikube | skaffold | v1.20.0 | Wed, 02 Jun 2021 11:45:37 EDT | Wed, 02 Jun 2021 11:45:38 EDT |
| addons | enable gcp-auth | minikube | michihara | v1.20.0 | Wed, 02 Jun 2021 11:48:06 EDT | Wed, 02 Jun 2021 11:48:21 EDT |
| docker-env | --shell none -p minikube --user=skaffold | minikube | skaffold | v1.20.0 | Wed, 02 Jun 2021 11:48:55 EDT | Wed, 02 Jun 2021 11:48:56 EDT |
| addons | enable gcp-auth | minikube | michihara | v1.20.0 | Wed, 02 Jun 2021 11:59:08 EDT | Wed, 02 Jun 2021 11:59:19 EDT |
| docker-env | --shell none -p minikube --user=skaffold | minikube | skaffold | v1.20.0 | Wed, 02 Jun 2021 12:00:17 EDT | Wed, 02 Jun 2021 12:00:18 EDT |
| docker-env | --shell none -p minikube --user=skaffold | minikube | skaffold | v1.20.0 | Wed, 02 Jun 2021 12:00:22 EDT | Wed, 02 Jun 2021 12:00:23 EDT |
|------------|--------------------------------|----------|-----------|---------|-------------------------------|-------------------------------|
* 
* ==> Last Start <==
* Log file created at: 2021/06/02 11:36:42
Running on machine: michihara-macbookpro
Binary: Built with gc go1.16.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0602 11:36:42.770810   44922 out.go:291] Setting OutFile to fd 1 ...
I0602 11:36:42.771341   44922 out.go:343] isatty.IsTerminal(1) = true
I0602 11:36:42.771346   44922 out.go:304] Setting ErrFile to fd 2...
I0602 11:36:42.771353 44922 out.go:343] isatty.IsTerminal(2) = true I0602 11:36:42.771518 44922 root.go:316] Updating PATH: /Users/michihara/.minikube/bin W0602 11:36:42.771644 44922 root.go:291] Error reading config file at /Users/michihara/.minikube/config/config.json: open /Users/michihara/.minikube/config/config.json: no such file or directory I0602 11:36:42.772171 44922 out.go:298] Setting JSON to false I0602 11:36:42.844317 44922 start.go:108] hostinfo: {"hostname":"michihara-macbookpro.roam.corp.google.com","uptime":607466,"bootTime":1622040736,"procs":568,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.4","kernelVersion":"20.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"52a1e876-863e-38e3-ac80-09bbab13b752"} W0602 11:36:42.844410 44922 start.go:116] gopshost.Virtualization returned error: not implemented yet I0602 11:36:42.865269 44922 out.go:170] 😄 minikube v1.20.0 on Darwin 11.4 I0602 11:36:42.866141 44922 notify.go:169] Checking for updates... I0602 11:36:42.866628 44922 driver.go:322] Setting default libvirt URI to qemu:///system I0602 11:36:42.866857 44922 global.go:103] Querying for installed drivers using PATH=/Users/michihara/.minikube/bin:/Users/michihara/.nvm/versions/node/v10.23.1/bin:/Users/michihara/.pyenv/shims:/Users/michihara/bin:/Users/michihara/go/bin:/Users/michihara/brew/bin:/Users/michihara/brew/sbin:/Users/michihara/Code/google-cloud-sdk/bin:/Users/michihara/bin:/usr/local/git/current/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin I0602 11:36:42.866871 44922 global.go:111] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0602 11:36:42.867067 44922 global.go:111] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/} I0602 11:36:42.867796 44922 global.go:111] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/} I0602 11:36:42.867815 44922 global.go:111] vmwarefusion default: false priority: 1, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:the 'vmwarefusion' driver is no longer available Reason: Fix:Switch to the newer 'vmware' driver by using '--driver=vmware'. 
This may require first deleting your existing cluster Doc:https://minikube.sigs.k8s.io/docs/drivers/vmware/} I0602 11:36:43.304222 44922 docker.go:119] docker version: linux-20.10.6 I0602 11:36:43.306038 44922 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0602 11:36:44.103975 44922 info.go:261] docker info: {ID:D2Y7:UIYU:FNDY:NABF:N7FN:AHMP:4VP5:Z23V:3DSD:YH3N:TC62:7NSB Containers:148 ContainersRunning:32 ContainersPaused:0 ContainersStopped:116 Images:231 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:152 OomKillDisable:true NGoroutines:240 SystemTime:2021-06-02 15:36:43.48887585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:4127531008 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID:4yrklusz911j37h3pr9h8k5xa NodeAddr:192.168.65.6 LocalNodeState:active ControlAvailable:true Error: RemoteManagers:[map[Addr:192.168.65.6:2377 NodeID:4yrklusz911j37h3pr9h8k5xa]]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.8.0]] Warnings:}} I0602 11:36:44.104349 44922 global.go:111] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0602 11:36:44.176962 44922 global.go:111] hyperkit default: true priority: 8, state: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc:} I0602 11:36:44.177151 44922 global.go:111] parallels default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "prlctl": executable file not found in $PATH Reason: Fix:Install Parallels Desktop for Mac Doc:https://minikube.sigs.k8s.io/docs/drivers/parallels/} I0602 11:36:44.177236 44922 global.go:111] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/} I0602 11:36:44.177278 44922 driver.go:258] not recommending "ssh" due to default: false I0602 11:36:44.177298 44922 driver.go:292] Picked: docker I0602 11:36:44.177309 44922 driver.go:293] Alternatives: [hyperkit ssh] I0602 11:36:44.177312 44922 driver.go:294] Rejects: [virtualbox vmware vmwarefusion parallels podman] I0602 11:36:44.200083 44922 out.go:170] ✨ Automatically selected the docker driver. Other choices: hyperkit, ssh I0602 11:36:44.200143 44922 start.go:276] selected driver: docker I0602 11:36:44.200165 44922 start.go:718] validating driver "docker" against I0602 11:36:44.200196 44922 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0602 11:36:44.201027 44922 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0602 11:36:44.487871 44922 info.go:261] docker info: {ID:D2Y7:UIYU:FNDY:NABF:N7FN:AHMP:4VP5:Z23V:3DSD:YH3N:TC62:7NSB Containers:148 ContainersRunning:32 ContainersPaused:0 ContainersStopped:116 Images:231 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:152 OomKillDisable:true NGoroutines:240 SystemTime:2021-06-02 15:36:44.402268544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:4127531008 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID:4yrklusz911j37h3pr9h8k5xa NodeAddr:192.168.65.6 LocalNodeState:active ControlAvailable:true Error: RemoteManagers:[map[Addr:192.168.65.6:2377 
NodeID:4yrklusz911j37h3pr9h8k5xa]]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:}} I0602 11:36:44.487997 44922 start_flags.go:259] no existing cluster config was found, will generate one from the flags I0602 11:36:44.488525 44922 start_flags.go:314] Using suggested 3888MB memory alloc based on sys=16384MB, container=3936MB I0602 11:36:44.488669 44922 start_flags.go:715] Wait components to verify : map[apiserver:true system_pods:true] I0602 11:36:44.488683 44922 cni.go:93] Creating CNI manager for "" I0602 11:36:44.488692 44922 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0602 11:36:44.488697 44922 start_flags.go:273] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:3888 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0602 11:36:44.530272 44922 out.go:170] 👍 Starting control plane node minikube in cluster minikube I0602 11:36:44.530653 44922 cache.go:111] Beginning downloading kic base image for docker with docker W0602 11:36:44.530882 44922 out.go:424] no arguments passed for "🚜 Pulling base image ...\n" - returning raw string W0602 11:36:44.530900 44922 out.go:424] no arguments passed for "🚜 Pulling base image ...\n" - returning raw string I0602 
11:36:44.550515 44922 out.go:170] 🚜 Pulling base image ... I0602 11:36:44.550986 44922 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker I0602 11:36:44.551585 44922 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory I0602 11:36:44.551635 44922 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e to local cache I0602 11:36:44.570369 44922 image.go:192] Writing gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e to local cache I0602 11:36:44.627374 44922 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 I0602 11:36:44.627419 44922 cache.go:54] Caching tarball of preloaded images I0602 11:36:44.627482 44922 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker I0602 11:36:44.681712 44922 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 I0602 11:36:44.702450 44922 out.go:170] 💾 Downloading Kubernetes v1.20.2 preload ... I0602 11:36:44.702741 44922 preload.go:196] getting checksum for preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 ... I0602 11:36:44.846786 44922 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4?checksum=md5:91e6984243eafcd2b938c7edbc7b7ef6 -> /Users/michihara/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 I0602 11:37:19.645939 44922 cache.go:137] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e as a tarball I0602 11:37:19.667376 44922 image.go:130] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon I0602 11:37:20.318677 44922 preload.go:206] saving checksum for preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 ... I0602 11:37:20.350415 44922 image.go:134] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon, skipping pull I0602 11:37:20.350429 44922 cache.go:155] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in daemon, skipping pull I0602 11:37:20.413424 44922 preload.go:218] verifying checksumm of /Users/michihara/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 ... I0602 11:37:21.608758 44922 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on docker I0602 11:37:21.610862 44922 profile.go:148] Saving config to /Users/michihara/.minikube/profiles/minikube/config.json ... 
I0602 11:37:21.611148 44922 lock.go:36] WriteFile acquiring /Users/michihara/.minikube/profiles/minikube/config.json: {Name:mk98f0c275afb15fb0983e4c9611754729d0d789 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0602 11:37:21.612269 44922 cache.go:194] Successfully downloaded all kic artifacts I0602 11:37:21.613781 44922 start.go:313] acquiring machines lock for minikube: {Name:mk25242bce900b276466ef1956107cc5372556ed Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0602 11:37:21.613853 44922 start.go:317] acquired machines lock for "minikube" in 61.499µs I0602 11:37:21.614139 44922 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:3888 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} I0602 11:37:21.614446 44922 start.go:126] createHost starting for "" (driver="docker") I0602 11:37:21.659301 44922 out.go:197] 🔥 Creating docker container (CPUs=2, Memory=3888MB) ... 
I0602 11:37:21.660986 44922 start.go:160] libmachine.API.Create for "minikube" (driver="docker") I0602 11:37:21.661017 44922 client.go:168] LocalClient.Create starting I0602 11:37:21.661435 44922 main.go:128] libmachine: Creating CA: /Users/michihara/.minikube/certs/ca.pem I0602 11:37:21.794636 44922 main.go:128] libmachine: Creating client certificate: /Users/michihara/.minikube/certs/cert.pem I0602 11:37:21.993797 44922 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0602 11:37:22.187453 44922 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0602 11:37:22.187984 44922 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs... I0602 11:37:22.188009 44922 cli_runner.go:115] Run: docker network inspect minikube W0602 11:37:22.376382 44922 cli_runner.go:162] docker network inspect minikube returned with exit code 1 I0602 11:37:22.376443 44922 network_create.go:252] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1 stdout: [] stderr: Error: No such network: minikube I0602 11:37:22.376466 44922 network_create.go:254] output of [docker network inspect minikube]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: minikube ** /stderr ** I0602 11:37:22.376942 44922 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0602 11:37:22.573341 44922 network.go:263] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000104a0] misses:0} I0602 11:37:22.573385 44922 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0602 11:37:22.573421 44922 network_create.go:100] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ... 
I0602 11:37:22.573580 44922 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube I0602 11:37:29.418693 44922 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube: (6.844976262s) I0602 11:37:29.419057 44922 network_create.go:84] docker network minikube 192.168.49.0/24 created I0602 11:37:29.419741 44922 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container I0602 11:37:29.420298 44922 cli_runner.go:115] Run: docker ps -a --format {{.Names}} I0602 11:37:29.806193 44922 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true I0602 11:37:30.028987 44922 oci.go:102] Successfully created a docker volume minikube I0602 11:37:30.029344 44922 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib I0602 11:37:31.125962 44922 cli_runner.go:168] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib: (1.09653184s) I0602 11:37:31.125984 44922 oci.go:106] Successfully prepared a docker volume minikube I0602 11:37:31.126152 44922 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'" I0602 11:37:31.126673 44922 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker I0602 11:37:31.126846 44922 preload.go:106] Found local preload: /Users/michihara/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 I0602 11:37:31.127125 44922 kic.go:179] Starting extracting preloaded images to volume ... 
I0602 11:37:31.127353 44922 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/michihara/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir I0602 11:37:32.162759 44922 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.036450003s) I0602 11:37:32.164849 44922 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=3888mb --memory-swap=3888mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e I0602 11:37:43.480988 44922 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/michihara/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir: (12.353458758s) I0602 11:37:43.481820 44922 kic.go:188] duration metric: took 12.354841 seconds to extract preloaded images to volume I0602 11:37:46.628000 44922 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=3888mb --memory-swap=3888mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e: (14.462943244s) I0602 11:37:46.628350 44922 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}} I0602 11:37:46.922389 44922 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0602 11:37:47.127951 44922 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables I0602 11:37:47.416896 44922 oci.go:278] the created container "minikube" has a running status. I0602 11:37:47.416945 44922 kic.go:210] Creating ssh key for kic: /Users/michihara/.minikube/machines/minikube/id_rsa... 
I0602 11:37:47.505860 44922 kic_runner.go:188] docker (temp): /Users/michihara/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0602 11:37:47.826590 44922 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0602 11:37:48.046163 44922 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0602 11:37:48.046177 44922 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys] I0602 11:37:48.363986 44922 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0602 11:37:48.589022 44922 machine.go:88] provisioning docker machine ... I0602 11:37:48.590391 44922 ubuntu.go:169] provisioning hostname "minikube" I0602 11:37:48.591065 44922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0602 11:37:48.818379 44922 main.go:128] libmachine: Using SSH client type: native I0602 11:37:48.819220 44922 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x4401e00] 0x4401dc0 [] 0s} 127.0.0.1 53291 } I0602 11:37:48.819238 44922 main.go:128] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I0602 11:37:48.821336 44922 main.go:128] libmachine: Error dialing TCP: ssh: handshake failed: EOF I0602 11:37:52.015478 44922 main.go:128] libmachine: SSH cmd err, output: : minikube I0602 11:37:52.016484 44922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0602 11:37:52.241814 44922 main.go:128] libmachine: Using SSH client type: native I0602 11:37:52.242119 44922 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x4401e00] 0x4401dc0 [] 0s} 127.0.0.1 53291 } I0602 11:37:52.242142 44922 main.go:128] libmachine: About to run SSH command: if ! 
grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I0602 11:37:52.405594 44922 main.go:128] libmachine: SSH cmd err, output: : I0602 11:37:52.406378 44922 ubuntu.go:175] set auth options {CertDir:/Users/michihara/.minikube CaCertPath:/Users/michihara/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/michihara/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/michihara/.minikube/machines/server.pem ServerKeyPath:/Users/michihara/.minikube/machines/server-key.pem ClientKeyPath:/Users/michihara/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/michihara/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/michihara/.minikube} I0602 11:37:52.406415 44922 ubuntu.go:177] setting up certificates I0602 11:37:52.406425 44922 provision.go:83] configureAuth start I0602 11:37:52.406613 44922 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0602 11:37:52.612344 44922 provision.go:137] copyHostCerts I0602 11:37:52.612455 44922 exec_runner.go:152] cp: /Users/michihara/.minikube/certs/ca.pem --> /Users/michihara/.minikube/ca.pem (1086 bytes) I0602 11:37:52.613086 44922 exec_runner.go:152] cp: /Users/michihara/.minikube/certs/cert.pem --> /Users/michihara/.minikube/cert.pem (1127 bytes) I0602 11:37:52.613631 44922 exec_runner.go:152] cp: /Users/michihara/.minikube/certs/key.pem --> /Users/michihara/.minikube/key.pem (1675 bytes) I0602 11:37:52.613912 44922 provision.go:111] generating server cert: /Users/michihara/.minikube/machines/server.pem ca-key=/Users/michihara/.minikube/certs/ca.pem private-key=/Users/michihara/.minikube/certs/ca-key.pem org=michihara.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube] I0602 11:37:52.714405 44922 provision.go:165] copyRemoteCerts I0602 11:37:52.715534 44922 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0602 11:37:52.715680 44922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0602 11:37:52.933582 44922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53291 SSHKeyPath:/Users/michihara/.minikube/machines/minikube/id_rsa Username:docker} I0602 11:37:53.034558 44922 ssh_runner.go:316] scp /Users/michihara/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1086 bytes) I0602 11:37:53.066421 44922 ssh_runner.go:316] scp /Users/michihara/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes) I0602 11:37:53.092506 44922 ssh_runner.go:316] scp /Users/michihara/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0602 11:37:53.132475 44922 provision.go:86] duration metric: configureAuth took 725.679432ms I0602 11:37:53.132489 44922 ubuntu.go:193] setting minikube options for container-runtime I0602 11:37:53.133025 44922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0602 11:37:53.333330 44922 main.go:128] libmachine: Using SSH client type: native I0602 11:37:53.333625 44922 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x4401e00] 0x4401dc0 [] 0s} 127.0.0.1 53291 } I0602 11:37:53.333642 44922 main.go:128] libmachine: About to run SSH command: df 
--output=fstype / | tail -n 1 I0602 11:37:53.491977 44922 main.go:128] libmachine: SSH cmd err, output: : overlay I0602 11:37:53.491992 44922 ubuntu.go:71] root file system type: overlay I0602 11:37:53.492777 44922 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ... I0602 11:37:53.492947 44922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0602 11:37:53.725973 44922 main.go:128] libmachine: Using SSH client type: native I0602 11:37:53.726240 44922 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x4401e00] 0x4401dc0 [] 0s} 127.0.0.1 53291 } I0602 11:37:53.726316 44922 main.go:128] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0602 11:37:53.881547 44922 main.go:128] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. 
Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0602 11:37:53.882562 44922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0602 11:37:54.113470 44922 main.go:128] libmachine: Using SSH client type: native I0602 11:37:54.113761 44922 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x4401e00] 0x4401dc0 [] 0s} 127.0.0.1 53291 } I0602 11:37:54.113806 44922 main.go:128] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0602 11:38:23.783080 44922 main.go:128] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-04-09 22:45:28.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2021-06-02 15:37:53.881725484 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. 
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. +ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. 
Executing: /lib/systemd/systemd-sysv-install enable docker I0602 11:38:23.783151 44922 machine.go:91] provisioned docker machine in 35.19379446s I0602 11:38:23.783160 44922 client.go:171] LocalClient.Create took 1m2.121604805s I0602 11:38:23.783198 44922 start.go:168] duration metric: libmachine.API.Create for "minikube" took 1m2.121676352s I0602 11:38:23.783550 44922 start.go:267] post-start starting for "minikube" (driver="docker") I0602 11:38:23.783561 44922 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0602 11:38:23.784236 44922 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0602 11:38:23.784376 44922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0602 11:38:24.103521 44922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53291 SSHKeyPath:/Users/michihara/.minikube/machines/minikube/id_rsa Username:docker} I0602 11:38:24.211318 44922 ssh_runner.go:149] Run: cat /etc/os-release I0602 11:38:24.217974 44922 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0602 11:38:24.217990 44922 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0602 11:38:24.217997 44922 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0602 11:38:24.218344 44922 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0602 11:38:24.218937 44922 filesync.go:118] Scanning /Users/michihara/.minikube/addons for local assets ... I0602 11:38:24.219435 44922 filesync.go:118] Scanning /Users/michihara/.minikube/files for local assets ... I0602 11:38:24.219519 44922 start.go:270] post-start completed in 435.954841ms I0602 11:38:24.220794 44922 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0602 11:38:24.456815 44922 profile.go:148] Saving config to /Users/michihara/.minikube/profiles/minikube/config.json ... 
I0602 11:38:24.458244 44922 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0602 11:38:24.458456 44922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0602 11:38:24.702715 44922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53291 SSHKeyPath:/Users/michihara/.minikube/machines/minikube/id_rsa Username:docker} I0602 11:38:24.795126 44922 start.go:129] duration metric: createHost completed in 1m3.180121259s I0602 11:38:24.795139 44922 start.go:80] releasing machines lock for "minikube", held for 1m3.180738344s I0602 11:38:24.796070 44922 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0602 11:38:25.027176 44922 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/ I0602 11:38:25.027514 44922 ssh_runner.go:149] Run: systemctl --version I0602 11:38:25.027720 44922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0602 11:38:25.027933 44922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0602 11:38:25.284741 44922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53291 SSHKeyPath:/Users/michihara/.minikube/machines/minikube/id_rsa Username:docker} I0602 11:38:25.287484 44922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53291 SSHKeyPath:/Users/michihara/.minikube/machines/minikube/id_rsa Username:docker} I0602 11:38:25.395386 44922 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd I0602 11:38:25.637555 44922 ssh_runner.go:149] Run: sudo systemctl cat docker.service I0602 11:38:25.652072 44922 cruntime.go:225] skipping containerd shutdown because we are bound to it I0602 11:38:25.652248 44922 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio I0602 11:38:25.664934 44922 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0602 11:38:25.683752 44922 ssh_runner.go:149] Run: sudo systemctl unmask docker.service I0602 11:38:25.768955 44922 ssh_runner.go:149] Run: sudo systemctl enable docker.socket I0602 11:38:25.871201 44922 ssh_runner.go:149] Run: sudo systemctl cat docker.service I0602 11:38:25.886000 44922 ssh_runner.go:149] Run: sudo systemctl daemon-reload I0602 11:38:25.992060 44922 ssh_runner.go:149] Run: sudo systemctl start docker I0602 11:38:26.013886 44922 ssh_runner.go:149] Run: docker version --format {{.Server.Version}} I0602 11:38:26.240075 44922 out.go:197] 🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ... 
I0602 11:38:26.240653 44922 cli_runner.go:115] Run: docker exec -t minikube dig +short host.docker.internal
I0602 11:38:26.686828 44922 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
I0602 11:38:26.687738 44922 ssh_runner.go:149] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0602 11:38:26.696477 44922 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0602 11:38:26.712913 44922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0602 11:38:26.953795 44922 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0602 11:38:26.953978 44922 preload.go:106] Found local preload: /Users/michihara/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0602 11:38:26.954267 44922 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0602 11:38:27.007277 44922 docker.go:528] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0602 11:38:27.007325 44922 docker.go:465] Images already preloaded, skipping extraction
I0602 11:38:27.007551 44922 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0602 11:38:27.075716 44922 docker.go:528] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0602 11:38:27.075733 44922 cache_images.go:74] Images are preloaded, skipping loading
I0602 11:38:27.076348 44922 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0602 11:38:27.408180 44922 cni.go:93] Creating CNI manager for ""
I0602 11:38:27.408189 44922 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0602 11:38:27.408698 44922 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0602 11:38:27.408719 44922 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0602 11:38:27.408923 44922 kubeadm.go:157] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
I0602 11:38:27.409563 44922 kubeadm.go:901] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config: {KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0602 11:38:27.409704 44922 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0602 11:38:27.423231 44922 binaries.go:44] Found k8s binaries, skipping transfer
I0602 11:38:27.423367 44922 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0602 11:38:27.438890 44922 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0602 11:38:27.463368 44922 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0602 11:38:27.482540 44922 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1840 bytes)
I0602 11:38:27.499092 44922 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0602 11:38:27.505946 44922 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0602 11:38:27.518966 44922 certs.go:52] Setting up /Users/michihara/.minikube/profiles/minikube for IP: 192.168.49.2
I0602 11:38:27.519241 44922 certs.go:175] generating minikubeCA CA: /Users/michihara/.minikube/ca.key
I0602 11:38:27.652603 44922 crypto.go:157] Writing cert to /Users/michihara/.minikube/ca.crt ...
I0602 11:38:27.652634 44922 lock.go:36] WriteFile acquiring /Users/michihara/.minikube/ca.crt: {Name:mk65cb95c2eadc7edf373a6e90779bcd41ff247e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0602 11:38:27.653641 44922 crypto.go:165] Writing key to /Users/michihara/.minikube/ca.key ...
I0602 11:38:27.653649 44922 lock.go:36] WriteFile acquiring /Users/michihara/.minikube/ca.key: {Name:mk3b34080b61be48faa4ba59b72dd8da7c2abd60 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0602 11:38:27.654287 44922 certs.go:175] generating proxyClientCA CA: /Users/michihara/.minikube/proxy-client-ca.key
I0602 11:38:27.698800 44922 crypto.go:157] Writing cert to /Users/michihara/.minikube/proxy-client-ca.crt ...
I0602 11:38:27.698869 44922 lock.go:36] WriteFile acquiring /Users/michihara/.minikube/proxy-client-ca.crt: {Name:mk224df8412005b99c75f8294c563c3eef17e272 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0602 11:38:27.700300 44922 crypto.go:165] Writing key to /Users/michihara/.minikube/proxy-client-ca.key ...
I0602 11:38:27.700316 44922 lock.go:36] WriteFile acquiring /Users/michihara/.minikube/proxy-client-ca.key: {Name:mk9fbeaedb237a32bc92eb9ee234dc6cc14a9f3f Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0602 11:38:27.702640 44922 certs.go:286] generating minikube-user signed cert: /Users/michihara/.minikube/profiles/minikube/client.key
I0602 11:38:27.703091 44922 crypto.go:69] Generating cert /Users/michihara/.minikube/profiles/minikube/client.crt with IP's: []
I0602 11:38:27.809969 44922 crypto.go:157] Writing cert to /Users/michihara/.minikube/profiles/minikube/client.crt ...
I0602 11:38:27.809981 44922 lock.go:36] WriteFile acquiring /Users/michihara/.minikube/profiles/minikube/client.crt: {Name:mke3b2a00534cfde6cd9bc278c1eceae7bed70d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0602 11:38:27.810670 44922 crypto.go:165] Writing key to /Users/michihara/.minikube/profiles/minikube/client.key ...
I0602 11:38:27.810676 44922 lock.go:36] WriteFile acquiring /Users/michihara/.minikube/profiles/minikube/client.key: {Name:mk2110b29493b80c59de26f638b3caf3645019c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0602 11:38:27.811201 44922 certs.go:286] generating minikube signed cert: /Users/michihara/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0602 11:38:27.811206 44922 crypto.go:69] Generating cert /Users/michihara/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0602 11:38:27.925540 44922 crypto.go:157] Writing cert to /Users/michihara/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0602 11:38:27.925552 44922 lock.go:36] WriteFile acquiring /Users/michihara/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk9d03ac79a6bf78edfef03469a43c8f2c1f8da1 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0602 11:38:27.928977 44922 crypto.go:165] Writing key to /Users/michihara/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0602 11:38:27.928993 44922 lock.go:36] WriteFile acquiring /Users/michihara/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mkf9e2c87dba502b42f870a8ed196bfa899cbf5e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0602 11:38:27.948652 44922 certs.go:297] copying /Users/michihara/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /Users/michihara/.minikube/profiles/minikube/apiserver.crt
I0602 11:38:27.949242 44922 certs.go:301] copying /Users/michihara/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /Users/michihara/.minikube/profiles/minikube/apiserver.key
I0602 11:38:27.949483 44922 certs.go:286] generating aggregator signed cert: /Users/michihara/.minikube/profiles/minikube/proxy-client.key
I0602 11:38:27.949488 44922 crypto.go:69] Generating cert /Users/michihara/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0602 11:38:28.045031 44922 crypto.go:157] Writing cert to /Users/michihara/.minikube/profiles/minikube/proxy-client.crt ...
I0602 11:38:28.045042 44922 lock.go:36] WriteFile acquiring /Users/michihara/.minikube/profiles/minikube/proxy-client.crt: {Name:mk77fc42e7eb3d0e238d9ae67a5bc39c3a6e9c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0602 11:38:28.046384 44922 crypto.go:165] Writing key to /Users/michihara/.minikube/profiles/minikube/proxy-client.key ...
I0602 11:38:28.046407 44922 lock.go:36] WriteFile acquiring /Users/michihara/.minikube/profiles/minikube/proxy-client.key: {Name:mk84862fe6be98518641c4540f704337d0535937 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0602 11:38:28.047816 44922 certs.go:361] found cert: /Users/michihara/.minikube/certs/Users/michihara/.minikube/certs/ca-key.pem (1679 bytes)
I0602 11:38:28.047868 44922 certs.go:361] found cert: /Users/michihara/.minikube/certs/Users/michihara/.minikube/certs/ca.pem (1086 bytes)
I0602 11:38:28.047907 44922 certs.go:361] found cert: /Users/michihara/.minikube/certs/Users/michihara/.minikube/certs/cert.pem (1127 bytes)
I0602 11:38:28.047949 44922 certs.go:361] found cert: /Users/michihara/.minikube/certs/Users/michihara/.minikube/certs/key.pem (1675 bytes)
I0602 11:38:28.056296 44922 ssh_runner.go:316] scp /Users/michihara/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0602 11:38:28.083145 44922 ssh_runner.go:316] scp /Users/michihara/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0602 11:38:28.112544 44922 ssh_runner.go:316] scp /Users/michihara/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0602 11:38:28.139645 44922 ssh_runner.go:316] scp /Users/michihara/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0602 11:38:28.167996 44922 ssh_runner.go:316] scp /Users/michihara/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0602 11:38:28.190387 44922 ssh_runner.go:316] scp /Users/michihara/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0602 11:38:28.214889 44922 ssh_runner.go:316] scp /Users/michihara/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0602 11:38:28.240221 44922 ssh_runner.go:316] scp /Users/michihara/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
I0602 11:38:28.272668 44922 ssh_runner.go:316] scp /Users/michihara/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0602 11:38:28.300381 44922 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0602 11:38:28.321504 44922 ssh_runner.go:149] Run: openssl version
I0602 11:38:28.331647 44922 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0602 11:38:28.343410 44922 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0602 11:38:28.349087 44922 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 Jun 2 15:38 /usr/share/ca-certificates/minikubeCA.pem
I0602 11:38:28.349231 44922 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0602 11:38:28.358115 44922 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0602 11:38:28.371525 44922 kubeadm.go:381] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:3888 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0602 11:38:28.371792 44922 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0602 11:38:28.427628 44922 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0602 11:38:28.441027 44922 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0602 11:38:28.458057 44922 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0602 11:38:28.458228 44922 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0602 11:38:28.480789 44922 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0602 11:38:28.480825 44922 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
W0602 11:38:29.553847 44922 out.go:424] no arguments passed for " ▪ Generating certificates and keys ..." - returning raw string
W0602 11:38:29.554202 44922 out.go:424] no arguments passed for " ▪ Generating certificates and keys ..." - returning raw string
I0602 11:38:29.597159 44922 out.go:197] ▪ Generating certificates and keys ...
W0602 11:38:33.123153 44922 out.go:424] no arguments passed for " ▪ Booting up control plane ..." - returning raw string
W0602 11:38:33.123186 44922 out.go:424] no arguments passed for " ▪ Booting up control plane ..." - returning raw string
I0602 11:38:33.143826 44922 out.go:197] ▪ Booting up control plane ...
W0602 11:38:58.185504 44922 out.go:424] no arguments passed for " ▪ Configuring RBAC rules ..." - returning raw string
W0602 11:38:58.185527 44922 out.go:424] no arguments passed for " ▪ Configuring RBAC rules ..." - returning raw string
I0602 11:38:58.223733 44922 out.go:197] ▪ Configuring RBAC rules ...
I0602 11:38:58.680650 44922 cni.go:93] Creating CNI manager for ""
I0602 11:38:58.680660 44922 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0602 11:38:58.681026 44922 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0602 11:38:58.681240 44922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c61663e942ec43b20e8e70839dcca52e44cd85ae minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_06_02T11_38_58_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0602 11:38:58.681234 44922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0602 11:38:58.736496 44922 ops.go:34] apiserver oom_adj: -16
I0602 11:38:59.164118 44922 kubeadm.go:977] duration metric: took 483.094691ms to wait for elevateKubeSystemPrivileges.
I0602 11:38:59.391122 44922 kubeadm.go:383] StartCluster complete in 31.019330637s
I0602 11:38:59.391146 44922 settings.go:142] acquiring lock: {Name:mkc4a34738e8ac68342e693b571481bc43538014 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0602 11:38:59.391281 44922 settings.go:150] Updating kubeconfig: /Users/michihara/.kube/config
I0602 11:38:59.399963 44922 lock.go:36] WriteFile acquiring /Users/michihara/.kube/config: {Name:mkdf2ab69afc93f4cdaf33c4074ec3984243625a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0602 11:38:59.964531 44922 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0602 11:38:59.964771 44922 start.go:201] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
W0602 11:38:59.964805 44922 out.go:424] no arguments passed for "🔎 Verifying Kubernetes components...\n" - returning raw string
W0602 11:38:59.964854 44922 out.go:424] no arguments passed for "🔎 Verifying Kubernetes components...\n" - returning raw string
I0602 11:38:59.965544 44922 addons.go:328] enableAddons start: toEnable=map[], additional=[]
I0602 11:38:59.985319 44922 out.go:170] 🔎 Verifying Kubernetes components...
I0602 11:38:59.985646 44922 addons.go:55] Setting default-storageclass=true in profile "minikube"
I0602 11:38:59.985668 44922 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0602 11:38:59.985632 44922 addons.go:55] Setting storage-provisioner=true in profile "minikube"
I0602 11:38:59.985700 44922 addons.go:131] Setting addon storage-provisioner=true in "minikube"
W0602 11:38:59.985706 44922 addons.go:140] addon storage-provisioner should already be in state true
I0602 11:38:59.985716 44922 host.go:66] Checking if "minikube" exists ...
I0602 11:38:59.985921 44922 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0602 11:39:00.002463 44922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0602 11:39:00.015467 44922 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0602 11:39:00.018674 44922 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0602 11:39:00.403656 44922 out.go:170] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0602 11:39:00.403866 44922 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0602 11:39:00.403873 44922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0602 11:39:00.404071 44922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0602 11:39:00.430535 44922 api_server.go:50] waiting for apiserver process to appear ...
I0602 11:39:00.430705 44922 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0602 11:39:00.431344 44922 addons.go:131] Setting addon default-storageclass=true in "minikube"
W0602 11:39:00.431354 44922 addons.go:140] addon default-storageclass should already be in state true
I0602 11:39:00.431366 44922 host.go:66] Checking if "minikube" exists ...
I0602 11:39:00.432356 44922 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0602 11:39:00.465728 44922 api_server.go:70] duration metric: took 500.913879ms to wait for apiserver process to appear ...
I0602 11:39:00.465757 44922 api_server.go:86] waiting for apiserver healthz status ...
I0602 11:39:00.466193 44922 api_server.go:223] Checking apiserver healthz at https://127.0.0.1:53295/healthz ...
I0602 11:39:00.487013 44922 api_server.go:249] https://127.0.0.1:53295/healthz returned 200: ok
I0602 11:39:00.491452 44922 api_server.go:139] control plane version: v1.20.2
I0602 11:39:00.491471 44922 api_server.go:129] duration metric: took 25.708407ms to wait for apiserver health ...
I0602 11:39:00.491484 44922 system_pods.go:43] waiting for kube-system pods to appear ...
I0602 11:39:00.506460 44922 system_pods.go:59] 0 kube-system pods found
I0602 11:39:00.506509 44922 retry.go:31] will retry after 263.082536ms: only 0 pod(s) have shown up
I0602 11:39:00.681908 44922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53291 SSHKeyPath:/Users/michihara/.minikube/machines/minikube/id_rsa Username:docker}
I0602 11:39:00.714240 44922 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml
I0602 11:39:00.714252 44922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0602 11:39:00.714446 44922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0602 11:39:00.774408 44922 system_pods.go:59] 0 kube-system pods found
I0602 11:39:00.774443 44922 retry.go:31] will retry after 381.329545ms: only 0 pod(s) have shown up
I0602 11:39:00.833884 44922 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0602 11:39:00.937233 44922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53291 SSHKeyPath:/Users/michihara/.minikube/machines/minikube/id_rsa Username:docker}
I0602 11:39:01.083414 44922 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0602 11:39:01.161774 44922 system_pods.go:59] 0 kube-system pods found
I0602 11:39:01.161797 44922 retry.go:31] will retry after 422.765636ms: only 0 pod(s) have shown up
I0602 11:39:01.471876 44922 out.go:170] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0602 11:39:01.471903 44922 addons.go:330] enableAddons completed in 1.506836234s
I0602 11:39:01.594296 44922 system_pods.go:59] 1 kube-system pods found
I0602 11:39:01.594325 44922 system_pods.go:61] "storage-provisioner" [7ceb34a5-4251-4270-bfc7-4c568cec387f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0602 11:39:01.594331 44922 retry.go:31] will retry after 473.074753ms: only 1 pod(s) have shown up
I0602 11:39:02.077218 44922 system_pods.go:59] 1 kube-system pods found
I0602 11:39:02.077238 44922 system_pods.go:61] "storage-provisioner" [7ceb34a5-4251-4270-bfc7-4c568cec387f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0602 11:39:02.077249 44922 retry.go:31] will retry after 587.352751ms: only 1 pod(s) have shown up
I0602 11:39:02.672283 44922 system_pods.go:59] 1 kube-system pods found
I0602 11:39:02.672297 44922 system_pods.go:61] "storage-provisioner" [7ceb34a5-4251-4270-bfc7-4c568cec387f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0602 11:39:02.672305 44922 retry.go:31] will retry after 834.206799ms: only 1 pod(s) have shown up
I0602 11:39:03.512338 44922 system_pods.go:59] 1 kube-system pods found
I0602 11:39:03.512353 44922 system_pods.go:61] "storage-provisioner" [7ceb34a5-4251-4270-bfc7-4c568cec387f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0602 11:39:03.512360 44922 retry.go:31] will retry after 746.553905ms: only 1 pod(s) have shown up
I0602 11:39:04.267890 44922 system_pods.go:59] 1 kube-system pods found
I0602 11:39:04.267911 44922 system_pods.go:61] "storage-provisioner" [7ceb34a5-4251-4270-bfc7-4c568cec387f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0602 11:39:04.267919 44922 retry.go:31] will retry after 987.362415ms: only 1 pod(s) have shown up
I0602 11:39:05.262177 44922 system_pods.go:59] 1 kube-system pods found
I0602 11:39:05.262195 44922 system_pods.go:61] "storage-provisioner" [7ceb34a5-4251-4270-bfc7-4c568cec387f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0602 11:39:05.262201 44922 retry.go:31] will retry after 1.189835008s: only 1 pod(s) have shown up
I0602 11:39:06.461216 44922 system_pods.go:59] 1 kube-system pods found
I0602 11:39:06.461230 44922 system_pods.go:61] "storage-provisioner" [7ceb34a5-4251-4270-bfc7-4c568cec387f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0602 11:39:06.461237 44922 retry.go:31] will retry after 1.677229867s: only 1 pod(s) have shown up
I0602 11:39:08.144288 44922 system_pods.go:59] 1 kube-system pods found
I0602 11:39:08.144302 44922 system_pods.go:61] "storage-provisioner" [7ceb34a5-4251-4270-bfc7-4c568cec387f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0602 11:39:08.144309 44922 retry.go:31] will retry after 2.346016261s: only 1 pod(s) have shown up
I0602 11:39:10.499467 44922 system_pods.go:59] 5 kube-system pods found
I0602 11:39:10.499479 44922 system_pods.go:61] "etcd-minikube" [c8a3bfad-354a-44bd-aa0c-92407adfbf20] Pending
I0602 11:39:10.499504 44922 system_pods.go:61] "kube-apiserver-minikube" [57aba77d-8385-4c8f-98bf-595b43220fd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0602 11:39:10.499510 44922 system_pods.go:61] "kube-controller-manager-minikube" [54581d9e-9059-47b2-9e89-53d6ddd72e04] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0602 11:39:10.499517 44922 system_pods.go:61] "kube-scheduler-minikube" [14217e06-e2ac-4010-b03a-ae7a067e5e19] Pending
I0602 11:39:10.499521 44922 system_pods.go:61] "storage-provisioner" [7ceb34a5-4251-4270-bfc7-4c568cec387f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0602 11:39:10.499525 44922 system_pods.go:74] duration metric: took 10.007952081s to wait for pod list to return data ...
I0602 11:39:10.499530 44922 kubeadm.go:538] duration metric: took 10.534637476s to wait for : map[apiserver:true system_pods:true] ...
I0602 11:39:10.499540 44922 node_conditions.go:102] verifying NodePressure condition ...
I0602 11:39:10.504828 44922 node_conditions.go:122] node storage ephemeral capacity is 61318988Ki
I0602 11:39:10.504846 44922 node_conditions.go:123] node cpu capacity is 4
I0602 11:39:10.505175 44922 node_conditions.go:105] duration metric: took 5.515089ms to run NodePressure ...
I0602 11:39:10.505183 44922 start.go:206] waiting for startup goroutines ...
I0602 11:39:10.658555 44922 start.go:460] kubectl: 1.21.1, cluster: 1.20.2 (minor skew: 1)
I0602 11:39:10.679932 44922 out.go:170] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
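(Annotating the dump here: the base cluster is up at this point; the gcp-auth addon is enabled afterwards, in step 5 of the repro. Once enabled, the addon's mutating admission webhook is the component that can rewrite pod env vars, so its registration is worth checking. A sketch, assuming a standard kubectl context; the exact webhook configuration name may differ by addon version:)

```sh
# Confirm the addon is on, then list mutating webhooks registered in the cluster.
minikube addons list | grep gcp-auth
kubectl get mutatingwebhookconfigurations
```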
==> Docker <==
-- Logs begin at Wed 2021-06-02 15:37:48 UTC, end at Wed 2021-06-02 16:01:12 UTC. --
Jun 02 15:38:01 minikube dockerd[470]: time="2021-06-02T15:38:01.943066459Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jun 02 15:38:01 minikube dockerd[470]: time="2021-06-02T15:38:01.943108556Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jun 02 15:38:01 minikube dockerd[470]: time="2021-06-02T15:38:01.943356690Z" level=info msg="Loading containers: start."
Jun 02 15:38:17 minikube dockerd[470]: time="2021-06-02T15:38:17.441757385Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jun 02 15:38:23 minikube dockerd[470]: time="2021-06-02T15:38:23.706324689Z" level=info msg="Loading containers: done."
Jun 02 15:38:23 minikube dockerd[470]: time="2021-06-02T15:38:23.735910671Z" level=info msg="Docker daemon" commit=8728dd2 graphdriver(s)=overlay2 version=20.10.6
Jun 02 15:38:23 minikube dockerd[470]: time="2021-06-02T15:38:23.736152626Z" level=info msg="Daemon has completed initialization"
Jun 02 15:38:23 minikube systemd[1]: Started Docker Application Container Engine.
Jun 02 15:38:23 minikube dockerd[470]: time="2021-06-02T15:38:23.799346764Z" level=info msg="API listen on [::]:2376"
Jun 02 15:38:23 minikube dockerd[470]: time="2021-06-02T15:38:23.804795939Z" level=info msg="API listen on /var/run/docker.sock"
Jun 02 15:39:23 minikube dockerd[470]: time="2021-06-02T15:39:23.095334179Z" level=info msg="ignoring event" container=7316d6edf713725eb4cb10af639de06f9c85c6f65056126a0965bdb8c2c269b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:39:23 minikube dockerd[470]: time="2021-06-02T15:39:23.464107982Z" level=info msg="ignoring event" container=8f392e51dea9b21042ca3d175f5a7cc0af9ee7c064ad383b9224aed4a40a774e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:39:45 minikube dockerd[470]: time="2021-06-02T15:39:45.181801918Z" level=warning msg="reference for unknown type: " digest="sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689" remote="docker.io/jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689"
Jun 02 15:39:48 minikube dockerd[470]: time="2021-06-02T15:39:48.669220044Z" level=info msg="ignoring event" container=ae4d35f6ba930222e999ba1245c5dd817d567f8d1fd391bea6a4d96b484be316 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:39:48 minikube dockerd[470]: time="2021-06-02T15:39:48.681094362Z" level=info msg="ignoring event" container=16bcc05b2269156af5bddb3288009917e0f0744d869583efc7d1b7ffbc08a8b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:39:49 minikube dockerd[470]: time="2021-06-02T15:39:49.847673365Z" level=info msg="ignoring event" container=a4f816831ac9ff04e833b3d6339e9e6f29b8b61798380ad9efb56388713a854d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:39:50 minikube dockerd[470]: time="2021-06-02T15:39:50.062750583Z" level=info msg="ignoring event" container=4702174bf8e98101ac568bada376c6dae46988f5e3944411319e5d00f4aabc89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:39:50 minikube dockerd[470]: time="2021-06-02T15:39:50.782649922Z" level=info msg="ignoring event" container=c8126a7e5e242431286c877600ba9a3044153c61ab44d38b75db7716eb4b1b96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:39:52 minikube dockerd[470]: time="2021-06-02T15:39:52.475222635Z" level=warning msg="reference for unknown type: " digest="sha256:4da26a6937e876c80642c98fed9efb2269a5d2cb55029de9e2685c9fd6bc1add" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:4da26a6937e876c80642c98fed9efb2269a5d2cb55029de9e2685c9fd6bc1add"
Jun 02 15:41:21 minikube dockerd[470]: time="2021-06-02T15:41:21.169599311Z" level=info msg="ignoring event" container=4edb84c95fe6f86d87712a9d38b02b4c02889585e665faf847af846337faa2a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:41:23 minikube dockerd[470]: time="2021-06-02T15:41:23.139063406Z" level=info msg="Layer sha256:0bf57af2d33c62fe290a2886a81f112ca8615a7679b357a15a1e0107c8ad6a1a cleaned up"
Jun 02 15:42:19 minikube dockerd[470]: time="2021-06-02T15:42:19.138211965Z" level=info msg="Container 929ea38d004b229d30c218afe4561877459005631c4ae11c5f90663581267a1f failed to exit within 2 seconds of signal 15 - using the force"
Jun 02 15:42:19 minikube dockerd[470]: time="2021-06-02T15:42:19.159501446Z" level=info msg="Container 929ea38d004b229d30c218afe4561877459005631c4ae11c5f90663581267a1f failed to exit within 2 seconds of signal 15 - using the force"
Jun 02 15:42:19 minikube dockerd[470]: time="2021-06-02T15:42:19.246120805Z" level=info msg="ignoring event" container=929ea38d004b229d30c218afe4561877459005631c4ae11c5f90663581267a1f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:42:19 minikube dockerd[470]: time="2021-06-02T15:42:19.360008669Z" level=info msg="ignoring event" container=6e48d7e645bf4ab62f9b9f66cadf7ded23e117f53e085fde8aa5f547df3e8e40 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:43:45 minikube dockerd[470]: time="2021-06-02T15:43:45.777703616Z" level=info msg="Layer sha256:97c60860f33fffed20bbe2cff7d5ddd2820b9aa7c32ab2563591b73c3a7d1d2a cleaned up"
Jun 02 15:43:50 minikube dockerd[470]: time="2021-06-02T15:43:50.289558048Z" level=info msg="Container 4d30e68ef0e83b3378778acd3a9eb31d1b79a6630a8e31c08cfc8779971f04cf failed to exit within 2 seconds of signal 15 - using the force"
Jun 02 15:43:50 minikube dockerd[470]: time="2021-06-02T15:43:50.330185767Z" level=info msg="Container 4d30e68ef0e83b3378778acd3a9eb31d1b79a6630a8e31c08cfc8779971f04cf failed to exit within 2 seconds of signal 15 - using the force"
Jun 02 15:43:50 minikube dockerd[470]: time="2021-06-02T15:43:50.359677177Z" level=info msg="ignoring event" container=4d30e68ef0e83b3378778acd3a9eb31d1b79a6630a8e31c08cfc8779971f04cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:43:50 minikube dockerd[470]: time="2021-06-02T15:43:50.386488364Z" level=warning msg="Container 4d30e68ef0e83b3378778acd3a9eb31d1b79a6630a8e31c08cfc8779971f04cf is not running"
Jun 02 15:43:50 minikube dockerd[470]: time="2021-06-02T15:43:50.430199096Z" level=info msg="ignoring event" container=7b473021c37a178cb590614a37c9dc0eaa3383de77e28c4e7b469f9c828014c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:44:24 minikube dockerd[470]: time="2021-06-02T15:44:24.107784326Z" level=info msg="Container 2ec6d4772b642787251004b78e12ffe68484b60355be7df5da49153b86c301b5 failed to exit within 2 seconds of signal 15 - using the force"
Jun 02 15:44:24 minikube dockerd[470]: time="2021-06-02T15:44:24.126220042Z" level=info msg="Container 2ec6d4772b642787251004b78e12ffe68484b60355be7df5da49153b86c301b5 failed to exit within 2 seconds of signal 15 - using the force"
Jun 02 15:44:24 minikube dockerd[470]: time="2021-06-02T15:44:24.180598460Z" level=info msg="ignoring event" container=2ec6d4772b642787251004b78e12ffe68484b60355be7df5da49153b86c301b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:44:24 minikube dockerd[470]: time="2021-06-02T15:44:24.263779674Z" level=info msg="ignoring event" container=3bf1b9d71215433246e882b6556589122de604ed8e0ab028810108a8b0c43bdc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:44:34 minikube dockerd[470]: time="2021-06-02T15:44:34.713342311Z" level=info msg="ignoring event" container=774fd2bfc5c9944d4cc9249a15263cec27f58844c7776bdcd666dded8a22e03f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:44:34 minikube dockerd[470]: time="2021-06-02T15:44:34.779665573Z" level=info msg="ignoring event" container=8fdbe9c6b4c8f4de751a403e1ed3a06c9d18be515a597b20cc1dd8059371b8fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:45:37 minikube dockerd[470]: time="2021-06-02T15:45:37.576781851Z" level=info msg="Container 13c7b5289181d1ecfe3d1a54b2332394db0e1ecbe0d91a1ebded0c33a60bf511 failed to exit within 2 seconds of signal 15 - using the force"
Jun 02 15:45:37 minikube dockerd[470]: time="2021-06-02T15:45:37.604368604Z" level=info msg="Container 13c7b5289181d1ecfe3d1a54b2332394db0e1ecbe0d91a1ebded0c33a60bf511 failed to exit within 2 seconds of signal 15 - using the force"
Jun 02 15:45:37 minikube dockerd[470]: time="2021-06-02T15:45:37.677425296Z" level=info msg="ignoring event" container=13c7b5289181d1ecfe3d1a54b2332394db0e1ecbe0d91a1ebded0c33a60bf511 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:45:37 minikube dockerd[470]: time="2021-06-02T15:45:37.798110677Z" level=info msg="ignoring event" container=0bdc6c5e8285f92a113f934f95619651f6f0a0f76294b636f7ba93e1c8d1da6f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:46:05 minikube dockerd[470]: time="2021-06-02T15:46:05.486795068Z" level=info msg="Layer sha256:0e47c701cc04a20eb3ea6ee29dd28deebbabe10eb4814c15f97c69436b1ac974 cleaned up"
Jun 02 15:46:09 minikube dockerd[470]: time="2021-06-02T15:46:09.563646576Z" level=info msg="Container c9e8f2a2952479125cc63773392a763fd476bb3e2bfceb73ccee0748993577b2 failed to exit within 2 seconds of signal 15 - using the force"
Jun 02 15:46:09 minikube dockerd[470]: time="2021-06-02T15:46:09.658112988Z" level=info msg="Container c9e8f2a2952479125cc63773392a763fd476bb3e2bfceb73ccee0748993577b2 failed to exit within 2 seconds of signal 15 - using the force"
Jun 02 15:46:09 minikube dockerd[470]: time="2021-06-02T15:46:09.663677461Z" level=info msg="ignoring event" container=c9e8f2a2952479125cc63773392a763fd476bb3e2bfceb73ccee0748993577b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:46:09 minikube dockerd[470]: time="2021-06-02T15:46:09.683582701Z" level=warning msg="Container c9e8f2a2952479125cc63773392a763fd476bb3e2bfceb73ccee0748993577b2 is not running"
Jun 02 15:46:09 minikube dockerd[470]: time="2021-06-02T15:46:09.799901856Z" level=info msg="ignoring event" container=61e84066d090d5669652f2d9967d8dcdb4189098542725e1232b880a18fc8e1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:48:01 minikube dockerd[470]: time="2021-06-02T15:48:01.570404138Z" level=info msg="Container 3c9649170b4d33cd8642a4fa5d904b37acfdd38f1e6e715fa9c092a716384b46 failed to exit within 2 seconds of signal 15 - using the force"
Jun 02 15:48:01 minikube dockerd[470]: time="2021-06-02T15:48:01.619081219Z" level=info msg="Container 3c9649170b4d33cd8642a4fa5d904b37acfdd38f1e6e715fa9c092a716384b46 failed to exit within 2 seconds of signal 15 - using the force"
Jun 02 15:48:01 minikube dockerd[470]: time="2021-06-02T15:48:01.674998321Z" level=info msg="ignoring event" container=3c9649170b4d33cd8642a4fa5d904b37acfdd38f1e6e715fa9c092a716384b46 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:48:01 minikube dockerd[470]: time="2021-06-02T15:48:01.811881474Z" level=info msg="ignoring event" container=ba3cebadaa4d3de7c77fde104fbbfd9f8703c1ba075aa014e26b31a4ea0b7d00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:48:19 minikube dockerd[470]: time="2021-06-02T15:48:19.676520721Z" level=info msg="ignoring event" container=dfe8815623e0ecdc73e94c79fbd45f953139d2296672b1c6ef9f14a7109e0100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:48:19 minikube dockerd[470]: time="2021-06-02T15:48:19.718635978Z" level=info msg="ignoring event" container=2b9327d05493d4efc7808a90d614dccdc42214960be8b9c8e7d87cd35267a697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:48:20 minikube dockerd[470]: time="2021-06-02T15:48:20.276370357Z" level=info msg="ignoring event" container=013fd34110cb629ea71cc478c579c346a1c9aed90c5a93e8449331c1eccc32d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:48:21 minikube dockerd[470]: time="2021-06-02T15:48:21.132523387Z" level=info msg="ignoring event" container=0e042a4e9191245781d64bc0ed5a237aff93a900bf7eff093ecc749f6f6f4ad0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:48:21 minikube dockerd[470]: time="2021-06-02T15:48:21.783151718Z" level=info msg="ignoring event" container=529c827a91d51ab86538b7f78f143786dd0a3e267129f45a64c711143188ee52 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:51:58 minikube dockerd[470]: time="2021-06-02T15:51:58.251915567Z" level=info msg="Container 44f04b9e4b287984a3318cbb27c6eb2f80a8496998e5c8b307560a612e477e64 failed to exit within 2 seconds of signal 15 - using the force"
Jun 02 15:51:58 minikube dockerd[470]: time="2021-06-02T15:51:58.269131449Z" level=info msg="Container 44f04b9e4b287984a3318cbb27c6eb2f80a8496998e5c8b307560a612e477e64 failed to exit within 2 seconds of signal 15 - using the force"
Jun 02 15:51:58 minikube dockerd[470]: time="2021-06-02T15:51:58.318809418Z" level=info msg="ignoring event" container=44f04b9e4b287984a3318cbb27c6eb2f80a8496998e5c8b307560a612e477e64 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 02 15:51:58 minikube dockerd[470]: time="2021-06-02T15:51:58.401051774Z" level=info msg="ignoring event" container=cf3f0627fe96de7ad43f9b3f6a0c00a2a53df64bb0345dd46cdf1d474ffa95cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

==> container status <==
CONTAINER       IMAGE           CREATED          STATE    NAME                          ATTEMPT  POD ID
6761908a11da7   986720edc400a   47 seconds ago   Running  cloud-run-secrets-container   0        a3683fbc071c5
993d5bb8e70d4   a760294f7f27c   12 minutes ago   Running  gcp-auth                      0        9225f473c3b41
0e042a4e91912   4d4f44df9f905   12 minutes ago   Exited   patch                         1        529c827a91d51
2b9327d05493d   4d4f44df9f905   12 minutes ago   Exited   create                        0        013fd34110cb6
f2523d5d96d7e   6e38f40d628db   21 minutes ago   Running  storage-provisioner           2        f88112367532d
8f392e51dea9b   6e38f40d628db   21 minutes ago   Exited   storage-provisioner           1        f88112367532d
48831e88ca135   43154ddb57a83   21 minutes ago   Running  kube-proxy                    0        91f57c8635847
d7046989067db   bfe3a36ebd252   21 minutes ago   Running  coredns                       0        cb04fe65ee040
7aea4ad156c26   ed2c44fbdd78b   22 minutes ago   Running  kube-scheduler                0        c354f1f789eac
e90d9e29f891a   a8c2fdb8bf76e   22 minutes ago   Running  kube-apiserver                0        b7f667ad943aa
68054518f359c   0369cf4303ffd   22 minutes ago   Running  etcd                          0        7a9c90441dc29
4d4ddab196943   a27166429d98e   22 minutes ago   Running  kube-controller-manager       0        45dc4c94664db
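(The exited `create`/`patch` containers above line up with the `jettech/kube-webhook-certgen` image pulled in the Docker log, i.e. the certificate bootstrap for the gcp-auth webhook, and `gcp-auth` itself has been running for 12 minutes, well before `cloud-run-secrets-container` was created 47 seconds ago, so the webhook was active when the app pod was admitted. A sketch for pulling the webhook's own logs, assuming the addon's usual `gcp-auth` namespace and deployment name seen in the node description below:)

```sh
# What did the webhook do when the cloud-run-secrets pod was created?
kubectl -n gcp-auth logs deploy/gcp-auth
```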
on: "kubernetes" E0602 15:39:14.998078 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0602 15:39:14.998217 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0602 15:39:15.000848 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0602 15:39:16.154628 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0602 15:39:16.334773 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0602 15:39:16.554955 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0602 15:39:18.242435 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0602 15:39:18.416097 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0602 15:39:19.018285 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0602 15:39:21.930140 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0602 15:39:21.947851 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0602 15:39:23.185204 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0602 15:39:30.961216 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0602 15:39:31.631308 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": x509: certificate 
signed by unknown authority * * ==> describe nodes <== * Name: minikube Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=c61663e942ec43b20e8e70839dcca52e44cd85ae minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_06_02T11_38_58_0700 minikube.k8s.io/version=v1.20.0 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 02 Jun 2021 15:38:55 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Wed, 02 Jun 2021 16:01:06 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Wed, 02 Jun 2021 15:56:41 +0000 Wed, 02 Jun 2021 15:38:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 02 Jun 2021 15:56:41 +0000 Wed, 02 Jun 2021 15:38:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 02 Jun 2021 15:56:41 +0000 Wed, 02 Jun 2021 15:38:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 02 Jun 2021 15:56:41 +0000 Wed, 02 Jun 2021 15:39:13 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.49.2 Hostname: minikube Capacity: cpu: 4 ephemeral-storage: 61318988Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 4030792Ki pods: 110 Allocatable: cpu: 4 ephemeral-storage: 61318988Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 4030792Ki pods: 110 System Info: Machine ID: 822f5ed6656e44929f6c2cc5d6881453 System UUID: 1c0272f9-9c96-4677-9a18-63a6b9acbff9 Boot ID: 1da40fe1-a6c8-4849-bdc0-c58f5b5062c5 Kernel Version: 5.10.25-linuxkit OS Image: Ubuntu 20.04.2 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.6 Kubelet Version: v1.20.2 Kube-Proxy Version: v1.20.2 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (9 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- default cloud-run-secrets-54f79967db-bkthh 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 49s gcp-auth gcp-auth-5b7b89555f-27wgp 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 12m kube-system coredns-74ff55c5b-sxlx4 100m (2%!)(MISSING) 0 (0%!)(MISSING) 70Mi (1%!)(MISSING) 170Mi (4%!)(MISSING) 21m kube-system etcd-minikube 100m (2%!)(MISSING) 0 (0%!)(MISSING) 100Mi (2%!)(MISSING) 0 (0%!)(MISSING) 22m kube-system kube-apiserver-minikube 250m (6%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 22m kube-system kube-controller-manager-minikube 200m (5%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 22m kube-system kube-proxy-4dc62 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 21m kube-system kube-scheduler-minikube 100m (2%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 22m kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 22m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
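(Side note on the coredns errors above: `x509: certificate signed by unknown authority` against `10.96.0.1` during the first minute suggests the pod was briefly validating the apiserver with a CA bundle that didn't match the freshly generated minikubeCA. A diagnostic sketch for comparing the host-side CA with what the apiserver actually serves; the `8443` port and `~/.minikube/ca.crt` path are taken from the trace earlier in this dump:)

```sh
# Subject of the CA minikube generated on the host ...
openssl x509 -in ~/.minikube/ca.crt -noout -subject
# ... versus the issuer of the certificate the apiserver presents.
openssl s_client -connect "$(minikube ip):8443" </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer
```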
==> describe nodes <==
Name:               minikube
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=c61663e942ec43b20e8e70839dcca52e44cd85ae
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2021_06_02T11_38_58_0700
                    minikube.k8s.io/version=v1.20.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 02 Jun 2021 15:38:55 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Wed, 02 Jun 2021 16:01:06 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 02 Jun 2021 15:56:41 +0000   Wed, 02 Jun 2021 15:38:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 02 Jun 2021 15:56:41 +0000   Wed, 02 Jun 2021 15:38:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 02 Jun 2021 15:56:41 +0000   Wed, 02 Jun 2021 15:38:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 02 Jun 2021 15:56:41 +0000   Wed, 02 Jun 2021 15:39:13 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                4
  ephemeral-storage:  61318988Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4030792Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  61318988Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4030792Ki
  pods:               110
System Info:
  Machine ID:                 822f5ed6656e44929f6c2cc5d6881453
  System UUID:                1c0272f9-9c96-4677-9a18-63a6b9acbff9
  Boot ID:                    1da40fe1-a6c8-4849-bdc0-c58f5b5062c5
  Kernel Version:             5.10.25-linuxkit
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.6
  Kubelet Version:            v1.20.2
  Kube-Proxy Version:         v1.20.2
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (9 in total)
  Namespace    Name                                CPU Requests         CPU Limits        Memory Requests       Memory Limits         AGE
  ---------    ----                                ------------         ----------        ---------------       -------------         ---
  default      cloud-run-secrets-54f79967db-bkthh  0 (0%!)(MISSING)     0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      49s
  gcp-auth     gcp-auth-5b7b89555f-27wgp           0 (0%!)(MISSING)     0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      12m
  kube-system  coredns-74ff55c5b-sxlx4             100m (2%!)(MISSING)  0 (0%!)(MISSING)  70Mi (1%!)(MISSING)   170Mi (4%!)(MISSING)  21m
  kube-system  etcd-minikube                       100m (2%!)(MISSING)  0 (0%!)(MISSING)  100Mi (2%!)(MISSING)  0 (0%!)(MISSING)      22m
  kube-system  kube-apiserver-minikube             250m (6%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      22m
  kube-system  kube-controller-manager-minikube    200m (5%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      22m
  kube-system  kube-proxy-4dc62                    0 (0%!)(MISSING)     0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      21m
  kube-system  kube-scheduler-minikube             100m (2%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      22m
  kube-system  storage-provisioner                 0 (0%!)(MISSING)     0 (0%!)(MISSING)  0 (0%!)(MISSING)      0 (0%!)(MISSING)      22m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests              Limits
  --------           --------              ------
  cpu                750m (18%!)(MISSING)  0 (0%!)(MISSING)
  memory             170Mi (4%!)(MISSING)  170Mi (4%!)(MISSING)
  ephemeral-storage  100Mi (0%!)(MISSING)  0 (0%!)(MISSING)
  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
Events:
  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Normal   NodeHasSufficientMemory  22m (x4 over 22m)  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    22m (x4 over 22m)  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     22m (x4 over 22m)  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal   Starting                 22m                kubelet     Starting kubelet.
  Normal   NodeHasSufficientMemory  22m                kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    22m                kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     22m                kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal   NodeNotReady             22m                kubelet     Node minikube status is now: NodeNotReady
  Normal   NodeAllocatableEnforced  22m                kubelet     Updated Node Allocatable limit across pods
  Normal   NodeReady                22m                kubelet     Node minikube status is now: NodeReady
  Warning  readOnlySysFS            21m                kube-proxy  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
  Normal   Starting                 21m                kube-proxy  Starting kube-proxy.
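(The node description confirms the `cloud-run-secrets` pod, 49 seconds old at capture time, is running alongside the `gcp-auth` webhook pod. To see whether the webhook rewrote the `GOOGLE_APPLICATION_CREDENTIALS` value set in `pods_and_services.yaml`, the live pod spec can be dumped; a sketch, using the `app=cloud-run-secrets` label from the generated manifest:)

```sh
# Exact env of the running container, as mutated (or not) at admission time.
kubectl get pods -l app=cloud-run-secrets \
  -o jsonpath='{.items[0].spec.containers[0].env}'
# Or, less precisely:
kubectl describe pods -l app=cloud-run-secrets | grep -A1 GOOGLE_APPLICATION_CREDENTIALS
```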
[ +0.025700] bpfilter: read fail 0 [ +0.028399] bpfilter: write fail -32 * * ==> etcd [68054518f359] <== * 2021-06-02 15:51:54.847927 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:52:04.847514 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:52:14.847688 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:52:24.828076 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:52:34.827149 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:52:44.827248 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:52:54.807326 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:53:04.806999 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:53:14.808103 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:53:24.788229 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:53:34.786443 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:53:44.785573 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:53:49.507023 I | mvcc: store.index: compact 1378 2021-06-02 15:53:49.508744 I | mvcc: finished scheduled compaction at 1378 (took 959.562µs) 2021-06-02 15:53:54.766979 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:54:04.766449 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:54:14.764778 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:54:24.744037 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:54:34.744330 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:54:44.743719 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:54:54.723889 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:55:04.722141 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:55:14.722081 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:55:24.703471 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:55:34.702711 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:55:44.701519 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:55:54.682217 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:56:04.680866 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:56:14.681358 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:56:24.660634 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:56:34.660479 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:56:44.661273 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:56:54.639362 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:57:04.641632 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:57:14.639892 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:57:24.619121 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:57:34.618300 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:57:44.620059 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:57:54.599225 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:58:04.599628 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:58:14.597916 I | etcdserver/api/etcdhttp: 
/health OK (status code 200) 2021-06-02 15:58:24.578233 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:58:34.577654 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:58:44.577782 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:58:49.301833 I | mvcc: store.index: compact 1630 2021-06-02 15:58:49.302699 I | mvcc: finished scheduled compaction at 1630 (took 513.255µs) 2021-06-02 15:58:54.556201 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:59:04.556327 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:59:14.555947 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:59:24.534980 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:59:34.536079 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:59:44.535418 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 15:59:54.515711 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 16:00:04.513892 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 16:00:14.514557 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 16:00:24.496384 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 16:00:34.492420 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 16:00:44.493221 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 16:00:54.472091 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-06-02 16:01:04.473818 I | etcdserver/api/etcdhttp: /health OK (status code 200) * * ==> kernel <== * 16:01:13 up 20:16, 0 users, load average: 0.47, 0.53, 0.90 Linux minikube 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [e90d9e29f891] <== * I0602 15:51:40.303092 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 15:51:40.303104 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0602 15:51:53.640841 1 trace.go:205] Trace[1696409163]: "Get" url:/api/v1/namespaces/default/pods/cloud-run-secrets-8468b47865-sm65v/log,user-agent:kubectl/v1.21.1 (darwin/amd64) kubernetes/5e58841,client:192.168.49.1 (02-Jun-2021 15:48:59.983) (total time: 173782ms): Trace[1696409163]: ---"Transformed response object" 173778ms (15:51:00.640) Trace[1696409163]: [2m53.782394229s] [2m53.782394229s] END I0602 15:52:19.821037 1 client.go:360] parsed scheme: "passthrough" I0602 15:52:19.821138 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 15:52:19.821145 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0602 15:53:00.932476 1 client.go:360] parsed scheme: "passthrough" I0602 15:53:00.932554 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 15:53:00.932564 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0602 15:53:06.427564 1 trace.go:205] Trace[1312005486]: "GuaranteedUpdate etcd3" type:*coordination.Lease (02-Jun-2021 15:53:05.479) (total time: 948ms): Trace[1312005486]: ---"Transaction committed" 947ms (15:53:00.427) Trace[1312005486]: [948.154751ms] [948.154751ms] END I0602 15:53:06.427697 1 trace.go:205] Trace[1405204374]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:192.168.49.2 (02-Jun-2021 
15:53:05.479) (total time: 948ms): Trace[1405204374]: ---"Object stored in database" 948ms (15:53:00.427) Trace[1405204374]: [948.560776ms] [948.560776ms] END I0602 15:53:34.271065 1 client.go:360] parsed scheme: "passthrough" I0602 15:53:34.271153 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 15:53:34.271168 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0602 15:54:14.068593 1 client.go:360] parsed scheme: "passthrough" I0602 15:54:14.068637 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 15:54:14.068642 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0602 15:54:58.385165 1 client.go:360] parsed scheme: "passthrough" I0602 15:54:58.385225 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 15:54:58.385232 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0602 15:55:34.280921 1 client.go:360] parsed scheme: "passthrough" I0602 15:55:34.281081 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 15:55:34.281091 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0602 15:56:06.573702 1 client.go:360] parsed scheme: "passthrough" I0602 15:56:06.573734 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 15:56:06.573741 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0602 15:56:47.450555 1 client.go:360] parsed scheme: "passthrough" I0602 15:56:47.450613 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 15:56:47.450623 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0602 15:57:23.311696 1 client.go:360] parsed scheme: "passthrough" I0602 15:57:23.311841 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 15:57:23.311862 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0602 15:58:06.002225 1 client.go:360] parsed scheme: "passthrough" I0602 15:58:06.002337 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 15:58:06.002351 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0602 15:58:37.821920 1 client.go:360] parsed scheme: "passthrough" I0602 15:58:37.821993 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 15:58:37.822001 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0602 15:59:15.285715 1 client.go:360] parsed scheme: "passthrough" I0602 15:59:15.285916 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 15:59:15.285932 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0602 15:59:49.890218 1 client.go:360] parsed scheme: "passthrough" I0602 15:59:49.890257 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 15:59:49.890262 1 clientconn.go:948] ClientConn switching balancer to "pick_first" E0602 16:00:24.542334 1 fieldmanager.go:186] [SHOULD NOT HAPPEN] failed to update managedFields for /, Kind=: failed to convert new object (/v1, Kind=Pod) to smd typed: errors: .spec.containers[name="cloud-run-secrets-container"].env: duplicate entries for key [name="GOOGLE_APPLICATION_CREDENTIALS"] .spec.imagePullSecrets: duplicate entries for key [name="gcp-auth"] 
.spec.imagePullSecrets: duplicate entries for key [name="gcp-auth"] I0602 16:00:34.752900 1 client.go:360] parsed scheme: "passthrough" I0602 16:00:34.752936 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 16:00:34.752942 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0602 16:01:05.184942 1 client.go:360] parsed scheme: "passthrough" I0602 16:01:05.185009 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0602 16:01:05.185017 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * * ==> kube-controller-manager [4d4ddab19694] <== * I0602 15:39:14.047168 1 shared_informer.go:247] Caches are synced for ReplicaSet I0602 15:39:14.047957 1 shared_informer.go:247] Caches are synced for endpoint I0602 15:39:14.048106 1 shared_informer.go:247] Caches are synced for GC I0602 15:39:14.048153 1 shared_informer.go:247] Caches are synced for PVC protection I0602 15:39:14.048174 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4dc62" I0602 15:39:14.048402 1 shared_informer.go:247] Caches are synced for taint I0602 15:39:14.048475 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: W0602 15:39:14.048574 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp. I0602 15:39:14.048625 1 node_lifecycle_controller.go:1245] Controller detected that zone is now in state Normal. I0602 15:39:14.048660 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" I0602 15:39:14.048871 1 taint_manager.go:187] Starting NoExecuteTaintManager I0602 15:39:14.054444 1 shared_informer.go:247] Caches are synced for deployment I0602 15:39:14.073529 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 1" I0602 15:39:14.077672 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-sxlx4" I0602 15:39:14.080171 1 shared_informer.go:247] Caches are synced for job I0602 15:39:14.080255 1 shared_informer.go:247] Caches are synced for HPA I0602 15:39:14.097142 1 shared_informer.go:247] Caches are synced for persistent volume I0602 15:39:14.097142 1 request.go:655] Throttling request took 1.047624314s, request: GET:https://192.168.49.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s I0602 15:39:14.099728 1 shared_informer.go:247] Caches are synced for stateful set I0602 15:39:14.151575 1 shared_informer.go:247] Caches are synced for resource quota I0602 15:39:14.256068 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0602 15:39:14.548701 1 shared_informer.go:247] Caches are synced for garbage collector I0602 15:39:14.548735 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I0602 15:39:14.556303 1 shared_informer.go:247] Caches are synced for garbage collector I0602 15:39:14.898642 1 shared_informer.go:240] Waiting for caches to sync for resource quota I0602 15:39:14.898672 1 shared_informer.go:247] Caches are synced for resource quota I0602 15:39:44.196045 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-create-9mrhq" I0602 15:39:44.218313 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set gcp-auth-5b7b89555f to 1" I0602 15:39:44.230244 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-5b7b89555f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-5b7b89555f-4hwf8" I0602 15:39:44.240458 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-patch-n9qcf" I0602 15:39:49.680285 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" I0602 15:39:50.724377 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" I0602 15:41:26.148741 1 event.go:291] "Event occurred" object="default/cloud-run-secrets" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set cloud-run-secrets-7f8dcdf876 to 1" I0602 15:41:26.258918 1 event.go:291] "Event occurred" object="default/cloud-run-secrets-7f8dcdf876" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cloud-run-secrets-7f8dcdf876-5b8vg" I0602 15:42:43.914622 1 event.go:291] "Event occurred" object="default/cloud-run-secrets" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set cloud-run-secrets-788846968b to 1" I0602 15:42:43.926569 1 event.go:291] "Event occurred" object="default/cloud-run-secrets-788846968b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cloud-run-secrets-788846968b-qk5vl" I0602 15:43:46.469555 1 event.go:291] "Event occurred" object="default/cloud-run-secrets" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set cloud-run-secrets-7b5ffbd6d7 to 1" I0602 15:43:46.490773 1 event.go:291] "Event occurred" object="default/cloud-run-secrets-7b5ffbd6d7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cloud-run-secrets-7b5ffbd6d7-wnnqr" I0602 15:43:48.207937 1 event.go:291] "Event occurred" object="default/cloud-run-secrets" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set cloud-run-secrets-788846968b to 0" I0602 15:43:48.250981 1 event.go:291] "Event occurred" object="default/cloud-run-secrets-788846968b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: cloud-run-secrets-788846968b-qk5vl" E0602 15:44:39.389030 1 tokens_controller.go:262] error synchronizing serviceaccount gcp-auth/default: secrets 
"default-token-86mjc" is forbidden: unable to create new content in namespace gcp-auth because it is being terminated I0602 15:44:44.579284 1 namespace_controller.go:185] Namespace has been deleted gcp-auth I0602 15:44:57.506865 1 event.go:291] "Event occurred" object="default/cloud-run-secrets" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set cloud-run-secrets-5d7dd8975c to 1" I0602 15:44:57.527851 1 event.go:291] "Event occurred" object="default/cloud-run-secrets-5d7dd8975c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cloud-run-secrets-5d7dd8975c-7hqzv" I0602 15:45:39.494991 1 event.go:291] "Event occurred" object="default/cloud-run-secrets" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set cloud-run-secrets-74884c488d to 1" I0602 15:45:39.510538 1 event.go:291] "Event occurred" object="default/cloud-run-secrets-74884c488d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cloud-run-secrets-74884c488d-hrnfs" I0602 15:46:06.060369 1 event.go:291] "Event occurred" object="default/cloud-run-secrets" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set cloud-run-secrets-6d7b5dbbb5 to 1" I0602 15:46:06.065882 1 event.go:291] "Event occurred" object="default/cloud-run-secrets-6d7b5dbbb5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cloud-run-secrets-6d7b5dbbb5-t28qn" I0602 15:46:07.472606 1 event.go:291] "Event occurred" object="default/cloud-run-secrets" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set cloud-run-secrets-74884c488d to 0" I0602 15:46:07.502373 1 event.go:291] "Event occurred" object="default/cloud-run-secrets-74884c488d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: cloud-run-secrets-74884c488d-hrnfs" I0602 15:48:18.034099 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-create-tmr5h" I0602 15:48:18.054681 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set gcp-auth-5b7b89555f to 1" I0602 15:48:18.075221 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-5b7b89555f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-5b7b89555f-27wgp" I0602 15:48:18.139636 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-patch-z8lxk" I0602 15:48:20.140367 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" I0602 15:48:21.722828 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" I0602 15:48:57.622464 1 event.go:291] "Event occurred" object="default/cloud-run-secrets" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set 
cloud-run-secrets-8468b47865 to 1" I0602 15:48:57.650473 1 event.go:291] "Event occurred" object="default/cloud-run-secrets-8468b47865" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cloud-run-secrets-8468b47865-sm65v" I0602 16:00:24.405969 1 event.go:291] "Event occurred" object="default/cloud-run-secrets" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set cloud-run-secrets-54f79967db to 1" I0602 16:00:24.470545 1 event.go:291] "Event occurred" object="default/cloud-run-secrets-54f79967db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cloud-run-secrets-54f79967db-bkthh" * * ==> kube-proxy [48831e88ca13] <== * I0602 15:39:18.781410 1 shared_informer.go:247] Caches are synced for service config I0602 15:39:18.781445 1 shared_informer.go:247] Caches are synced for endpoint slice config I0602 15:39:33.112947 1 trace.go:205] Trace[876016228]: "iptables restore" (02-Jun-2021 15:39:30.844) (total time: 2268ms): Trace[876016228]: [2.268695921s] [2.268695921s] END I0602 15:39:52.690484 1 trace.go:205] Trace[151358766]: "iptables restore" (02-Jun-2021 15:39:50.426) (total time: 2263ms): Trace[151358766]: [2.263635007s] [2.263635007s] END I0602 15:40:02.297383 1 trace.go:205] Trace[1721878133]: "iptables restore" (02-Jun-2021 15:40:00.079) (total time: 2217ms): Trace[1721878133]: [2.217926988s] [2.217926988s] END I0602 15:40:11.758950 1 trace.go:205] Trace[929353386]: "iptables restore" (02-Jun-2021 15:40:09.500) (total time: 2258ms): Trace[929353386]: [2.258251806s] [2.258251806s] END I0602 15:41:35.540150 1 trace.go:205] Trace[1318365663]: "iptables restore" (02-Jun-2021 15:41:33.304) (total time: 2235ms): Trace[1318365663]: [2.235239885s] [2.235239885s] END I0602 15:41:45.026580 1 trace.go:205] Trace[934149895]: "iptables restore" (02-Jun-2021 15:41:42.837) (total time: 2189ms): Trace[934149895]: [2.189378475s] [2.189378475s] END I0602 15:42:26.280967 1 trace.go:205] Trace[1708070884]: "iptables restore" (02-Jun-2021 15:42:24.108) (total time: 2172ms): Trace[1708070884]: [2.17243153s] [2.17243153s] END I0602 15:42:35.897253 1 trace.go:205] Trace[1898247028]: "iptables restore" (02-Jun-2021 15:42:33.641) (total time: 2255ms): Trace[1898247028]: [2.255653819s] [2.255653819s] END I0602 15:42:53.141114 1 trace.go:205] Trace[1541356321]: "iptables restore" (02-Jun-2021 15:42:50.867) (total time: 2273ms): Trace[1541356321]: [2.27379888s] [2.27379888s] END I0602 15:43:03.182683 1 trace.go:205] Trace[989179276]: "iptables restore" (02-Jun-2021 15:43:01.059) (total time: 2123ms): Trace[989179276]: [2.123065741s] [2.123065741s] END I0602 15:43:57.905567 1 trace.go:205] Trace[2022892641]: "iptables restore" (02-Jun-2021 15:43:55.583) (total time: 2321ms): Trace[2022892641]: [2.321526214s] [2.321526214s] END I0602 15:44:07.674337 1 trace.go:205] Trace[158799990]: "iptables restore" (02-Jun-2021 15:44:05.397) (total time: 2276ms): Trace[158799990]: [2.276352892s] [2.276352892s] END I0602 15:44:31.440128 1 trace.go:205] Trace[373447046]: "iptables restore" (02-Jun-2021 15:44:29.052) (total time: 2387ms): Trace[373447046]: [2.387727461s] [2.387727461s] END I0602 15:44:40.244920 1 trace.go:205] Trace[1026746661]: "iptables restore" (02-Jun-2021 15:44:38.176) (total time: 2068ms): Trace[1026746661]: [2.068058876s] [2.068058876s] END I0602 15:44:49.990806 1 trace.go:205] Trace[1283805753]: "iptables restore" (02-Jun-2021 15:44:47.725) (total time: 
2265ms): Trace[1283805753]: [2.265437261s] [2.265437261s] END I0602 15:45:06.862389 1 trace.go:205] Trace[1976118359]: "iptables restore" (02-Jun-2021 15:45:04.552) (total time: 2309ms): Trace[1976118359]: [2.309871904s] [2.309871904s] END I0602 15:45:16.887888 1 trace.go:205] Trace[196276995]: "iptables restore" (02-Jun-2021 15:45:14.677) (total time: 2210ms): Trace[196276995]: [2.210494446s] [2.210494446s] END I0602 15:45:44.019267 1 trace.go:205] Trace[1028420413]: "iptables restore" (02-Jun-2021 15:45:41.838) (total time: 2180ms): Trace[1028420413]: [2.180364097s] [2.180364097s] END I0602 15:45:53.971418 1 trace.go:205] Trace[881440519]: "iptables restore" (02-Jun-2021 15:45:51.559) (total time: 2411ms): Trace[881440519]: [2.411492365s] [2.411492365s] END I0602 15:46:16.473294 1 trace.go:205] Trace[573368243]: "iptables restore" (02-Jun-2021 15:46:14.233) (total time: 2239ms): Trace[573368243]: [2.239891256s] [2.239891256s] END I0602 15:46:26.091118 1 trace.go:205] Trace[1921487150]: "iptables restore" (02-Jun-2021 15:46:23.686) (total time: 2404ms): Trace[1921487150]: [2.404410272s] [2.404410272s] END I0602 15:48:08.768446 1 trace.go:205] Trace[780806103]: "iptables restore" (02-Jun-2021 15:48:06.585) (total time: 2182ms): Trace[780806103]: [2.182832926s] [2.182832926s] END I0602 15:48:27.146876 1 trace.go:205] Trace[1014464070]: "iptables restore" (02-Jun-2021 15:48:24.755) (total time: 2391ms): Trace[1014464070]: [2.391774119s] [2.391774119s] END I0602 15:48:36.542823 1 trace.go:205] Trace[1544588419]: "iptables restore" (02-Jun-2021 15:48:34.230) (total time: 2312ms): Trace[1544588419]: [2.312196365s] [2.312196365s] END I0602 15:49:06.728079 1 trace.go:205] Trace[1914834403]: "iptables restore" (02-Jun-2021 15:49:04.446) (total time: 2281ms): Trace[1914834403]: [2.281085584s] [2.281085584s] END I0602 15:49:16.575775 1 trace.go:205] Trace[1175403939]: "iptables restore" (02-Jun-2021 15:49:14.322) (total time: 2253ms): Trace[1175403939]: [2.25329984s] [2.25329984s] END I0602 15:52:04.958568 1 trace.go:205] Trace[1397941914]: "iptables restore" (02-Jun-2021 15:52:02.699) (total time: 2258ms): Trace[1397941914]: [2.258858257s] [2.258858257s] END I0602 16:00:33.647626 1 trace.go:205] Trace[345595002]: "iptables restore" (02-Jun-2021 16:00:31.267) (total time: 2379ms): Trace[345595002]: [2.379585142s] [2.379585142s] END I0602 16:00:42.983441 1 trace.go:205] Trace[1951827482]: "iptables restore" (02-Jun-2021 16:00:40.856) (total time: 2126ms): Trace[1951827482]: [2.126962934s] [2.126962934s] END * * ==> kube-scheduler [7aea4ad156c2] <== * I0602 15:38:50.265083 1 serving.go:331] Generated self-signed cert in-memory W0602 15:38:55.466841 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0602 15:38:55.468929 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0602 15:38:55.469128 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous. 
W0602 15:38:55.469164 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0602 15:38:55.562845 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I0602 15:38:55.564604 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0602 15:38:55.564633 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0602 15:38:55.564850 1 tlsconfig.go:240] Starting DynamicServingCertificateController E0602 15:38:55.571472 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0602 15:38:55.572082 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0602 15:38:55.575516 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0602 15:38:55.580016 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0602 15:38:55.580090 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0602 15:38:55.580215 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0602 15:38:55.580282 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0602 15:38:55.580345 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0602 15:38:55.580439 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0602 15:38:55.580509 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0602 15:38:55.581584 1 reflector.go:138] 
k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0602 15:38:55.581703 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0602 15:38:56.417433 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0602 15:38:56.525511 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0602 15:38:56.543712 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0602 15:38:56.544854 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0602 15:38:56.922637 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" I0602 15:38:58.865377 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Wed 2021-06-02 15:37:48 UTC, end at Wed 2021-06-02 16:01:13 UTC. 
-- Jun 02 15:48:18 minikube kubelet[2418]: I0602 15:48:18.253410 2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-288gj" (UniqueName: "kubernetes.io/secret/365bfa83-b229-44be-b535-f46225322c20-default-token-288gj") pod "gcp-auth-5b7b89555f-27wgp" (UID: "365bfa83-b229-44be-b535-f46225322c20") Jun 02 15:48:18 minikube kubelet[2418]: I0602 15:48:18.253938 2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "gcp-project" (UniqueName: "kubernetes.io/host-path/365bfa83-b229-44be-b535-f46225322c20-gcp-project") pod "gcp-auth-5b7b89555f-27wgp" (UID: "365bfa83-b229-44be-b535-f46225322c20") Jun 02 15:48:18 minikube kubelet[2418]: I0602 15:48:18.254144 2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-gcp-auth-certs-token-n49h4" (UniqueName: "kubernetes.io/secret/a1c00238-f879-4cc7-9887-ed873a69c952-minikube-gcp-auth-certs-token-n49h4") pod "gcp-auth-certs-patch-z8lxk" (UID: "a1c00238-f879-4cc7-9887-ed873a69c952") Jun 02 15:48:18 minikube kubelet[2418]: I0602 15:48:18.254321 2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/365bfa83-b229-44be-b535-f46225322c20-webhook-certs") pod "gcp-auth-5b7b89555f-27wgp" (UID: "365bfa83-b229-44be-b535-f46225322c20") Jun 02 15:48:18 minikube kubelet[2418]: E0602 15:48:18.357739 2418 secret.go:195] Couldn't get secret gcp-auth/gcp-auth-certs: secret "gcp-auth-certs" not found Jun 02 15:48:18 minikube kubelet[2418]: E0602 15:48:18.358084 2418 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/365bfa83-b229-44be-b535-f46225322c20-webhook-certs podName:365bfa83-b229-44be-b535-f46225322c20 nodeName:}" failed. No retries permitted until 2021-06-02 15:48:18.858013185 +0000 UTC m=+560.668523968 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/365bfa83-b229-44be-b535-f46225322c20-webhook-certs\") pod \"gcp-auth-5b7b89555f-27wgp\" (UID: \"365bfa83-b229-44be-b535-f46225322c20\") : secret \"gcp-auth-certs\" not found" Jun 02 15:48:18 minikube kubelet[2418]: E0602 15:48:18.860169 2418 secret.go:195] Couldn't get secret gcp-auth/gcp-auth-certs: secret "gcp-auth-certs" not found Jun 02 15:48:18 minikube kubelet[2418]: E0602 15:48:18.860477 2418 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/365bfa83-b229-44be-b535-f46225322c20-webhook-certs podName:365bfa83-b229-44be-b535-f46225322c20 nodeName:}" failed. No retries permitted until 2021-06-02 15:48:19.860409815 +0000 UTC m=+561.670920604 (durationBeforeRetry 1s). 
Error: "MountVolume.SetUp failed for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/365bfa83-b229-44be-b535-f46225322c20-webhook-certs\") pod \"gcp-auth-5b7b89555f-27wgp\" (UID: \"365bfa83-b229-44be-b535-f46225322c20\") : secret \"gcp-auth-certs\" not found" Jun 02 15:48:19 minikube kubelet[2418]: W0602 15:48:19.095616 2418 pod_container_deletor.go:79] Container "013fd34110cb629ea71cc478c579c346a1c9aed90c5a93e8449331c1eccc32d0" not found in pod's containers Jun 02 15:48:19 minikube kubelet[2418]: W0602 15:48:19.111730 2418 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-certs-create-tmr5h through plugin: invalid network status for Jun 02 15:48:19 minikube kubelet[2418]: W0602 15:48:19.244530 2418 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-certs-patch-z8lxk through plugin: invalid network status for Jun 02 15:48:20 minikube kubelet[2418]: W0602 15:48:20.113808 2418 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-certs-create-tmr5h through plugin: invalid network status for Jun 02 15:48:20 minikube kubelet[2418]: I0602 15:48:20.121403 2418 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2b9327d05493d4efc7808a90d614dccdc42214960be8b9c8e7d87cd35267a697 Jun 02 15:48:20 minikube kubelet[2418]: I0602 15:48:20.268445 2418 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-gcp-auth-certs-token-n49h4" (UniqueName: "kubernetes.io/secret/758d6cad-fa06-4da7-b454-7a21fa4b12a3-minikube-gcp-auth-certs-token-n49h4") pod "758d6cad-fa06-4da7-b454-7a21fa4b12a3" (UID: "758d6cad-fa06-4da7-b454-7a21fa4b12a3") Jun 02 15:48:20 minikube kubelet[2418]: I0602 15:48:20.274267 2418 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/758d6cad-fa06-4da7-b454-7a21fa4b12a3-minikube-gcp-auth-certs-token-n49h4" (OuterVolumeSpecName: "minikube-gcp-auth-certs-token-n49h4") pod "758d6cad-fa06-4da7-b454-7a21fa4b12a3" (UID: "758d6cad-fa06-4da7-b454-7a21fa4b12a3"). InnerVolumeSpecName "minikube-gcp-auth-certs-token-n49h4". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 02 15:48:20 minikube kubelet[2418]: I0602 15:48:20.370203 2418 reconciler.go:319] Volume detached for volume "minikube-gcp-auth-certs-token-n49h4" (UniqueName: "kubernetes.io/secret/758d6cad-fa06-4da7-b454-7a21fa4b12a3-minikube-gcp-auth-certs-token-n49h4") on node "minikube" DevicePath "" Jun 02 15:48:20 minikube kubelet[2418]: W0602 15:48:20.657103 2418 pod_container_deletor.go:79] Container "9225f473c3b41fc9047666060c9cc123870e3ace1e82d29285d1d797faf55655" not found in pod's containers Jun 02 15:48:20 minikube kubelet[2418]: W0602 15:48:20.662387 2418 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-certs-patch-z8lxk through plugin: invalid network status for Jun 02 15:48:20 minikube kubelet[2418]: W0602 15:48:20.668374 2418 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-5b7b89555f-27wgp through plugin: invalid network status for Jun 02 15:48:20 minikube kubelet[2418]: I0602 15:48:20.668869 2418 scope.go:95] [topologymanager] RemoveContainer - Container ID: dfe8815623e0ecdc73e94c79fbd45f953139d2296672b1c6ef9f14a7109e0100 Jun 02 15:48:21 minikube kubelet[2418]: W0602 15:48:21.681447 2418 pod_container_deletor.go:79] Container "013fd34110cb629ea71cc478c579c346a1c9aed90c5a93e8449331c1eccc32d0" not found in pod's containers Jun 02 15:48:21 minikube kubelet[2418]: W0602 15:48:21.684281 2418 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-5b7b89555f-27wgp through plugin: invalid network status for Jun 02 15:48:21 minikube kubelet[2418]: W0602 15:48:21.692241 2418 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-certs-patch-z8lxk through plugin: invalid network status for Jun 02 15:48:21 minikube kubelet[2418]: I0602 15:48:21.713022 2418 scope.go:95] [topologymanager] RemoveContainer - Container ID: dfe8815623e0ecdc73e94c79fbd45f953139d2296672b1c6ef9f14a7109e0100 Jun 02 15:48:21 minikube kubelet[2418]: I0602 15:48:21.713395 2418 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0e042a4e9191245781d64bc0ed5a237aff93a900bf7eff093ecc749f6f6f4ad0 Jun 02 15:48:22 minikube kubelet[2418]: I0602 15:48:22.285177 2418 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-gcp-auth-certs-token-n49h4" (UniqueName: "kubernetes.io/secret/a1c00238-f879-4cc7-9887-ed873a69c952-minikube-gcp-auth-certs-token-n49h4") pod "a1c00238-f879-4cc7-9887-ed873a69c952" (UID: "a1c00238-f879-4cc7-9887-ed873a69c952") Jun 02 15:48:22 minikube kubelet[2418]: I0602 15:48:22.288720 2418 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c00238-f879-4cc7-9887-ed873a69c952-minikube-gcp-auth-certs-token-n49h4" (OuterVolumeSpecName: "minikube-gcp-auth-certs-token-n49h4") pod "a1c00238-f879-4cc7-9887-ed873a69c952" (UID: "a1c00238-f879-4cc7-9887-ed873a69c952"). InnerVolumeSpecName "minikube-gcp-auth-certs-token-n49h4". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 02 15:48:22 minikube kubelet[2418]: I0602 15:48:22.385598 2418 reconciler.go:319] Volume detached for volume "minikube-gcp-auth-certs-token-n49h4" (UniqueName: "kubernetes.io/secret/a1c00238-f879-4cc7-9887-ed873a69c952-minikube-gcp-auth-certs-token-n49h4") on node "minikube" DevicePath "" Jun 02 15:48:22 minikube kubelet[2418]: W0602 15:48:22.723296 2418 pod_container_deletor.go:79] Container "529c827a91d51ab86538b7f78f143786dd0a3e267129f45a64c711143188ee52" not found in pod's containers Jun 02 15:48:57 minikube kubelet[2418]: I0602 15:48:57.657824 2418 topology_manager.go:187] [topologymanager] Topology Admit Handler Jun 02 15:48:57 minikube kubelet[2418]: I0602 15:48:57.796037 2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-nq8k7" (UniqueName: "kubernetes.io/secret/2e63b7b0-0967-48eb-8102-f10965defe2b-default-token-nq8k7") pod "cloud-run-secrets-8468b47865-sm65v" (UID: "2e63b7b0-0967-48eb-8102-f10965defe2b") Jun 02 15:48:57 minikube kubelet[2418]: I0602 15:48:57.796106 2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "gcp-creds" (UniqueName: "kubernetes.io/host-path/2e63b7b0-0967-48eb-8102-f10965defe2b-gcp-creds") pod "cloud-run-secrets-8468b47865-sm65v" (UID: "2e63b7b0-0967-48eb-8102-f10965defe2b") Jun 02 15:48:57 minikube kubelet[2418]: I0602 15:48:57.796129 2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "local-development-credential" (UniqueName: "kubernetes.io/secret/2e63b7b0-0967-48eb-8102-f10965defe2b-local-development-credential") pod "cloud-run-secrets-8468b47865-sm65v" (UID: "2e63b7b0-0967-48eb-8102-f10965defe2b") Jun 02 15:48:58 minikube kubelet[2418]: W0602 15:48:58.248724 2418 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/cloud-run-secrets-8468b47865-sm65v through plugin: invalid network status for Jun 02 15:48:58 minikube kubelet[2418]: W0602 15:48:58.937038 2418 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/cloud-run-secrets-8468b47865-sm65v through plugin: invalid network status for Jun 02 15:49:04 minikube kubelet[2418]: W0602 15:49:04.554883 2418 sysinfo.go:203] Nodes topology is not available, providing CPU topology Jun 02 15:49:04 minikube kubelet[2418]: W0602 15:49:04.555976 2418 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory Jun 02 15:51:53 minikube kubelet[2418]: I0602 15:51:53.643003 2418 log.go:181] http: superfluous response.WriteHeader call from k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader (httplog.go:217) Jun 02 15:51:59 minikube kubelet[2418]: W0602 15:51:59.272923 2418 pod_container_deletor.go:79] Container "cf3f0627fe96de7ad43f9b3f6a0c00a2a53df64bb0345dd46cdf1d474ffa95cd" not found in pod's containers Jun 02 15:52:00 minikube kubelet[2418]: I0602 15:52:00.336536 2418 reconciler.go:196] operationExecutor.UnmountVolume started for volume "local-development-credential" (UniqueName: "kubernetes.io/secret/2e63b7b0-0967-48eb-8102-f10965defe2b-local-development-credential") pod "2e63b7b0-0967-48eb-8102-f10965defe2b" (UID: "2e63b7b0-0967-48eb-8102-f10965defe2b") Jun 02 15:52:00 minikube kubelet[2418]: I0602 15:52:00.336612 2418 reconciler.go:196] operationExecutor.UnmountVolume started for volume "gcp-creds" (UniqueName: 
"kubernetes.io/host-path/2e63b7b0-0967-48eb-8102-f10965defe2b-gcp-creds") pod "2e63b7b0-0967-48eb-8102-f10965defe2b" (UID: "2e63b7b0-0967-48eb-8102-f10965defe2b") Jun 02 15:52:00 minikube kubelet[2418]: I0602 15:52:00.336655 2418 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-nq8k7" (UniqueName: "kubernetes.io/secret/2e63b7b0-0967-48eb-8102-f10965defe2b-default-token-nq8k7") pod "2e63b7b0-0967-48eb-8102-f10965defe2b" (UID: "2e63b7b0-0967-48eb-8102-f10965defe2b") Jun 02 15:52:00 minikube kubelet[2418]: I0602 15:52:00.336996 2418 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e63b7b0-0967-48eb-8102-f10965defe2b-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "2e63b7b0-0967-48eb-8102-f10965defe2b" (UID: "2e63b7b0-0967-48eb-8102-f10965defe2b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 02 15:52:00 minikube kubelet[2418]: I0602 15:52:00.339492 2418 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e63b7b0-0967-48eb-8102-f10965defe2b-default-token-nq8k7" (OuterVolumeSpecName: "default-token-nq8k7") pod "2e63b7b0-0967-48eb-8102-f10965defe2b" (UID: "2e63b7b0-0967-48eb-8102-f10965defe2b"). InnerVolumeSpecName "default-token-nq8k7". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 02 15:52:00 minikube kubelet[2418]: I0602 15:52:00.339519 2418 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e63b7b0-0967-48eb-8102-f10965defe2b-local-development-credential" (OuterVolumeSpecName: "local-development-credential") pod "2e63b7b0-0967-48eb-8102-f10965defe2b" (UID: "2e63b7b0-0967-48eb-8102-f10965defe2b"). InnerVolumeSpecName "local-development-credential". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 02 15:52:00 minikube kubelet[2418]: I0602 15:52:00.436991 2418 reconciler.go:319] Volume detached for volume "local-development-credential" (UniqueName: "kubernetes.io/secret/2e63b7b0-0967-48eb-8102-f10965defe2b-local-development-credential") on node "minikube" DevicePath "" Jun 02 15:52:00 minikube kubelet[2418]: I0602 15:52:00.437036 2418 reconciler.go:319] Volume detached for volume "gcp-creds" (UniqueName: "kubernetes.io/host-path/2e63b7b0-0967-48eb-8102-f10965defe2b-gcp-creds") on node "minikube" DevicePath "" Jun 02 15:52:00 minikube kubelet[2418]: I0602 15:52:00.437046 2418 reconciler.go:319] Volume detached for volume "default-token-nq8k7" (UniqueName: "kubernetes.io/secret/2e63b7b0-0967-48eb-8102-f10965defe2b-default-token-nq8k7") on node "minikube" DevicePath "" Jun 02 15:52:01 minikube kubelet[2418]: W0602 15:52:01.663471 2418 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/2e63b7b0-0967-48eb-8102-f10965defe2b/volumes" does not exist Jun 02 15:52:04 minikube kubelet[2418]: I0602 15:52:04.538679 2418 scope.go:95] [topologymanager] RemoveContainer - Container ID: 44f04b9e4b287984a3318cbb27c6eb2f80a8496998e5c8b307560a612e477e64 Jun 02 15:54:04 minikube kubelet[2418]: W0602 15:54:04.345934 2418 sysinfo.go:203] Nodes topology is not available, providing CPU topology Jun 02 15:54:04 minikube kubelet[2418]: W0602 15:54:04.347384 2418 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory Jun 02 15:59:04 minikube kubelet[2418]: W0602 15:59:04.141387 2418 sysinfo.go:203] Nodes topology is not available, providing CPU topology Jun 02 15:59:04 minikube kubelet[2418]: W0602 15:59:04.142122 2418 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory Jun 02 16:00:24 minikube kubelet[2418]: I0602 16:00:24.519178 2418 topology_manager.go:187] [topologymanager] Topology Admit Handler Jun 02 16:00:24 minikube kubelet[2418]: I0602 16:00:24.718705 2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "gcp-creds" (UniqueName: "kubernetes.io/host-path/7a58b526-018e-4e15-b1f0-c0b7cc438c43-gcp-creds") pod "cloud-run-secrets-54f79967db-bkthh" (UID: "7a58b526-018e-4e15-b1f0-c0b7cc438c43") Jun 02 16:00:24 minikube kubelet[2418]: I0602 16:00:24.718801 2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-nq8k7" (UniqueName: "kubernetes.io/secret/7a58b526-018e-4e15-b1f0-c0b7cc438c43-default-token-nq8k7") pod "cloud-run-secrets-54f79967db-bkthh" (UID: "7a58b526-018e-4e15-b1f0-c0b7cc438c43") Jun 02 16:00:24 minikube kubelet[2418]: I0602 16:00:24.718830 2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "local-development-credential" (UniqueName: "kubernetes.io/secret/7a58b526-018e-4e15-b1f0-c0b7cc438c43-local-development-credential") pod "cloud-run-secrets-54f79967db-bkthh" (UID: "7a58b526-018e-4e15-b1f0-c0b7cc438c43") Jun 02 16:00:25 minikube kubelet[2418]: W0602 16:00:25.125598 2418 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/cloud-run-secrets-54f79967db-bkthh through plugin: invalid network status for Jun 02 16:00:25 minikube kubelet[2418]: W0602 16:00:25.772624 2418 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for 
default/cloud-run-secrets-54f79967db-bkthh through plugin: invalid network status for * * ==> storage-provisioner [8f392e51dea9] <== * I0602 15:39:23.415249 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... F0602 15:39:23.427490 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": x509: certificate signed by unknown authority * * ==> storage-provisioner [f2523d5d96d7] <== * I0602 15:39:39.279536 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... I0602 15:39:39.291638 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service! I0602 15:39:39.291763 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0602 15:39:39.306374 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0602 15:39:39.306498 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"66e00e68-7cce-4363-b0e9-95f459e622c7", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_399f40bc-d218-480c-8a7b-571d23ceaac1 became leader I0602 15:39:39.306575 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_399f40bc-d218-480c-8a7b-571d23ceaac1! I0602 15:39:39.407032 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_399f40bc-d218-480c-8a7b-571d23ceaac1! ```
sharifelgamal commented 3 years ago

Yeah, we should absolutely fix this. It will need a fix in https://github.com/googlecontainertools/gcp-auth-webhook.
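
For reference, the duplicate-key error in the kube-apiserver log above suggests the webhook appends its own `GOOGLE_APPLICATION_CREDENTIALS` entry without checking whether one is already set. A minimal sketch of what such a guard could look like, assuming the webhook mutates a `corev1.Pod`; the function names and the `/google-app-creds.json` path are illustrative, not the webhook's actual code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

const credsEnv = "GOOGLE_APPLICATION_CREDENTIALS"

// hasCredsEnv reports whether the container already sets
// GOOGLE_APPLICATION_CREDENTIALS, as a manifest generated by
// `gcloud alpha code export --service-account ...` does.
func hasCredsEnv(c corev1.Container) bool {
	for _, e := range c.Env {
		if e.Name == credsEnv {
			return true
		}
	}
	return false
}

// injectCreds adds the addon's credential env var only to containers that
// don't set one themselves, avoiding the "duplicate entries for key" error
// seen in the kube-apiserver logs.
func injectCreds(pod *corev1.Pod) {
	for i := range pod.Spec.Containers {
		if hasCredsEnv(pod.Spec.Containers[i]) {
			continue // respect the manually specified credential
		}
		pod.Spec.Containers[i].Env = append(pod.Spec.Containers[i].Env, corev1.EnvVar{
			Name:  credsEnv,
			Value: "/google-app-creds.json", // assumed path of the addon's mounted credential
		})
	}
}

func main() {
	// A pod whose manifest already points at a mounted service-account key,
	// as in the repro above.
	pod := &corev1.Pod{Spec: corev1.PodSpec{Containers: []corev1.Container{{
		Name: "cloud-run-secrets-container",
		Env: []corev1.EnvVar{{
			Name:  credsEnv,
			Value: "/etc/local_development_credential/local_development_service_account.json",
		}},
	}}}}
	injectCreds(pod)
	fmt.Println(pod.Spec.Containers[0].Env) // unchanged: the user-specified value wins
}
```

With a check along these lines, manifests that specify their own credential would keep it, while unannotated pods would still get the addon's injected credentials.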