matthewmichihara closed this issue 3 years ago
@sharifelgamal were you able to reproduce the original issue?
Yeah the original issue was reliably reproducible. The newest version of the webhook image should never apply anything to any pod in the kube-system namespace, so I'm wondering if you're getting that image or not.
Could you make sure you have the newest gcp-auth-webhook image deployed?
kubectl get pods -n gcp-auth
should give you three pods. Could you run kubectl describe pod <pod-name> -n gcp-auth on the first of the three pods listed?
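If you only need to confirm the image tag, a jsonpath query is quicker than reading the full describe output. This is a sketch; it assumes the webhook pods live in the gcp-auth namespace as shown below:

```shell
# Print each pod's name and the image of its first container
kubectl get pods -n gcp-auth \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
```

The output should show gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.3 (or newer) for the gcp-auth pod.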
yes, here's that output:
$ kubectl get pods -n gcp-auth
NAME READY STATUS RESTARTS AGE
gcp-auth-74f9689fd7-lfln7 1/1 Running 0 31m
gcp-auth-certs-create-tbxpd 0/1 Completed 0 34m
gcp-auth-certs-patch-crn8z 0/1 Completed 0 34m
$ kubectl describe pod gcp-auth-74f9689fd7-lfln7 -n gcp-auth
Name: gcp-auth-74f9689fd7-lfln7
Namespace: gcp-auth
Priority: 0
Node: minikube/192.168.49.2
Start Time: Mon, 05 Oct 2020 09:52:40 -0700
Labels: app=gcp-auth
kubernetes.io/minikube-addons=gcp-auth
pod-template-hash=74f9689fd7
Annotations: <none>
Status: Running
IP: 172.17.0.4
IPs:
IP: 172.17.0.4
Controlled By: ReplicaSet/gcp-auth-74f9689fd7
Containers:
gcp-auth:
Container ID: docker://c9af8a135eba8891872322258f5d7aa6f2290f4d3a09b4984d5ef1b261eec0d0
Image: gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.3
Image ID: docker-pullable://gcr.io/k8s-minikube/gcp-auth-webhook@sha256:af4ba05354a42a4e93ad27209c64eba0e004e3265345fc5267d78f89a7bffda8
Port: 8443/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 05 Oct 2020 09:52:43 -0700
Ready: True
Restart Count: 0
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /google-app-creds.json
PROJECT_ID: chelseamarket
GCP_PROJECT: chelseamarket
GCLOUD_PROJECT: chelseamarket
GOOGLE_CLOUD_PROJECT: chelseamarket
CLOUDSDK_CORE_PROJECT: chelseamarket
Mounts:
/etc/webhook/certs from webhook-certs (ro)
/google-app-creds.json from gcp-creds (ro)
/var/lib/minikube/google_cloud_project from gcp-project (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-t66wm (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
webhook-certs:
Type: Secret (a volume populated by a Secret)
SecretName: gcp-auth-certs
Optional: false
gcp-project:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_cloud_project
HostPathType: File
default-token-t66wm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-t66wm
Optional: false
gcp-creds:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_application_credentials.json
HostPathType: File
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 31m default-scheduler Successfully assigned gcp-auth/gcp-auth-74f9689fd7-lfln7 to minikube
Normal Pulling 31m kubelet Pulling image "gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.3"
Normal Pulled 31m kubelet Successfully pulled image "gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.3" in 2.323861663s
Normal Created 31m kubelet Created container gcp-auth
Normal Started 31m kubelet Started container gcp-auth
$ kubectl describe pod gcp-auth-certs-create-tbxpd -n gcp-auth
Name: gcp-auth-certs-create-tbxpd
Namespace: gcp-auth
Priority: 0
Node: minikube/192.168.49.2
Start Time: Mon, 05 Oct 2020 09:49:23 -0700
Labels: controller-uid=d09d9c87-b723-443c-b22b-d613dd144d03
job-name=gcp-auth-certs-create
Annotations: <none>
Status: Succeeded
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Controlled By: Job/gcp-auth-certs-create
Containers:
create:
Container ID: docker://0d8db80378ac7a2634cf289be715a03201d92da9d49a7255afae328621c5cb19
Image: jettech/kube-webhook-certgen:v1.3.0
Image ID: docker-pullable://jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689
Port: <none>
Host Port: <none>
Args:
create
--host=gcp-auth,gcp-auth.gcp-auth,gcp-auth.gcp-auth.svc
--namespace=gcp-auth
--secret-name=gcp-auth-certs
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 05 Oct 2020 09:49:28 -0700
Finished: Mon, 05 Oct 2020 09:49:28 -0700
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from minikube-gcp-auth-certs-token-8rz9b (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
minikube-gcp-auth-certs-token-8rz9b:
Type: Secret (a volume populated by a Secret)
SecretName: minikube-gcp-auth-certs-token-8rz9b
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 35m default-scheduler Successfully assigned gcp-auth/gcp-auth-certs-create-tbxpd to minikube
Normal Pulling 35m kubelet Pulling image "jettech/kube-webhook-certgen:v1.3.0"
Normal Pulled 35m kubelet Successfully pulled image "jettech/kube-webhook-certgen:v1.3.0" in 5.05384679s
Normal Created 35m kubelet Created container create
Normal Started 35m kubelet Started container create
Normal SandboxChanged 35m kubelet Pod sandbox changed, it will be killed and re-created.
$ kubectl describe pod gcp-auth-certs-patch-crn8z -n gcp-auth
Name: gcp-auth-certs-patch-crn8z
Namespace: gcp-auth
Priority: 0
Node: minikube/192.168.49.2
Start Time: Mon, 05 Oct 2020 09:49:23 -0700
Labels: controller-uid=badb6525-ff4a-4fcc-ba42-407b5085d794
job-name=gcp-auth-certs-patch
Annotations: <none>
Status: Succeeded
IP: 172.17.0.4
IPs:
IP: 172.17.0.4
Controlled By: Job/gcp-auth-certs-patch
Containers:
patch:
Container ID: docker://feb4ab880c44298bc5733408dc34b4da960150ef397f8ad76a21b76719e12743
Image: jettech/kube-webhook-certgen:v1.3.0
Image ID: docker-pullable://jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689
Port: <none>
Host Port: <none>
Args:
patch
--secret-name=gcp-auth-certs
--namespace=gcp-auth
--patch-validating=false
--webhook-name=gcp-auth-webhook-cfg
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 05 Oct 2020 09:49:30 -0700
Finished: Mon, 05 Oct 2020 09:49:30 -0700
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from minikube-gcp-auth-certs-token-8rz9b (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
minikube-gcp-auth-certs-token-8rz9b:
Type: Secret (a volume populated by a Secret)
SecretName: minikube-gcp-auth-certs-token-8rz9b
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 35m default-scheduler Successfully assigned gcp-auth/gcp-auth-certs-patch-crn8z to minikube
Normal Pulling 35m kubelet Pulling image "jettech/kube-webhook-certgen:v1.3.0"
Normal Pulled 35m kubelet Successfully pulled image "jettech/kube-webhook-certgen:v1.3.0" in 6.839498595s
Normal Created 35m kubelet Created container patch
Normal Started 35m kubelet Started container patch
Normal SandboxChanged 35m kubelet Pod sandbox changed, it will be killed and re-created.
I tried again and... it worked?
$ ./minikube version
minikube version: v1.13.1
commit: bc3db0d76816d4a8068b9a7796def3c7572cb595
$ ./minikube delete --all --purge
🔥 Successfully deleted all profiles
💀 Successfully purged minikube directory located at - [/Users/michihara/.minikube]
$ docker system prune --all
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all images without at least one container associated to them
- all build cache
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0B
// restarted docker desktop here
$ ./minikube start
😄 minikube v1.13.1 on Darwin 10.15.7
✨ Automatically selected the docker driver
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.19.2 preload ...
> preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 486.36 MiB
🔥 Creating docker container (CPUs=2, Memory=3892MB) ...
🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" by default
$ ./minikube addons enable gcp-auth
🔎 Verifying gcp-auth addon...
📌 Your GCP credentials will now be mounted into every pod created in the minikube cluster.
📌 If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
🌟 The 'gcp-auth' addon is enabled
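For reference, the opt-out label mentioned above would look like this in a pod manifest. A minimal illustrative sketch; the pod name and image are placeholders:

```yaml
# Example pod that opts out of GCP credential injection
apiVersion: v1
kind: Pod
metadata:
  name: no-creds-pod        # hypothetical name, for illustration
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```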
Did an additional stop/start and things look like they're working. Not sure what was causing the original issue; perhaps the restart of Docker Desktop changed something?
$ ./minikube stop
✋ Stopping node "minikube" ...
🛑 Powering off "minikube" via SSH ...
🛑 1 nodes stopped.
$ ./minikube start
😄 minikube v1.13.1 on Darwin 10.15.7
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎 Verifying Kubernetes components...
🔎 Verifying gcp-auth addon...
📌 Your GCP credentials will now be mounted into every pod created in the minikube cluster.
📌 If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
🌟 Enabled addons: storage-provisioner, default-storageclass, gcp-auth
🏄 Done! kubectl is now configured to use "minikube" by default
Downloaded a fresh minikube build from master, and gave it another try:
$ curl -Lo minikube https://storage.googleapis.com/minikube-builds/master/minikube-darwin-amd64 && chmod +x minikube
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 55.6M 100 55.6M 0 0 23.1M 0 0:00:02 0:00:02 --:--:-- 23.1M
$ ./minikube version
minikube version: v1.13.1
commit: aae778430915035086fa26a69ee74d29babebbb4
$ ./minikube start
😄 minikube v1.13.1 on Darwin 10.15.7
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🏃 Updating the running docker "minikube" container ...
🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎 Verifying Kubernetes components...
❗ Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
serviceaccount/storage-provisioner unchanged
clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
endpoints/k8s.io-minikube-hostpath unchanged
stderr:
Error from server (InternalError): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"gcp-auth-skip-secret\":\"true\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v3\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n"},"labels":{"gcp-auth-skip-secret":"true"}},"spec":{"$setElementOrder/containers":[{"name":"storage-provisioner"}],"$setElementOrder/volumes":[{"name":"tmp"}],"containers":[{"$setElementOrder/volumeMounts":[{"mountPath":"/tmp"}],"name":"storage-provisioner"}]}}
to:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "storage-provisioner", Namespace: "kube-system"
for: "/etc/kubernetes/addons/storage-provisioner.yaml": Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.114.51:443: connect: connection refused
]
🔎 Verifying gcp-auth addon...
❗ Enabling 'gcp-auth' returned an error: running callbacks: [verifying gcp-auth addon pods : timed out waiting for the condition: timed out waiting for the condition]
🌟 Enabled addons: default-storageclass
🏄 Done! kubectl is now configured to use "minikube" by default
I then stopped minikube, did some cleaning, and tried again:
$ ./minikube stop
✋ Stopping node "minikube" ...
🛑 Powering off "minikube" via SSH ...
🛑 1 nodes stopped.
$ ./minikube delete --all --purge
🔥 Deleting "minikube" in docker ...
🔥 Removing /Users/michihara/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
🔥 Successfully deleted all profiles
💀 Successfully purged minikube directory located at - [/Users/michihara/.minikube]
$ docker system prune --all
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all images without at least one container associated to them
- all build cache
Are you sure you want to continue? [y/N] y
Deleted Containers:
a2b3e0965d9a4d29a7a295df472872604e4eb6d9867b92e4da1e5bd835dfe6f0
0be7db88bfbe0d076763c03597188cc6fa8b6a61941b0fadddbc169947b1538c
08201d6cc3a1d07c36083cb5789256e6bef19bc1c6c0cf464893178150784623
Deleted Images:
untagged: docker/desktop-kubernetes:kubernetes-v1.16.5-cni-v0.7.5-critools-v1.15.0
untagged: docker/desktop-kubernetes@sha256:023b5fbc1f50ef1ba0c6f1c4c994d7242ccaab7f6f3ddf7934ce0517049b9708
deleted: sha256:a86647f0b376be9a76eafa9d3bec4e30e3b3aeadce9d50326e07e748918537ca
deleted: sha256:20ba2a46c5a5e9f01eed477c5da2e7021fdfee85cde6bb220aea6728a60ca00a
untagged: docker/kube-compose-installer:v0.4.25-alpha1
untagged: docker/kube-compose-installer@sha256:b82322c40b240a417fa1ec1cd8030b5a65a3693aafc37a970db07424333d8cce
deleted: sha256:2a71ac5a1359656a5a1f2ac4a3be95238bdba9ac52c6ad062a245d5fde1eae52
deleted: sha256:59cba1c3adf39b06967b5de99c33aa17a67228df7ab39ee6d037d9555ef6e68e
untagged: gcr.io/k8s-minikube/kicbase:v0.0.13-snapshot1
untagged: gcr.io/k8s-minikube/kicbase@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f
deleted: sha256:90f1294ff9acdd24b50195f33fc48f1f4cc50786328bf6cfc095270df92f1c36
deleted: sha256:f1b5bf1489776f0164d979016b523b648d12c1257bfb1500563e052e9a286951
deleted: sha256:130d60ac7a2abcf00058e0d198d7ae3100f64c82770156ea373b120bd57bfdc0
deleted: sha256:ae9fa1c9c0b084125a2855454956bed0985e40ab2ed4a7508e1202ffdc2d4e0b
deleted: sha256:0674401420c89b94a6bb585936d62262cb4138c4c89d0a9026f5e61923985fd8
deleted: sha256:43c3f7071a445154468a5e51f86f9764bf6913f2e94d3f5f8f83dd053f7cee9e
deleted: sha256:0ee6a8dd60f8b4d44c94ebbf57d9ab5c809bbdb924ea83e76a72ad76ea7e0882
deleted: sha256:6624cf78734b1e94e2312bf56322dd6d2b841da1da61009799b1020f1fd93ff5
deleted: sha256:2421d68d542258cccbc66e5d3712535aff9a3e2b0c82a2597839a6ceebb30bf2
deleted: sha256:9a06a306246ba43dde0766d80dca0df27c2d34e78fef1cec9a2c983f3115e190
deleted: sha256:a340aa55b5b1ea1bdd9849477e850cd01b224d80cf0a834bcf86f0ef12610f60
deleted: sha256:250600bbfb9a7de4d79a3ec67776b51d40a1d196ef34965bec21f466be27d766
deleted: sha256:26a7d2f77f7027fec1ec07a98c8cb92818d1e61688de8d0436d780011f7b7b9e
deleted: sha256:ffaece20c2bef05335f9d58980cffb6d3d72469b84b06097bc0097c571886628
deleted: sha256:be4515c372095d88494b48c16a4d735e15cffd7b7bb31912147a87a04b206729
deleted: sha256:e76ef2de99464431aa65b3b5371ad74c85f301ecde5fe3dfbb704dc350fc767e
deleted: sha256:2d9ba679dcf587e9b5b520177f084d6cb6a4d3e95bd550df5a0c8bdf7512d615
deleted: sha256:6ad41196a22310cd591281ca6025869e1ba639e3c3d57133c2571c88ccf8a746
deleted: sha256:d61bbd529b5beab1623ab4703aa0baf06a7a012e8d6ed78b5c809aedcb60a2e4
deleted: sha256:58dfea8eb02315c64b359fed660832f277db5692492ca5a696ae15296fd08b99
deleted: sha256:7aee877aa7d1763ef764b43cb864ee522c5c4ac34ff5f9750ee466d399420982
deleted: sha256:f814e44948c4dbfbf95b480cecbcc948f7fb0e5eb37fa8dcfc9441c669bc8a5b
deleted: sha256:214f32351cbb29bbb99944130af05fbfebf7915cef600e2ed930e96d19f23643
deleted: sha256:c876a46df158e8207a282784ff347a9ee9c0551add9e532c3131da849b989059
deleted: sha256:b666e4b2b2d98d0a1b8ffcc06b9498ec53959b3fa29212d21fe2d1a85e032a05
deleted: sha256:422d4b7c46b6ffc30c36d6e8c23e2a88e03fb7248ac8125e42c9e4225d4279bf
deleted: sha256:279e836b58d9996b5715e82a97b024563f2b175e86a53176846684f0717661c3
deleted: sha256:39865913f677c50ea236b68d81560d8fefe491661ce6e668fd331b4b680b1d47
deleted: sha256:cac81188485e011e56459f1d9fc9936625a1b62cacdb4fcd3526e5f32e280387
deleted: sha256:7789f1a3d4e9258fbe5469a8d657deb6aba168d86967063e9b80ac3e1154333f
Total reclaimed space: 1.115GB
// Restarted docker desktop here
$ ./minikube status
🤷 There is no local cluster named "minikube"
👉 To fix this, run: "minikube start"
$ ./minikube start
😄 minikube v1.13.1 on Darwin 10.15.7
✨ Automatically selected the docker driver
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.19.2 preload ...
> preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 486.36 MiB
🔥 Creating docker container (CPUs=2, Memory=3892MB) ...
🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" by default
$ ./minikube addons enable gcp-auth
🔎 Verifying gcp-auth addon...
📌 Your GCP credentials will now be mounted into every pod created in the minikube cluster.
📌 If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
🌟 The 'gcp-auth' addon is enabled
$ ./minikube addons disable gcp-auth
🌑 "The 'gcp-auth' addon is disabled
$ ./minikube stop
✋ Stopping node "minikube" ...
🛑 Powering off "minikube" via SSH ...
🛑 1 nodes stopped.
$ ./minikube start --addons gcp-auth
😄 minikube v1.13.1 on Darwin 10.15.7
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎 Verifying Kubernetes components...
🔎 Verifying gcp-auth addon...
❗ Enabling 'gcp-auth' returned an error: running callbacks: [verifying gcp-auth addon pods : timed out waiting for the condition: timed out waiting for the condition]
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" by default
Minikube seemed to get stuck again and errored at:
🔎 Verifying gcp-auth addon...
❗ Enabling 'gcp-auth' returned an error: running callbacks: [verifying gcp-auth addon pods : timed out waiting for the condition: timed out waiting for the condition]
Some kubectl output:
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
gcp-auth gcp-auth-5ff8987f65-qh2l5 0/1 ContainerCreating 0 8m42s
kube-system coredns-f9fd979d6-p5n27 1/1 Running 1 10m
kube-system etcd-minikube 1/1 Running 1 11m
kube-system kube-apiserver-minikube 1/1 Running 1 11m
kube-system kube-controller-manager-minikube 1/1 Running 1 11m
kube-system kube-proxy-9frg4 1/1 Running 1 10m
kube-system kube-scheduler-minikube 1/1 Running 1 11m
kube-system storage-provisioner 1/1 Running 2 11m
$ kubectl describe deploy gcp-auth -n gcp-auth
Name: gcp-auth
Namespace: gcp-auth
CreationTimestamp: Tue, 06 Oct 2020 11:02:14 -0700
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=gcp-auth
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=gcp-auth
gcp-auth-skip-secret=true
kubernetes.io/minikube-addons=gcp-auth
Containers:
gcp-auth:
Image: gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.3
Port: 8443/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/etc/webhook/certs from webhook-certs (ro)
/var/lib/minikube/google_cloud_project from gcp-project (ro)
Volumes:
webhook-certs:
Type: Secret (a volume populated by a Secret)
SecretName: gcp-auth-certs
Optional: false
gcp-project:
Type: HostPath (bare host directory volume)
Path: /var/lib/minikube/google_cloud_project
HostPathType: File
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: <none>
NewReplicaSet: gcp-auth-5ff8987f65 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 9m42s deployment-controller Scaled up replica set gcp-auth-5ff8987f65 to 1
$ kubectl get event -A | grep gcp-auth
gcp-auth 21m Normal Scheduled pod/gcp-auth-5ff8987f65-qh2l5 Successfully assigned gcp-auth/gcp-auth-5ff8987f65-qh2l5 to minikube
gcp-auth 39s Warning FailedMount pod/gcp-auth-5ff8987f65-qh2l5 MountVolume.SetUp failed for volume "webhook-certs" : secret "gcp-auth-certs" not found
gcp-auth 19m Warning FailedMount pod/gcp-auth-5ff8987f65-qh2l5 Unable to attach or mount volumes: unmounted volumes=[webhook-certs], unattached volumes=[default-token-hcxk7 webhook-certs gcp-project]: timed out waiting for the condition
gcp-auth 62s Warning FailedMount pod/gcp-auth-5ff8987f65-qh2l5 Unable to attach or mount volumes: unmounted volumes=[webhook-certs], unattached volumes=[webhook-certs gcp-project default-token-hcxk7]: timed out waiting for the condition
gcp-auth 3m16s Warning FailedMount pod/gcp-auth-5ff8987f65-qh2l5 Unable to attach or mount volumes: unmounted volumes=[webhook-certs], unattached volumes=[gcp-project default-token-hcxk7 webhook-certs]: timed out waiting for the condition
gcp-auth 21m Normal SuccessfulCreate replicaset/gcp-auth-5ff8987f65 Created pod: gcp-auth-5ff8987f65-qh2l5
gcp-auth 4m27s Warning FailedCreate job/gcp-auth-certs-create Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused
gcp-auth 4m27s Warning FailedCreate job/gcp-auth-certs-patch Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused
gcp-auth 21m Normal ScalingReplicaSet deployment/gcp-auth Scaled up replica set gcp-auth-5ff8987f65 to 1
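These events show a circular failure: the gcp-auth pod can't start because the gcp-auth-certs secret is missing, while the certs-create and certs-patch jobs that would create that secret can't schedule pods because the mutating webhook (served by the not-yet-running gcp-auth pod) refuses connections. One possible way to break the cycle, assuming the webhook configuration is named gcp-auth-webhook-cfg as in the patch job's args above, is:

```shell
# Remove the stuck webhook so pod creation is no longer intercepted
# (assumes the name gcp-auth-webhook-cfg from the certs-patch job args)
kubectl delete mutatingwebhookconfiguration gcp-auth-webhook-cfg

# Re-enable the addon, which re-runs the certs jobs and recreates the webhook
minikube addons disable gcp-auth
minikube addons enable gcp-auth
```

This is a workaround sketch, not a fix for the underlying ordering problem on restart.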
I'm hoping #9406 will fix this issue.
@sharifelgamal Ran a bunch of different combinations of stopping/starting minikube with gcp-auth and haven't seen the issue anywhere. I think it fixes it for me!
I'm running a build with the fix for https://github.com/kubernetes/minikube/issues/9371, but still encountering this error: