kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Minikube/gcp-auth sometimes does not set GCP project environment variables #11109

Closed: matthewmichihara closed this issue 3 years ago

matthewmichihara commented 3 years ago

This came up while investigating https://stackoverflow.com/questions/67083909/project-env-vars-like-gcp-project-cloudsdk-core-project-are-no-longer-set-whe?noredirect=1#comment118606601_67083909

The user is running Cloud Code for IntelliJ's Cloud Run local development feature, which uses minikube internally to run the Cloud Run app locally. Occasionally the GCP project environment variables don't get set, even though the gcp-auth addon reports that it was enabled successfully.

Environment variables that sometimes don't get set include the project variables named in the linked question, e.g. GCP_PROJECT and CLOUDSDK_CORE_PROJECT.
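
A rough way to confirm the symptom is to check the environment of a freshly created pod after the addon is enabled (a sketch only: the deployment name my-app is a placeholder, and it assumes the addon's webhook is what injects GOOGLE_APPLICATION_CREDENTIALS along with the project variables):

```
# Enable the addon; its webhook should inject the credentials and project
# variables into pods created afterwards.
minikube addons enable gcp-auth

# Recreate the workload's pods so they pass through the webhook
# ("my-app" is a placeholder deployment name).
kubectl rollout restart deployment/my-app

# Look for the injected variables in the new pod's environment.
kubectl exec deploy/my-app -- env | grep -E 'GCP_PROJECT|CLOUDSDK_CORE_PROJECT|GOOGLE_APPLICATION_CREDENTIALS'
```

When the bug hits, the grep comes back empty even though enabling the addon reported success.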

Once in this state, the environment variables never seem to get set again. Running `minikube delete --all --purge` does seem to fix the issue, though.
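
For reference, the workaround expressed as a command sequence (a sketch: the profile name is taken from the logs below, and Cloud Code would normally perform the start and addon steps itself on the next local run):

```
# Tear down every minikube profile and purge cached state.
minikube delete --all --purge

# Cloud Code recreates its profile (cloud-run-dev-internal in the logs below)
# and re-enables gcp-auth on the next run; done manually it would be roughly:
minikube start -p cloud-run-dev-internal
minikube addons enable gcp-auth -p cloud-run-dev-internal
```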

Optional: Full output of `minikube logs` command:

``` ==> Docker <== -- Logs begin at Tue 2021-04-13 18:04:00 UTC, end at Thu 2021-04-15 22:13:21 UTC. -- Apr 15 18:58:52 cloud-run-dev-internal dockerd[214]: time="2021-04-15T18:58:52.587589900Z" level=info msg="ignoring event" container=7105ac706db53cfd797ba40997cf3ef3f7f0c48479673fbf8d60a6252b665ff4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 18:59:20 cloud-run-dev-internal dockerd[214]: time="2021-04-15T18:59:20.615571600Z" level=info msg="ignoring event" container=306dcf8f58a49bb20ea4778bccb9499404d96651e7ed24b87f2dd5bec5ef7485 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 19:00:15 cloud-run-dev-internal dockerd[214]: time="2021-04-15T19:00:15.741031600Z" level=info msg="ignoring event" container=93fb72b812ecd9b759e027417c9ac02be331da00c1acbfc4b77220ed557eeda7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 19:00:20 cloud-run-dev-internal dockerd[214]: time="2021-04-15T19:00:20.811287900Z" level=info msg="ignoring event" container=9fc271881548adbf3e7ed2a39f461b8bffab95a087a5368125269637c34f7a21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 19:00:22 cloud-run-dev-internal dockerd[214]: time="2021-04-15T19:00:22.798261100Z" level=info msg="ignoring event" container=3152afb2c6882487612f6eb0b26317b4e2c5bae1eb277915c7069872feacccc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 19:00:22 cloud-run-dev-internal dockerd[214]: time="2021-04-15T19:00:22.888462700Z" level=info msg="ignoring event" container=b1aace4bc4a0199437a97fd1eff6ab520de8cf3287b7451f474beaec6ef6588e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:34:08 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:34:08.860651500Z" level=info msg="ignoring event" container=4e0298905f457cacb6e6c64cfd6fa56b8a2f85ebcb67b4883abb062dde838ed5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:34:49 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:34:49.704961100Z" level=info msg="ignoring event" container=75daf484ca1c995d107d07baf76d9f1d537b82e29d08c2163402eae35ba77c49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:34:49 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:34:49.760895000Z" level=info msg="ignoring event" container=17c63c7a0ec097a63a0a5d4263f4f54bc8c701ed02bc2a27486f33429e03f958 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:34:50 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:34:50.514617900Z" level=info msg="ignoring event" container=16a8bc20bb09f936d5d81a2f0ff75d748fe8193b0ca4ee78c1e14c26ff09fe7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:34:50 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:34:50.737675100Z" level=info msg="ignoring event" container=25e0bc49951b6c0810ee1ca67880d74e3c2b443188b5d54dd562ebe2f23758fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:34:51 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:34:51.494898300Z" level=info msg="ignoring event" container=bd021dbbed0a6f621503f2cdc38deec9853b20800b72f03d7a39a2d2202ba8da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:35:03 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:35:03.449654700Z" 
level=info msg="ignoring event" container=5a22cd4b890f2adb11e3bfa9e2f2091311821273dbc47ccb636651a95decfd05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:35:03 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:35:03.778833400Z" level=info msg="Layer sha256:b94bc03a0ca50bda6a4108d76c6765b0e6369b794d53d38dc0460f593876d46a cleaned up" Apr 15 20:35:03 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:35:03.916380100Z" level=info msg="Layer sha256:b94bc03a0ca50bda6a4108d76c6765b0e6369b794d53d38dc0460f593876d46a cleaned up" Apr 15 20:35:26 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:35:26.771984500Z" level=info msg="Container 9a18dbaa26faaeeaf58d966e623739c54bbf977965b6dea19c262be2d2852cdc failed to exit within 2 seconds of signal 15 - using the force" Apr 15 20:35:26 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:35:26.928476200Z" level=info msg="Container 9a18dbaa26faaeeaf58d966e623739c54bbf977965b6dea19c262be2d2852cdc failed to exit within 2 seconds of signal 15 - using the force" Apr 15 20:35:27 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:35:27.384969900Z" level=info msg="ignoring event" container=9a18dbaa26faaeeaf58d966e623739c54bbf977965b6dea19c262be2d2852cdc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:35:27 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:35:27.783177700Z" level=info msg="ignoring event" container=8728661c14846caa94f49420149b6251cf8d9e176463429c7a5e27f4d327906d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:35:30 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:35:30.294779900Z" level=info msg="ignoring event" container=cd05e4d7b3d52696845023fdf8c5b26676347945e0c41b95f3c5dbd0001b2796 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:35:30 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:35:30.585950900Z" level=info msg="ignoring event" container=40da6859b8120b53b7f629eedc95415f8a3d2cef6996da5afc5d0ea79f7379f5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:48:17 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:48:17.517240000Z" level=info msg="ignoring event" container=0882f877f4b9f1d630eba557301f9e94af9d6576fb638f11b05d6e30927c84e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:48:17 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:48:17.683953500Z" level=info msg="ignoring event" container=dd71b98040f45d25b8a4f4c1d7ce771c88d5694f9ee9804e5734503ebb3d8955 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:48:17 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:48:17.891135200Z" level=info msg="ignoring event" container=9527e2122d89b7b56a0da02efe9f13fea33266aaafc203acfb8f07f6d33024f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:48:18 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:48:18.105270000Z" level=info msg="ignoring event" container=9240cb952db9aa3b8b42401bb59ef029b2466eaaecf8c3ac3a9d7ca7488e9cc9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:48:18 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:48:18.958823000Z" level=info msg="ignoring event" container=f61701c332a8f377b8ce5831270c560c879793e64007a001de8a0443d4f0a85b module=libcontainerd namespace=moby 
topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:48:30 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:48:30.358063900Z" level=info msg="ignoring event" container=cb1f19c2c430e5724d7667b153d94b4c249515e33a8df4fdb9580d314be5aca9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:48:30 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:48:30.611299600Z" level=info msg="Layer sha256:3374642baf3a289ec632a0fca078ea808f0566944c4627dde396b10c5299f4ff cleaned up" Apr 15 20:48:30 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:48:30.755485900Z" level=info msg="Layer sha256:3374642baf3a289ec632a0fca078ea808f0566944c4627dde396b10c5299f4ff cleaned up" Apr 15 20:48:37 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:48:37.037923000Z" level=info msg="ignoring event" container=87cdd6d0b316690fc14ad7da079fa00d00aff392382e6e3433294be923f2ad19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:52:16 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:52:16.950853000Z" level=info msg="ignoring event" container=65f62330de8f0fe5e9907c8413839b62ece53f0e890ad1367dbc81765da994d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:52:17 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:52:17.033808400Z" level=info msg="Container ec005e5ff2957750d061d37ce1410a7746b087eded2eb3c1fa415dcf13a1b519 failed to exit within 2 seconds of signal 15 - using the force" Apr 15 20:52:17 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:52:17.039558000Z" level=info msg="ignoring event" container=20e62a537ce4064f6e665b2079c8fe37855fca6df343f004e954c95d9cf5fd7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:52:17 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:52:17.089242500Z" level=info msg="Container ec005e5ff2957750d061d37ce1410a7746b087eded2eb3c1fa415dcf13a1b519 failed to exit within 2 seconds of signal 15 - using the force" Apr 15 20:52:17 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:52:17.142032700Z" level=info msg="ignoring event" container=ec005e5ff2957750d061d37ce1410a7746b087eded2eb3c1fa415dcf13a1b519 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 20:52:17 cloud-run-dev-internal dockerd[214]: time="2021-04-15T20:52:17.262157200Z" level=info msg="ignoring event" container=88b30abd45bb49babcc8c474f746df7e525c7f6113f58eccc37efa79c0be41f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:00:51 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:00:51.993384200Z" level=info msg="ignoring event" container=09310f49a8629db506ebea08043494c106fc7fbca100c634755f70eb2d8e87f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:01:20 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:01:20.933180000Z" level=info msg="ignoring event" container=81e06dd36bbf9869d808487c483e908de20f9051b35ce820a788ecdd371962c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:01:20 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:01:20.956022700Z" level=info msg="ignoring event" container=9824ae1496ce9319a00d82b19f8f8c83cc1d48682cdacbf644b385896074752c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:01:21 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:01:21.843486500Z" level=info 
msg="ignoring event" container=f8f6841acf2a69bccb141cdb37d840eed3c075a19b0e1d0f2d23c05013821648 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:01:21 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:01:21.861292300Z" level=info msg="ignoring event" container=b4bffb94569412668d091fce007105424a0660b6df1e3183b542ed53cee97262 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:01:35 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:01:35.338845800Z" level=info msg="ignoring event" container=abf8234fbfc555238bde08e0b38272b4453ab5cf03c8e0a9d6144aa96cadb470 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:01:35 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:01:35.629535500Z" level=info msg="Layer sha256:f719bc266737e07992625edd4c96a73b300d106343994f95d420e41d918b99cf cleaned up" Apr 15 22:01:35 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:01:35.749100300Z" level=info msg="Layer sha256:f719bc266737e07992625edd4c96a73b300d106343994f95d420e41d918b99cf cleaned up" Apr 15 22:02:08 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:02:08.721045400Z" level=info msg="ignoring event" container=833f4beed71b6f6689da1548ee383cab36c9fb362f5d45feaccd7a134f4be495 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:02:08 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:02:08.843598200Z" level=info msg="ignoring event" container=d3e8a69286e1536819b4e175e5064a18d471bdac2f5c5a4b945767d0221922bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:02:10 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:02:10.503101700Z" level=info msg="ignoring event" container=ebd308e5e14bed7daec9c24653e3951c271fa29a8ba83daf2a884779e3d3f07c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:02:10 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:02:10.553108900Z" level=info msg="ignoring event" container=74a85103c3a56a7d582f9dc0575d910004122537e32f879bd502535218d99c6f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:04:32 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:04:32.540760400Z" level=info msg="ignoring event" container=b8a1ba8b10517253351c57acbfb8f20b0a8cfaf568c32c9f45b3c136b2b4872c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:04:32 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:04:32.583076800Z" level=info msg="ignoring event" container=e7e051afcb3af63a2ccda271ffb053ef44350b85c75d2f669eaae141954ddce5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:04:33 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:04:33.395110900Z" level=info msg="ignoring event" container=13174026e338cff6a6e54bd53dabc8f10642af4435c0d60edcc68fdf0e0c66ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:04:33 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:04:33.706285600Z" level=info msg="ignoring event" container=5925ff7e92ecee6a4027a5b4ed4d9eb85d6409fcd023db0e38af146e3bcc5101 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:04:34 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:04:34.744735900Z" level=info msg="ignoring event" 
container=528c84dec749d163a9546ae644ec5cae46dfc7f7bf2537bbcb5b827f894ae437 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:04:44 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:04:44.870018900Z" level=info msg="ignoring event" container=8ae16c1947ebced4e1128da230bfde7b050ba8690ec22a2072d77cbe1ff83e78 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:04:48 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:04:48.250258300Z" level=info msg="ignoring event" container=4b7ac91211fc97dacf5a3fe640f9f459d7ec9a6e7a93c40a200e13e5efabba9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:05:03 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:05:03.141138700Z" level=info msg="ignoring event" container=ef8611f71d49359cf47fe06b94c598f0558fcd76cfea89c08e582364c9397d18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:05:35 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:05:35.612929800Z" level=info msg="ignoring event" container=2dc3e5457a66fe7464a366b526d1b9614d85d22c5163ad159a7a76e7f852aa1c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:06:21 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:06:21.016744800Z" level=info msg="ignoring event" container=79c694c76e42c738e3a5dd748a32f3d31fb1c56eb905acfd27ebcda2736ac177 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:07:52 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:07:52.461546000Z" level=info msg="ignoring event" container=a846e7f545498ba6426192ad62a8cb8b872a84b529b5395b8b34f8b6375303ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Apr 15 22:10:40 cloud-run-dev-internal dockerd[214]: time="2021-04-15T22:10:40.204340700Z" level=info msg="ignoring event" container=c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" ==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID c51064c2bb918 7c25d9b501270 2 minutes ago Exited google-apis-container 6 07d5678ad3b20 07d32ec56ffb5 302c49f0efc65 8 minutes ago Running gcp-auth 0 abd711ff19723 5925ff7e92ece 4d4f44df9f905 8 minutes ago Exited patch 1 528c84dec749d e7e051afcb3af 4d4f44df9f905 8 minutes ago Exited create 0 13174026e338c 4c06bd35c4968 85069258b98ac 12 minutes ago Running storage-provisioner 11 726c18de0ad18 09310f49a8629 85069258b98ac 2 hours ago Exited storage-provisioner 10 726c18de0ad18 70d5a810cfb1c bfe3a36ebd252 5 hours ago Running coredns 2 5630bc792442f faac686fc1d83 43154ddb57a83 5 hours ago Running kube-proxy 2 e5c19ce7089cc 506ebed562dc6 a27166429d98e 5 hours ago Running kube-controller-manager 3 7d9f147c67d20 71a07f2bb9c44 a8c2fdb8bf76e 5 hours ago Running kube-apiserver 2 f8029088896b2 23d4f3481b908 ed2c44fbdd78b 5 hours ago Running kube-scheduler 2 6fe121953bd8a 6640c6ec628a6 0369cf4303ffd 5 hours ago Running etcd 2 cfb3c196674d8 580db1120b10f a27166429d98e 5 hours ago Exited kube-controller-manager 2 7d9f147c67d20 8c4d7dea61b8b bfe3a36ebd252 2 days ago Exited coredns 1 c219c3e599263 c492275438f4c 43154ddb57a83 2 days ago Exited kube-proxy 1 f61dbb35d8491 0f95485cf7e37 ed2c44fbdd78b 2 days ago Exited kube-scheduler 1 616310429e8af 8e934d1e76c8f a8c2fdb8bf76e 2 days ago Exited kube-apiserver 1 e41d523a084e9 a7e27c226fbf5 0369cf4303ffd 2 days ago 
Exited etcd 1 7bc9dfa5a6462 ==> coredns [70d5a810cfb1] <== .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d ==> coredns [8c4d7dea61b8] <== .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d [INFO] plugin/ready: Still waiting on: "kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes" [INFO] SIGTERM: Shutting down servers then terminating [INFO] plugin/health: Going into lameduck mode for 5s E0413 18:04:46.450512 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0413 18:04:46.451850 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0413 18:04:46.452390 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0413 18:04:47.607576 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0413 18:04:47.787604 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0413 18:04:48.008668 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0413 18:04:49.718158 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0413 18:04:49.871212 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0413 18:04:50.469352 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0413 18:04:53.385228 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0413 18:04:53.403602 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0413 18:04:54.635937 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: 
Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0413 18:05:02.412714 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E0415 17:27:41.071499 1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=3583&timeout=5m23s&timeoutSeconds=323&watch=true": dial tcp 10.96.0.1:443: connect: connection refused E0415 17:27:41.071734 1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=3525&timeout=9m52s&timeoutSeconds=592&watch=true": dial tcp 10.96.0.1:443: connect: connection refused E0415 17:27:41.082559 1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=3585&timeout=8m2s&timeoutSeconds=482&watch=true": dial tcp 10.96.0.1:443: connect: connection refused ==> describe nodes <== Name: cloud-run-dev-internal Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=cloud-run-dev-internal kubernetes.io/os=linux minikube.k8s.io/commit=09ee84d530de4a92f00f1c5dbc34cead092b95bc minikube.k8s.io/name=cloud-run-dev-internal minikube.k8s.io/updated_at=2021_04_09T13_40_00_0700 minikube.k8s.io/version=v1.18.1 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Fri, 09 Apr 2021 17:39:57 +0000 Taints: Unschedulable: false Lease: HolderIdentity: cloud-run-dev-internal AcquireTime: RenewTime: Thu, 15 Apr 2021 22:13:20 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Thu, 15 Apr 2021 22:09:42 +0000 Thu, 15 Apr 2021 20:48:15 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Thu, 15 Apr 2021 22:09:42 +0000 Thu, 15 Apr 2021 20:48:15 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Thu, 15 Apr 2021 22:09:42 +0000 Thu, 15 Apr 2021 20:48:15 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Thu, 15 Apr 2021 22:09:42 +0000 Thu, 15 Apr 2021 22:04:40 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.49.2 Hostname: cloud-run-dev-internal Capacity: cpu: 4 ephemeral-storage: 61255492Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 4030792Ki pods: 110 Allocatable: cpu: 4 ephemeral-storage: 61255492Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 4030792Ki pods: 110 System Info: Machine ID: 84fb46bd39d2483a97ab4430ee4a5e3a System UUID: 52880e1e-1516-4062-aa18-93175732a326 Boot ID: 820c0289-24b4-49b3-a79b-be245eedbb17 Kernel Version: 5.10.25-linuxkit OS Image: Ubuntu 20.04.1 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.3 Kubelet Version: v1.20.2 Kube-Proxy Version: v1.20.2 PodCIDR: 
10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (9 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- default google-apis-787c97d877-4chzv 0 (0%) 0 (0%) 268435456 (6%) 268435456 (6%) 8m48s gcp-auth gcp-auth-555897b58d-5f8ch 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m1s kube-system coredns-74ff55c5b-tgnqg 100m (2%) 0 (0%) 70Mi (1%) 170Mi (4%) 6d4h kube-system etcd-cloud-run-dev-internal 100m (2%) 0 (0%) 100Mi (2%) 0 (0%) 6d4h kube-system kube-apiserver-cloud-run-dev-internal 250m (6%) 0 (0%) 0 (0%) 0 (0%) 6d4h kube-system kube-controller-manager-cloud-run-dev-internal 200m (5%) 0 (0%) 0 (0%) 0 (0%) 6d4h kube-system kube-proxy-lchj5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d4h kube-system kube-scheduler-cloud-run-dev-internal 100m (2%) 0 (0%) 0 (0%) 0 (0%) 6d4h kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d4h Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 750m (18%) 0 (0%) memory 446693376 (10%) 446693376 (10%) ephemeral-storage 100Mi (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 12m kubelet Starting kubelet. Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 12m (x8 over 12m) kubelet Node cloud-run-dev-internal status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 12m (x8 over 12m) kubelet Node cloud-run-dev-internal status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 12m (x7 over 12m) kubelet Node cloud-run-dev-internal status is now: NodeHasSufficientPID Normal Starting 8m58s kubelet Starting kubelet. 
Normal NodeHasSufficientMemory 8m57s kubelet Node cloud-run-dev-internal status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 8m57s kubelet Node cloud-run-dev-internal status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 8m57s kubelet Node cloud-run-dev-internal status is now: NodeHasSufficientPID Normal NodeNotReady 8m57s kubelet Node cloud-run-dev-internal status is now: NodeNotReady Normal NodeAllocatableEnforced 8m57s kubelet Updated Node Allocatable limit across pods Normal NodeReady 8m47s kubelet Node cloud-run-dev-internal status is now: NodeReady ==> dmesg <== [ +0.035068] bpfilter: write fail -32 [ +0.038603] bpfilter: write fail -32 [ +0.035149] bpfilter: read fail 0 [ +0.021090] bpfilter: write fail -32 [ +0.026595] bpfilter: write fail -32 [ +0.030434] bpfilter: write fail -32 [ +0.032057] bpfilter: write fail -32 [ +0.027466] bpfilter: read fail 0 [ +0.033660] bpfilter: write fail -32 [ +0.041617] bpfilter: write fail -32 [ +0.042115] bpfilter: write fail -32 [Apr15 22:11] bpfilter: write fail -32 [ +0.024991] bpfilter: write fail -32 [ +1.273989] bpfilter: read fail 0 [ +0.038840] bpfilter: write fail -32 [ +0.033267] bpfilter: read fail 0 [ +0.024482] bpfilter: read fail 0 [ +9.383496] bpfilter: read fail 0 [ +0.026597] bpfilter: write fail -32 [ +0.042756] bpfilter: read fail 0 [ +0.031011] bpfilter: read fail 0 [ +5.565478] bpfilter: read fail 0 [ +0.038733] bpfilter: read fail 0 [ +0.033379] bpfilter: write fail -32 [ +13.485866] bpfilter: read fail 0 [ +0.032348] bpfilter: write fail -32 [ +0.042852] bpfilter: write fail -32 [ +1.221054] bpfilter: write fail -32 [ +0.038534] bpfilter: write fail -32 [Apr15 22:12] bpfilter: read fail 0 [ +0.035179] bpfilter: write fail -32 [ +0.033668] bpfilter: write fail -32 [ +1.229495] bpfilter: write fail -32 [ +0.036275] bpfilter: write fail -32 [ +9.442641] bpfilter: read fail 0 [ +0.025967] bpfilter: write fail -32 [ +0.037771] bpfilter: read fail 0 [ +0.030756] bpfilter: write fail -32 [ +5.572018] bpfilter: read fail 0 [ +0.022055] bpfilter: read fail 0 [ +0.036555] bpfilter: write fail -32 [ +13.497160] bpfilter: write fail -32 [ +0.023869] bpfilter: read fail 0 [ +0.032128] bpfilter: read fail 0 [ +1.239379] bpfilter: read fail 0 [ +0.034242] bpfilter: read fail 0 [ +0.026263] bpfilter: read fail 0 [ +0.032284] bpfilter: write fail -32 [Apr15 22:13] bpfilter: read fail 0 [ +0.032087] bpfilter: read fail 0 [ +0.041682] bpfilter: read fail 0 [ +0.019060] bpfilter: read fail 0 [ +1.204798] bpfilter: write fail -32 [ +0.027298] bpfilter: read fail 0 [ +0.034545] bpfilter: read fail 0 [ +9.420852] bpfilter: write fail -32 [ +0.031776] bpfilter: write fail -32 [ +5.638321] bpfilter: write fail -32 [ +0.037500] bpfilter: read fail 0 [ +0.035519] bpfilter: write fail -32 ==> etcd [6640c6ec628a] <== 2021-04-15 22:02:04.129467 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:02:14.094591 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:04:38.983815 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:04:48.391343 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:04:58.390566 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:05:08.390405 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:05:18.358660 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:05:28.357458 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:05:38.356798 I | 
etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:05:41.799259 I | mvcc: store.index: compact 8654 2021-04-15 22:05:41.800307 I | mvcc: finished scheduled compaction at 8654 (took 783µs) 2021-04-15 22:05:48.323627 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:05:58.323083 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:06:08.324342 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:06:18.289641 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:06:28.289189 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:06:38.289340 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:06:48.256301 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:06:58.255365 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:07:08.265304 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:07:18.223072 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:07:28.222563 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:07:38.221485 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:07:48.188701 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:07:58.187348 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:08:08.186855 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:08:18.155950 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:08:28.153676 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:08:38.154126 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:08:48.120725 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:08:58.119704 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:09:08.122192 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:09:18.086868 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:09:28.086310 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:09:38.086070 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:09:48.051963 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:09:58.053682 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:10:08.054411 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:10:18.017866 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:10:28.018553 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:10:38.021645 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:10:41.469035 I | mvcc: store.index: compact 9135 2021-04-15 22:10:41.470374 I | mvcc: finished scheduled compaction at 9135 (took 646.1µs) 2021-04-15 22:10:47.984251 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:10:57.984475 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:11:07.985537 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:11:17.952021 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:11:27.951924 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:11:37.951693 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:11:47.917678 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:11:57.916227 I | 
etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:12:07.918128 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:12:17.885796 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:12:27.882281 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:12:37.882256 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:12:47.849466 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:12:57.848655 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:13:07.848787 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:13:17.815140 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 22:13:27.822137 I | etcdserver/api/etcdhttp: /health OK (status code 200) ==> etcd [a7e27c226fbf] <== 2021-04-13 20:55:23.056303 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-13 20:55:33.056621 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-13 20:55:43.056850 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:21:24.293676 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true " with result "range_response_count:0 size:7" took too long (156.1821ms) to execute 2021-04-15 17:21:24.328169 W | etcdserver: read-only range request "key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" count_only:true " with result "range_response_count:0 size:7" took too long (107.3799ms) to execute 2021-04-15 17:21:24.329082 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:5" took too long (108.4377ms) to execute 2021-04-15 17:21:24.419069 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true " with result "range_response_count:0 size:7" took too long (109.639ms) to execute 2021-04-15 17:21:24.419767 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true " with result "range_response_count:0 size:5" took too long (110.4433ms) to execute 2021-04-15 17:21:24.532505 W | etcdserver: read-only range request "key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" count_only:true " with result "range_response_count:0 size:5" took too long (210.7563ms) to execute 2021-04-15 17:21:24.635082 I | mvcc: store.index: compact 3092 2021-04-15 17:21:24.703337 I | mvcc: finished scheduled compaction at 3092 (took 67.9091ms) 2021-04-15 17:21:24.711821 W | etcdserver: read-only range request "key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true " with result "range_response_count:0 size:7" took too long (100.2831ms) to execute 2021-04-15 17:21:24.729557 W | etcdserver: read-only range request "key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true " with result "range_response_count:0 size:5" took too long (108.132ms) to execute 2021-04-15 17:21:24.729970 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true " with result "range_response_count:0 size:5" took too long (108.6583ms) to execute 2021-04-15 17:21:24.730125 W | etcdserver: read-only range request 
"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true " with result "range_response_count:0 size:5" took too long (116.9084ms) to execute 2021-04-15 17:21:24.730348 W | etcdserver: read-only range request "key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true " with result "range_response_count:0 size:7" took too long (119.2897ms) to execute 2021-04-15 17:21:24.730572 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true " with result "range_response_count:0 size:7" took too long (119.6995ms) to execute 2021-04-15 17:21:24.730889 W | etcdserver: read-only range request "key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true " with result "range_response_count:0 size:7" took too long (173.1731ms) to execute 2021-04-15 17:21:24.733113 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true " with result "range_response_count:0 size:5" took too long (113.0336ms) to execute 2021-04-15 17:21:39.064706 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:21:48.509885 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:21:58.476340 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:22:08.475756 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:22:18.476172 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:22:28.448553 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:22:38.443283 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:22:48.442364 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:22:58.408813 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:23:08.407384 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:23:18.408993 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:23:28.425735 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:23:38.372773 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:23:48.425849 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:23:58.340327 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:24:08.339829 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:24:18.338087 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:24:28.303886 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:24:38.303660 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:24:48.304244 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:24:58.270238 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:25:08.269611 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:25:18.276358 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:25:28.237014 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:25:38.235312 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:25:48.235278 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:25:58.203961 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:26:08.200924 I | etcdserver/api/etcdhttp: /health OK (status code 200) 
2021-04-15 17:26:18.201508 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:26:24.372000 I | mvcc: store.index: compact 3595 2021-04-15 17:26:24.375310 I | mvcc: finished scheduled compaction at 3595 (took 2.9774ms) 2021-04-15 17:26:28.166807 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:26:38.166946 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:26:48.167375 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:26:58.132568 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:27:08.133107 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:27:18.132362 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:27:28.097924 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:27:38.098057 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-04-15 17:27:41.469951 N | pkg/osutil: received terminated signal, shutting down... 2021-04-15 17:27:41.555499 I | etcdserver: skipped leadership transfer for single voting member cluster ==> kernel <== 22:13:43 up 3 days, 7:07, 0 users, load average: 0.63, 0.97, 0.88 Linux cloud-run-dev-internal 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.1 LTS" ==> kube-apiserver [71a07f2bb9c4] <== I0415 22:04:49.155481 1 trace.go:205] Trace[417527289]: "Get" url:/api/v1/namespaces/default/pods/google-apis-787c97d877-4chzv/log,user-agent:kubectl/v1.18.0 (darwin/amd64) kubernetes/9e99141,client:192.168.49.1 (15-Apr-2021 22:04:47.059) (total time: 2096ms): Trace[417527289]: ---"Transformed response object" 2091ms (22:04:00.155) Trace[417527289]: [2.0963551s] [2.0963551s] END I0415 22:05:04.055428 1 trace.go:205] Trace[1288858217]: "Get" url:/api/v1/namespaces/default/pods/google-apis-787c97d877-4chzv/log,user-agent:kubectl/v1.18.0 (darwin/amd64) kubernetes/9e99141,client:192.168.49.1 (15-Apr-2021 22:05:00.280) (total time: 3774ms): Trace[1288858217]: ---"Transformed response object" 3770ms (22:05:00.055) Trace[1288858217]: [3.7749891s] [3.7749891s] END I0415 22:05:04.593434 1 client.go:360] parsed scheme: "passthrough" I0415 22:05:04.593504 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0415 22:05:04.593527 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0415 22:05:36.535351 1 trace.go:205] Trace[504253724]: "Get" url:/api/v1/namespaces/default/pods/google-apis-787c97d877-4chzv/log,user-agent:kubectl/v1.18.0 (darwin/amd64) kubernetes/9e99141,client:192.168.49.1 (15-Apr-2021 22:05:30.998) (total time: 5536ms): Trace[504253724]: ---"Transformed response object" 5479ms (22:05:00.535) Trace[504253724]: [5.5364502s] [5.5364502s] END I0415 22:05:41.652773 1 client.go:360] parsed scheme: "passthrough" I0415 22:05:41.652828 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0415 22:05:41.652844 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0415 22:06:15.293398 1 client.go:360] parsed scheme: "passthrough" I0415 22:06:15.293484 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0415 22:06:15.293513 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0415 22:06:21.907494 1 trace.go:205] Trace[1159692341]: "Get" url:/api/v1/namespaces/default/pods/google-apis-787c97d877-4chzv/log,user-agent:kubectl/v1.18.0 (darwin/amd64) 
kubernetes/9e99141,client:192.168.49.1 (15-Apr-2021 22:06:19.367) (total time: 2540ms): Trace[1159692341]: ---"Transformed response object" 2531ms (22:06:00.907) Trace[1159692341]: [2.5402346s] [2.5402346s] END I0415 22:06:54.391848 1 client.go:360] parsed scheme: "passthrough" I0415 22:06:54.392049 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0415 22:06:54.392108 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0415 22:07:32.708252 1 client.go:360] parsed scheme: "passthrough" I0415 22:07:32.708360 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0415 22:07:32.708380 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0415 22:07:53.378612 1 trace.go:205] Trace[2091356677]: "Get" url:/api/v1/namespaces/default/pods/google-apis-787c97d877-4chzv/log,user-agent:kubectl/v1.18.0 (darwin/amd64) kubernetes/9e99141,client:192.168.49.1 (15-Apr-2021 22:07:51.456) (total time: 1922ms): Trace[2091356677]: ---"Transformed response object" 1919ms (22:07:00.378) Trace[2091356677]: [1.9221743s] [1.9221743s] END I0415 22:08:09.616132 1 client.go:360] parsed scheme: "passthrough" I0415 22:08:09.616225 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0415 22:08:09.616310 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0415 22:08:52.914265 1 client.go:360] parsed scheme: "passthrough" I0415 22:08:52.914361 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0415 22:08:52.914391 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0415 22:09:26.633144 1 client.go:360] parsed scheme: "passthrough" I0415 22:09:26.633229 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0415 22:09:26.633336 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0415 22:10:09.473993 1 client.go:360] parsed scheme: "passthrough" I0415 22:10:09.474330 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0415 22:10:09.474395 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0415 22:10:41.123570 1 trace.go:205] Trace[1648434687]: "Get" url:/api/v1/namespaces/default/pods/google-apis-787c97d877-4chzv/log,user-agent:kubectl/v1.18.0 (darwin/amd64) kubernetes/9e99141,client:192.168.49.1 (15-Apr-2021 22:10:39.579) (total time: 1578ms): Trace[1648434687]: ---"Transformed response object" 1572ms (22:10:00.123) Trace[1648434687]: [1.5781864s] [1.5781864s] END I0415 22:10:54.204811 1 client.go:360] parsed scheme: "passthrough" I0415 22:10:54.204887 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0415 22:10:54.204905 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0415 22:11:29.067641 1 client.go:360] parsed scheme: "passthrough" I0415 22:11:29.067776 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0415 22:11:29.067840 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0415 22:12:09.339090 1 client.go:360] parsed scheme: "passthrough" I0415 22:12:09.339358 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0415 22:12:09.339506 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0415 22:12:40.332903 1 client.go:360] parsed scheme: "passthrough" I0415 22:12:40.332947 1 
passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0415 22:12:40.332965 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0415 22:13:19.878140 1 client.go:360] parsed scheme: "passthrough" I0415 22:13:19.878205 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0415 22:13:19.878265 1 clientconn.go:948] ClientConn switching balancer to "pick_first" ==> kube-apiserver [8e934d1e76c8] <== W0415 17:27:44.348950 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0415 17:27:44.349127 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0415 17:27:44.349278 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0415 17:27:44.350181 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0415 17:27:44.351752 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0415 17:27:44.353432 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0415 17:27:44.370088 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0415 17:27:44.371697 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0415 17:27:44.372020 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0415 17:27:44.376067 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0415 17:27:44.389189 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W0415 17:27:44.453221 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:44.453719 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:44.453900 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:44.454084 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:44.455969 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:44.471139 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:44.484162 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:44.484826 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:44.549319 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:44.549701 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:44.549911 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:44.559109 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:45.168058 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.174324 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.185933 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.381876 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.412880 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.451829 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.488796 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.553850 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.555302 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.559492 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.562942 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.578416 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.587896 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.595770 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.659183 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.659317 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.659665 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.659792 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.659923 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.666528 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.666749 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.666851 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.667043 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.667241 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.667929 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.668024 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.674758 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.684607 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.690840 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.690986 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.699834 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.751910 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.752146 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.752308 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.752558 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.753130 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0415 17:27:46.764239 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...

==> kube-controller-manager [506ebed562dc] <==
I0415 20:34:46.900027 1 event.go:291] "Event occurred" object="kube-system/etcd-cloud-run-dev-internal" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0415 20:34:46.916069 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-cloud-run-dev-internal" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0415 20:34:46.985258 1 event.go:291] "Event occurred" object="kube-system/kube-proxy-lchj5" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0415 20:34:47.087876 1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-cloud-run-dev-internal" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0415 20:34:47.103771 1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
E0415 20:34:47.103860 1 node_lifecycle_controller.go:847] unable to mark all pods NotReady on node cloud-run-dev-internal: Operation cannot be fulfilled on pods "etcd-cloud-run-dev-internal": the object has been modified; please apply your changes to the latest version and try again; queuing for retry
I0415 20:34:47.103947 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0415 20:34:48.176302 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-create-5drdx"
I0415 20:34:48.180028 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set gcp-auth-555897b58d to 1"
I0415 20:34:48.268450 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-555897b58d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-555897b58d-zg7fk"
I0415 20:34:48.282266 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-patch-dd4nd"
I0415 20:34:51.951032 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0415 20:34:52.104555 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0415 20:34:53.550564 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0415 20:35:07.983579 1 event.go:291] "Event occurred" object="default/google-apis" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set google-apis-54ccf54489 to 1"
I0415 20:35:08.036785 1 event.go:291] "Event occurred" object="default/google-apis-54ccf54489" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: google-apis-54ccf54489-f5779"
I0415 20:48:07.838074 1 namespace_controller.go:185] Namespace has been deleted gcp-auth
I0415 20:48:15.145740 1 event.go:291] "Event occurred" object="cloud-run-dev-internal" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node cloud-run-dev-internal status is now: NodeNotReady"
I0415 20:48:15.174913 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-cloud-run-dev-internal" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0415 20:48:15.220134 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-tgnqg" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0415 20:48:15.308678 1 event.go:291] "Event occurred" object="kube-system/kube-proxy-lchj5" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0415 20:48:15.325845 1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-cloud-run-dev-internal" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0415 20:48:15.340723 1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
W0415 20:48:15.393227 1 controller_utils.go:148] Failed to update status for pod "etcd-cloud-run-dev-internal_kube-system(c14399a2-4b4d-4baf-8b87-383291cf26b8)": Operation cannot be fulfilled on pods "etcd-cloud-run-dev-internal": the object has been modified; please apply your changes to the latest version and try again
I0415 20:48:15.393662 1 event.go:291] "Event occurred" object="kube-system/etcd-cloud-run-dev-internal" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
E0415 20:48:15.410104 1 node_lifecycle_controller.go:847] unable to mark all pods NotReady on node cloud-run-dev-internal: Operation cannot be fulfilled on pods "etcd-cloud-run-dev-internal": the object has been modified; please apply your changes to the latest version and try again; queuing for retry
I0415 20:48:15.410403 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0415 20:48:15.410674 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-cloud-run-dev-internal" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0415 20:48:16.044477 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-create-dh4c2"
I0415 20:48:16.095654 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set gcp-auth-555897b58d to 1"
I0415 20:48:16.121307 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-555897b58d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-555897b58d-2996m"
I0415 20:48:16.145098 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-patch-jk724"
I0415 20:48:19.733687 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0415 20:48:20.411001 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0415 20:48:20.537842 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0415 20:48:34.165998 1 event.go:291] "Event occurred" object="default/google-apis" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set google-apis-b78485b69 to 1"
I0415 20:48:34.225676 1 event.go:291] "Event occurred" object="default/google-apis-b78485b69" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: google-apis-b78485b69-d9cdq"
E0415 20:52:21.847908 1 tokens_controller.go:262] error synchronizing serviceaccount gcp-auth/default: secrets "default-token-mcghv" is forbidden: unable to create new content in namespace gcp-auth because it is being terminated
I0415 22:00:41.700982 1 namespace_controller.go:185] Namespace has been deleted gcp-auth
E0415 22:01:18.854180 1 resource_quota_controller.go:409] failed to discover resources: Get "https://192.168.49.2:8443/api?timeout=32s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0415 22:01:18.856244 1 garbagecollector.go:705] failed to discover preferred resources: Get "https://192.168.49.2:8443/api?timeout=32s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0415 22:01:19.719399 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set gcp-auth-555897b58d to 1"
I0415 22:01:19.723841 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-create-j7gjz"
I0415 22:01:19.748365 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-555897b58d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-555897b58d-xfmjq"
I0415 22:01:19.759292 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-patch-lmpc7"
I0415 22:01:24.374828 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0415 22:01:24.774975 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0415 22:01:39.526613 1 event.go:291] "Event occurred" object="default/google-apis" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set google-apis-598dffc578 to 1"
I0415 22:01:39.584709 1 event.go:291] "Event occurred" object="default/google-apis-598dffc578" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: google-apis-598dffc578-w5rrf"
I0415 22:04:22.773096 1 namespace_controller.go:185] Namespace has been deleted gcp-auth
I0415 22:04:26.594324 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-create-zc82z"
I0415 22:04:26.626364 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled
up replica set gcp-auth-555897b58d to 1" I0415 22:04:26.656691 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-555897b58d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-555897b58d-5f8ch" I0415 22:04:26.668702 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-patch-42svs" I0415 22:04:32.770552 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode. I0415 22:04:35.500631 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" I0415 22:04:36.305669 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" I0415 22:04:39.488483 1 event.go:291] "Event occurred" object="default/google-apis" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set google-apis-787c97d877 to 1" I0415 22:04:39.507504 1 event.go:291] "Event occurred" object="default/google-apis-787c97d877" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: google-apis-787c97d877-4chzv" I0415 22:04:42.738340 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode. ==> kube-controller-manager [580db1120b10] <== k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009220a0, 0x4da4180, 0xc000ef00c0, 0x4903e01, 0xc00009c0c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009220a0, 0x3b9aca00, 0x0, 0x1, 0xc00009c0c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0009220a0, 0x3b9aca00, 0xc00009c0c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:247 +0x1d1 goroutine 48 [select]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009220b0, 0x4da4180, 0xc000ae0060, 0x1, 0xc00009c0c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009220b0, 0xdf8475800, 0x0, 0x1, 0xc00009c0c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0009220b0, 0xdf8475800, 0xc00009c0c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d created by 
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:250 +0x24b goroutine 174 [IO wait]: internal/poll.runtime_pollWait(0x7f3a8ed3cd20, 0x72, 0x4da8160) /usr/local/go/src/runtime/netpoll.go:222 +0x55 internal/poll.(*pollDesc).wait(0xc000608398, 0x72, 0x4da8100, 0x6e12878, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45 internal/poll.(*pollDesc).waitRead(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:92 internal/poll.(*FD).Read(0xc000608380, 0xc000fc0000, 0x931, 0x931, 0x0, 0x0, 0x0) /usr/local/go/src/internal/poll/fd_unix.go:159 +0x1a5 net.(*netFD).Read(0xc000608380, 0xc000fc0000, 0x931, 0x931, 0x203000, 0x66809b, 0xc0000def60) /usr/local/go/src/net/fd_posix.go:55 +0x4f net.(*conn).Read(0xc0007e6090, 0xc000fc0000, 0x931, 0x931, 0x0, 0x0, 0x0) /usr/local/go/src/net/net.go:182 +0x8e crypto/tls.(*atLeastReader).Read(0xc000298040, 0xc000fc0000, 0x931, 0x931, 0xaa, 0x92c, 0xc0011b5710) /usr/local/go/src/crypto/tls/conn.go:779 +0x62 bytes.(*Buffer).ReadFrom(0xc0000df080, 0x4d9ed80, 0xc000298040, 0x40bd05, 0x3f475a0, 0x464b8a0) /usr/local/go/src/bytes/buffer.go:204 +0xb1 crypto/tls.(*Conn).readFromUntil(0xc0000dee00, 0x4da5040, 0xc0007e6090, 0x5, 0xc0007e6090, 0x99) /usr/local/go/src/crypto/tls/conn.go:801 +0xf3 crypto/tls.(*Conn).readRecordOrCCS(0xc0000dee00, 0x0, 0x0, 0xc0011b5d18) /usr/local/go/src/crypto/tls/conn.go:608 +0x115 crypto/tls.(*Conn).readRecord(...) /usr/local/go/src/crypto/tls/conn.go:576 crypto/tls.(*Conn).Read(0xc0000dee00, 0xc0004ac000, 0x1000, 0x1000, 0x0, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:1252 +0x15f bufio.(*Reader).Read(0xc000749380, 0xc001114818, 0x9, 0x9, 0xc0011b5d18, 0x4905800, 0x9b77ab) /usr/local/go/src/bufio/bufio.go:227 +0x222 io.ReadAtLeast(0x4d9eba0, 0xc000749380, 0xc001114818, 0x9, 0x9, 0x9, 0xc00007c050, 0x0, 0x4d9efe0) /usr/local/go/src/io/io.go:314 +0x87 io.ReadFull(...) 
/usr/local/go/src/io/io.go:333 k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc001114818, 0x9, 0x9, 0x4d9eba0, 0xc000749380, 0x0, 0x0, 0xc0011b5dd0, 0x46d045) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0011147e0, 0xc0001f6120, 0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0011b5fa8, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1819 +0xd8 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc0004d5e00) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1741 +0x6f created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:705 +0x6c5 ==> kube-proxy [c492275438f4] <== I0413 18:16:40.339441 1 trace.go:205] Trace[512942425]: "iptables restore" (13-Apr-2021 18:16:38.186) (total time: 2152ms): Trace[512942425]: [2.152495079s] [2.152495079s] END I0413 18:17:49.486709 1 trace.go:205] Trace[1377238812]: "iptables restore" (13-Apr-2021 18:17:47.159) (total time: 2325ms): Trace[1377238812]: [2.325868209s] [2.325868209s] END I0413 18:17:58.822387 1 trace.go:205] Trace[1871153880]: "iptables restore" (13-Apr-2021 18:17:56.626) (total time: 2196ms): Trace[1871153880]: [2.196106162s] [2.196106162s] END I0413 18:18:07.372740 1 trace.go:205] Trace[1134284901]: "iptables restore" (13-Apr-2021 18:18:05.180) (total time: 2191ms): Trace[1134284901]: [2.191835884s] [2.191835884s] END I0413 18:18:26.596782 1 trace.go:205] Trace[1823275740]: "iptables restore" (13-Apr-2021 18:18:24.316) (total time: 2280ms): Trace[1823275740]: [2.280746188s] [2.280746188s] END I0413 18:18:36.175418 1 trace.go:205] Trace[263480583]: "iptables restore" (13-Apr-2021 18:18:33.933) (total time: 2242ms): Trace[263480583]: [2.242081472s] [2.242081472s] END I0413 18:19:08.146378 1 trace.go:205] Trace[1783720957]: "iptables restore" (13-Apr-2021 18:19:06.122) (total time: 2024ms): Trace[1783720957]: [2.024339747s] [2.024339747s] END I0413 18:20:49.574051 1 trace.go:205] Trace[613468836]: "iptables save" (13-Apr-2021 18:19:13.149) (total time: 96487ms): Trace[613468836]: [1m36.487834578s] [1m36.487834578s] END I0413 18:20:58.899452 1 trace.go:205] Trace[434475970]: "iptables restore" (13-Apr-2021 18:20:56.770) (total time: 2128ms): Trace[434475970]: [2.128895648s] [2.128895648s] END I0413 18:21:08.117417 1 trace.go:205] Trace[1657806444]: "iptables restore" (13-Apr-2021 18:21:05.948) (total time: 2168ms): Trace[1657806444]: [2.168351394s] [2.168351394s] END I0413 18:21:23.942057 1 trace.go:205] Trace[1198322605]: "iptables restore" (13-Apr-2021 18:21:21.776) (total time: 2165ms): Trace[1198322605]: [2.165906193s] [2.165906193s] END I0413 18:21:33.233149 1 trace.go:205] Trace[1596632495]: "iptables restore" (13-Apr-2021 18:21:31.113) (total time: 2119ms): Trace[1596632495]: [2.119408338s] [2.119408338s] END I0413 18:21:42.588366 1 trace.go:205] Trace[1526863175]: "iptables restore" (13-Apr-2021 18:21:40.458) (total time: 2129ms): Trace[1526863175]: [2.129504753s] [2.129504753s] END I0413 
20:45:43.102906 1 trace.go:205] Trace[2017365615]: "iptables restore" (13-Apr-2021 20:45:40.924) (total time: 2178ms): Trace[2017365615]: [2.178609299s] [2.178609299s] END I0413 20:46:26.036585 1 trace.go:205] Trace[2095876233]: "iptables save" (13-Apr-2021 20:45:49.067) (total time: 36989ms): Trace[2095876233]: [36.989795638s] [36.989795638s] END I0413 20:46:41.683532 1 trace.go:205] Trace[1450079312]: "iptables restore" (13-Apr-2021 20:46:39.609) (total time: 2074ms): Trace[1450079312]: [2.074360912s] [2.074360912s] END I0413 20:46:50.707798 1 trace.go:205] Trace[867656234]: "iptables restore" (13-Apr-2021 20:46:48.384) (total time: 2322ms): Trace[867656234]: [2.322952777s] [2.322952777s] END I0413 20:46:59.681363 1 trace.go:205] Trace[2095951911]: "iptables restore" (13-Apr-2021 20:46:57.608) (total time: 2072ms): Trace[2095951911]: [2.072456098s] [2.072456098s] END I0413 20:47:18.042954 1 trace.go:205] Trace[1687596781]: "iptables restore" (13-Apr-2021 20:47:15.927) (total time: 2115ms): Trace[1687596781]: [2.115211939s] [2.115211939s] END I0413 20:47:27.523792 1 trace.go:205] Trace[471254596]: "iptables restore" (13-Apr-2021 20:47:25.217) (total time: 2306ms): Trace[471254596]: [2.306350412s] [2.306350412s] END I0413 20:52:10.920860 1 trace.go:205] Trace[1295731470]: "iptables restore" (13-Apr-2021 20:52:08.745) (total time: 2175ms): Trace[1295731470]: [2.175203668s] [2.175203668s] END I0413 20:52:20.229004 1 trace.go:205] Trace[573272458]: "iptables restore" (13-Apr-2021 20:52:17.854) (total time: 2374ms): Trace[573272458]: [2.37485626s] [2.37485626s] END I0413 20:52:39.227902 1 trace.go:205] Trace[2031350684]: "iptables restore" (13-Apr-2021 20:52:36.815) (total time: 2412ms): Trace[2031350684]: [2.412735411s] [2.412735411s] END I0413 20:52:48.701350 1 trace.go:205] Trace[810026059]: "iptables restore" (13-Apr-2021 20:52:46.533) (total time: 2167ms): Trace[810026059]: [2.167475612s] [2.167475612s] END I0413 20:55:04.849742 1 trace.go:205] Trace[1103747872]: "iptables restore" (13-Apr-2021 20:55:02.638) (total time: 2211ms): Trace[1103747872]: [2.211238953s] [2.211238953s] END I0413 20:55:14.299219 1 trace.go:205] Trace[63108458]: "iptables restore" (13-Apr-2021 20:55:12.075) (total time: 2223ms): Trace[63108458]: [2.22357209s] [2.22357209s] END I0413 20:55:23.515642 1 trace.go:205] Trace[1858396253]: "iptables restore" (13-Apr-2021 20:55:21.190) (total time: 2325ms): Trace[1858396253]: [2.325542372s] [2.325542372s] END I0413 20:55:33.169491 1 trace.go:205] Trace[235081247]: "iptables restore" (13-Apr-2021 20:55:30.967) (total time: 2201ms): Trace[235081247]: [2.2015066s] [2.2015066s] END I0413 20:55:45.711243 1 trace.go:205] Trace[791919494]: "iptables restore" (13-Apr-2021 20:55:43.444) (total time: 2286ms): Trace[791919494]: [2.286341956s] [2.286341956s] END I0415 17:21:24.723416 1 trace.go:205] Trace[1258053299]: "iptables save" (13-Apr-2021 20:55:50.416) (total time: 160091227ms): Trace[1258053299]: [44h28m11.227689364s] [44h28m11.227689364s] END ==> kube-proxy [faac686fc1d8] <== I0415 17:28:04.293116 1 conntrack.go:52] Setting nf_conntrack_max to 131072 E0415 17:28:04.293639 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime]) I0415 17:28:04.298471 1 config.go:224] Starting endpoint slice config controller I0415 17:28:04.299167 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0415 17:28:04.299286 1 config.go:315] 
Starting service config controller I0415 17:28:04.299324 1 shared_informer.go:240] Waiting for caches to sync for service config I0415 17:28:04.399409 1 shared_informer.go:247] Caches are synced for endpoint slice config I0415 17:28:04.399430 1 shared_informer.go:247] Caches are synced for service config I0415 17:28:19.849442 1 trace.go:205] Trace[845351321]: "iptables restore" (15-Apr-2021 17:28:17.756) (total time: 2092ms): Trace[845351321]: [2.0929319s] [2.0929319s] END I0415 17:28:43.109740 1 trace.go:205] Trace[770527731]: "iptables restore" (15-Apr-2021 17:28:41.003) (total time: 2106ms): Trace[770527731]: [2.1066572s] [2.1066572s] END I0415 18:56:21.408683 1 trace.go:205] Trace[169643447]: "iptables save" (15-Apr-2021 18:56:03.400) (total time: 18007ms): Trace[169643447]: [18.0078979s] [18.0078979s] END I0415 18:57:00.405951 1 trace.go:205] Trace[814848829]: "iptables restore" (15-Apr-2021 18:56:58.359) (total time: 2046ms): Trace[814848829]: [2.0463987s] [2.0463987s] END I0415 18:58:07.815330 1 trace.go:205] Trace[970102917]: "iptables save" (15-Apr-2021 18:57:31.904) (total time: 35944ms): Trace[970102917]: [35.9446043s] [35.9446043s] END I0415 18:58:42.134164 1 trace.go:205] Trace[1855007311]: "iptables restore" (15-Apr-2021 18:58:40.133) (total time: 2000ms): Trace[1855007311]: [2.0001782s] [2.0001782s] END E0415 20:33:58.895424 1 proxier.go:882] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: timed out while checking rules I0415 20:33:58.908778 1 proxier.go:866] Sync failed; retrying in 30s I0415 20:34:07.344349 1 trace.go:205] Trace[2055518948]: "iptables restore" (15-Apr-2021 20:34:05.336) (total time: 2002ms): Trace[2055518948]: [2.0020814s] [2.0020814s] END I0415 20:34:54.658627 1 trace.go:205] Trace[1014385576]: "iptables restore" (15-Apr-2021 20:34:52.569) (total time: 2088ms): Trace[1014385576]: [2.0889681s] [2.0889681s] END I0415 20:35:16.014120 1 trace.go:205] Trace[1964160077]: "iptables restore" (15-Apr-2021 20:35:14.035) (total time: 2012ms): Trace[1964160077]: [2.0123471s] [2.0123471s] END I0415 20:35:25.459791 1 trace.go:205] Trace[1724314730]: "iptables restore" (15-Apr-2021 20:35:22.732) (total time: 2727ms): Trace[1724314730]: [2.7275676s] [2.7275676s] END I0415 20:35:36.188342 1 trace.go:205] Trace[1640660764]: "iptables restore" (15-Apr-2021 20:35:33.989) (total time: 2198ms): Trace[1640660764]: [2.1985564s] [2.1985564s] END I0415 20:35:45.539210 1 trace.go:205] Trace[770508139]: "iptables restore" (15-Apr-2021 20:35:43.239) (total time: 2299ms): Trace[770508139]: [2.2997076s] [2.2997076s] END I0415 20:48:54.149341 1 trace.go:205] Trace[597849655]: "iptables restore" (15-Apr-2021 20:48:52.129) (total time: 2019ms): Trace[597849655]: [2.0195786s] [2.0195786s] END I0415 22:00:43.835423 1 trace.go:205] Trace[60583697]: "iptables restore" (15-Apr-2021 20:52:28.329) (total time: 4100142ms): Trace[60583697]: [1h8m20.1429402s] [1h8m20.1429402s] END I0415 22:00:52.230880 1 trace.go:205] Trace[1404412674]: "iptables restore" (15-Apr-2021 22:00:50.095) (total time: 2135ms): Trace[1404412674]: [2.1352604s] [2.1352604s] END I0415 22:01:53.824942 1 trace.go:205] Trace[12364305]: "iptables save" (15-Apr-2021 22:01:51.491) (total time: 2333ms): Trace[12364305]: [2.3332878s] [2.3332878s] END I0415 22:04:24.480771 1 trace.go:205] Trace[1707159106]: "iptables restore" (15-Apr-2021 22:02:21.659) (total time: 122956ms): Trace[1707159106]: [2m2.9563956s] [2m2.9563956s] END I0415 22:04:40.971345 1 trace.go:205] Trace[315040541]: "iptables restore" 
(15-Apr-2021 22:04:38.962) (total time: 2042ms): Trace[315040541]: [2.0425809s] [2.0425809s] END I0415 22:04:57.463278 1 trace.go:205] Trace[811578257]: "iptables restore" (15-Apr-2021 22:04:55.264) (total time: 2199ms): Trace[811578257]: [2.1990106s] [2.1990106s] END I0415 22:05:05.454091 1 trace.go:205] Trace[793029521]: "iptables restore" (15-Apr-2021 22:05:03.277) (total time: 2176ms): Trace[793029521]: [2.1764886s] [2.1764886s] END I0415 22:05:13.955834 1 trace.go:205] Trace[1583679337]: "iptables restore" (15-Apr-2021 22:05:11.871) (total time: 2084ms): Trace[1583679337]: [2.0841566s] [2.0841566s] END I0415 22:05:47.866604 1 trace.go:205] Trace[1544703251]: "iptables restore" (15-Apr-2021 22:05:45.677) (total time: 2189ms): Trace[1544703251]: [2.1894126s] [2.1894126s] END I0415 22:06:27.310413 1 trace.go:205] Trace[1795816618]: "iptables restore" (15-Apr-2021 22:06:25.293) (total time: 2016ms): Trace[1795816618]: [2.0164162s] [2.0164162s] END I0415 22:06:35.874433 1 trace.go:205] Trace[51527792]: "iptables restore" (15-Apr-2021 22:06:33.661) (total time: 2212ms): Trace[51527792]: [2.2127892s] [2.2127892s] END I0415 22:08:07.388136 1 trace.go:205] Trace[1744423218]: "iptables restore" (15-Apr-2021 22:08:05.289) (total time: 2098ms): Trace[1744423218]: [2.0986822s] [2.0986822s] END ==> kube-scheduler [0f95485cf7e3] <== E0413 20:45:33.007516 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-555897b58d-ncm5t.16757db6fe81f274", GenerateName:"", Namespace:"gcp-auth", SelfLink:"", UID:"0b027249-9603-48be-add5-a5ee90e75b16", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63753934666, loc:(*time.Location)(0x2cebb60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:(*v1.Time)(0xc000756e00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000756e20)}}}, EventTime:v1.MicroTime{Time:time.Time{wall:0x1038198, ext:63753934666, loc:(*time.Location)(0x2cebb60)}}, Series:(*v1.EventSeries)(0xc000756e40), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cloud-run-dev-internal", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"gcp-auth", Name:"gcp-auth-555897b58d-ncm5t", UID:"0a694071-5722-44d0-add0-2823c1b83c71", APIVersion:"v1", ResourceVersion:"2227", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'namespaces "gcp-auth" not found' (will not retry!) 
E0413 20:45:33.065436 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-certs-patch-79g5n.16757de22d7df541", GenerateName:"", Namespace:"gcp-auth", SelfLink:"", UID:"790f3bf7-a3b9-4392-9781-d891db26c40a", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63753934851, loc:(*time.Location)(0x2cebb60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:(*v1.Time)(0xc00050d4e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00050d520)}}}, EventTime:v1.MicroTime{Time:time.Time{wall:0x1d239070, ext:63753934851, loc:(*time.Location)(0x2cebb60)}}, Series:(*v1.EventSeries)(0xc00050d560), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cloud-run-dev-internal", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"gcp-auth", Name:"gcp-auth-certs-patch-79g5n", UID:"a9828e27-eb31-4b77-8475-dbb2771afc8d", APIVersion:"v1", ResourceVersion:"2513", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'namespaces "gcp-auth" not found' (will not retry!) 
E0413 20:45:33.122568 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-certs-patch-2rzwn.16757db6ff563809", GenerateName:"", Namespace:"gcp-auth", SelfLink:"", UID:"df4b2ea6-a433-4ae7-a8bf-e1e9f81a9f76", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63753934666, loc:(*time.Location)(0x2cebb60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:(*v1.Time)(0xc000756f80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000756fa0)}}}, EventTime:v1.MicroTime{Time:time.Time{wall:0x1d7c570, ext:63753934666, loc:(*time.Location)(0x2cebb60)}}, Series:(*v1.EventSeries)(0xc000756fc0), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cloud-run-dev-internal", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"gcp-auth", Name:"gcp-auth-certs-patch-2rzwn", UID:"8f86f9a9-3c11-4212-985d-8fa732bf6189", APIVersion:"v1", ResourceVersion:"2230", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'namespaces "gcp-auth" not found' (will not retry!) 
E0413 20:45:33.178801 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-certs-create-4ck46.16757db6fd5c02b4", GenerateName:"", Namespace:"gcp-auth", SelfLink:"", UID:"2f58b2f1-c29c-4499-a1d0-dc4eec53f73d", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63753934665, loc:(*time.Location)(0x2cebb60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:(*v1.Time)(0xc0001c3aa0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0001c3ac0)}}}, EventTime:v1.MicroTime{Time:time.Time{wall:0x3b7855b0, ext:63753934665, loc:(*time.Location)(0x2cebb60)}}, Series:(*v1.EventSeries)(0xc0001c3ae0), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cloud-run-dev-internal", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"gcp-auth", Name:"gcp-auth-certs-create-4ck46", UID:"7ab52af1-ef4e-4b37-9046-c3bf1b5c45c5", APIVersion:"v1", ResourceVersion:"2217", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'namespaces "gcp-auth" not found' (will not retry!) 
E0413 20:45:33.238079 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-555897b58d-ncm5t.16757db6fe81f274", GenerateName:"", Namespace:"gcp-auth", SelfLink:"", UID:"0b027249-9603-48be-add5-a5ee90e75b16", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63753934666, loc:(*time.Location)(0x2cebb60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:(*v1.Time)(0xc000756e00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000756e20)}}}, EventTime:v1.MicroTime{Time:time.Time{wall:0x1038198, ext:63753934666, loc:(*time.Location)(0x2cebb60)}}, Series:(*v1.EventSeries)(0xc000756e40), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cloud-run-dev-internal", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"gcp-auth", Name:"gcp-auth-555897b58d-ncm5t", UID:"0a694071-5722-44d0-add0-2823c1b83c71", APIVersion:"v1", ResourceVersion:"2227", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "gcp-auth-555897b58d-ncm5t.16757db6fe81f274" is invalid: series.count: Invalid value: "": should be at least 2' (will not retry!) 
E0413 20:45:33.244110 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-certs-patch-79g5n.16757de22d7df541", GenerateName:"", Namespace:"gcp-auth", SelfLink:"", UID:"790f3bf7-a3b9-4392-9781-d891db26c40a", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63753934851, loc:(*time.Location)(0x2cebb60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:(*v1.Time)(0xc00050d4e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00050d520)}}}, EventTime:v1.MicroTime{Time:time.Time{wall:0x1d239070, ext:63753934851, loc:(*time.Location)(0x2cebb60)}}, Series:(*v1.EventSeries)(0xc00050d560), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cloud-run-dev-internal", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"gcp-auth", Name:"gcp-auth-certs-patch-79g5n", UID:"a9828e27-eb31-4b77-8475-dbb2771afc8d", APIVersion:"v1", ResourceVersion:"2513", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "gcp-auth-certs-patch-79g5n.16757de22d7df541" is invalid: series.count: Invalid value: "": should be at least 2' (will not retry!) 
E0413 20:45:33.247249 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-certs-patch-2rzwn.16757db6ff563809", GenerateName:"", Namespace:"gcp-auth", SelfLink:"", UID:"df4b2ea6-a433-4ae7-a8bf-e1e9f81a9f76", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63753934666, loc:(*time.Location)(0x2cebb60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:(*v1.Time)(0xc000756f80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000756fa0)}}}, EventTime:v1.MicroTime{Time:time.Time{wall:0x1d7c570, ext:63753934666, loc:(*time.Location)(0x2cebb60)}}, Series:(*v1.EventSeries)(0xc000756fc0), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cloud-run-dev-internal", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"gcp-auth", Name:"gcp-auth-certs-patch-2rzwn", UID:"8f86f9a9-3c11-4212-985d-8fa732bf6189", APIVersion:"v1", ResourceVersion:"2230", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "gcp-auth-certs-patch-2rzwn.16757db6ff563809" is invalid: series.count: Invalid value: "": should be at least 2' (will not retry!) 
E0413 20:45:33.251097 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-555897b58d-hr8pr.16757de22d7d48ed", GenerateName:"", Namespace:"gcp-auth", SelfLink:"", UID:"2ec8f29e-6c24-4fba-9c4f-1a5fe7171bfc", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63753934851, loc:(*time.Location)(0x2cebb60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:(*v1.Time)(0xc000868340), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0008683a0)}}}, EventTime:v1.MicroTime{Time:time.Time{wall:0x1d22dcc0, ext:63753934851, loc:(*time.Location)(0x2cebb60)}}, Series:(*v1.EventSeries)(0xc0008683c0), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cloud-run-dev-internal", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"gcp-auth", Name:"gcp-auth-555897b58d-hr8pr", UID:"f83648bb-ada9-4abb-8c38-0d135553ad08", APIVersion:"v1", ResourceVersion:"2508", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "gcp-auth-555897b58d-hr8pr.16757de22d7d48ed" is invalid: series.count: Invalid value: "": should be at least 2' (will not retry!) 
E0413 20:45:33.254427 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-certs-create-kt7bv.16757de22be4ab5e", GenerateName:"", Namespace:"gcp-auth", SelfLink:"", UID:"94455bc5-8399-4e6d-9bf3-f9b1064fd1c7", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63753934851, loc:(*time.Location)(0x2cebb60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:(*v1.Time)(0xc00050c720), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00050c740)}}}, EventTime:v1.MicroTime{Time:time.Time{wall:0x1b8a3b60, ext:63753934851, loc:(*time.Location)(0x2cebb60)}}, Series:(*v1.EventSeries)(0xc00050cb60), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cloud-run-dev-internal", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"gcp-auth", Name:"gcp-auth-certs-create-kt7bv", UID:"82ad281b-7a21-4a2e-845f-98c9d479d9b6", APIVersion:"v1", ResourceVersion:"2498", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "gcp-auth-certs-create-kt7bv.16757de22be4ab5e" is invalid: series.count: Invalid value: "": should be at least 2' (will not retry!) 
E0415 17:22:24.236864 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-certs-create-vzd2k.1675864b57639ee1", GenerateName:"", Namespace:"gcp-auth", SelfLink:"", UID:"33246fa6-5378-4aad-9a08-773421aa494e", ResourceVersion:"3400", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63753944099, loc:(*time.Location)(0x2cebb60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:(*v1.Time)(0xc0006abb80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006abbc0)}}}, EventTime:v1.MicroTime{Time:time.Time{wall:0xf47f4f0, ext:63753944099, loc:(*time.Location)(0x2cebb60)}}, Series:(*v1.EventSeries)(0xc0006abc00), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cloud-run-dev-internal", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"gcp-auth", Name:"gcp-auth-certs-create-vzd2k", UID:"2e0a52a9-5544-4a55-a8a8-3eedb8bed780", APIVersion:"v1", ResourceVersion:"3367", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'the server was unable to return a response in the time allotted, but may still be processing the request (patch events.events.k8s.io gcp-auth-certs-create-vzd2k.1675864b57639ee1)' (will not retry!) 
I0415 17:22:26.945528 1 trace.go:205] Trace[2120466518]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (15-Apr-2021 17:21:26.996) (total time: 60015ms): Trace[2120466518]: [1m0.0151297s] [1m0.0151297s] END E0415 17:22:26.945645 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: the server was unable to return a response in the time allotted, but may still be processing the request (get replicationcontrollers) I0415 17:22:27.030919 1 trace.go:205] Trace[42252729]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (15-Apr-2021 17:21:27.092) (total time: 60006ms): Trace[42252729]: [1m0.0068795s] [1m0.0068795s] END E0415 17:22:27.030989 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: the server was unable to return a response in the time allotted, but may still be processing the request (get csinodes.storage.k8s.io) I0415 17:22:27.074772 1 trace.go:205] Trace[372904764]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (15-Apr-2021 17:21:27.139) (total time: 60003ms): Trace[372904764]: [1m0.0037352s] [1m0.0037352s] END E0415 17:22:27.074835 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io) E0415 17:23:24.172057 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-555897b58d-qzxzq.1675864b5822a34d", GenerateName:"", Namespace:"gcp-auth", SelfLink:"", UID:"18a2da69-95ae-48d6-b30b-fe5f556c1dc2", ResourceVersion:"3401", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63753944099, loc:(*time.Location)(0x2cebb60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:(*v1.Time)(0xc000869c20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000869c40)}}}, EventTime:v1.MicroTime{Time:time.Time{wall:0x1006f760, ext:63753944099, loc:(*time.Location)(0x2cebb60)}}, Series:(*v1.EventSeries)(0xc000869c80), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cloud-run-dev-internal", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"gcp-auth", Name:"gcp-auth-555897b58d-qzxzq", UID:"1b66c81c-ad8e-4138-87fa-b438ec00ccd4", APIVersion:"v1", ResourceVersion:"3378", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'the server was unable to return a response in the time allotted, but may still be processing the request (patch events.events.k8s.io 
gcp-auth-555897b58d-qzxzq.1675864b5822a34d)' (will not retry!) I0415 17:23:28.937520 1 trace.go:205] Trace[58026925]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (15-Apr-2021 17:22:28.989) (total time: 60013ms): Trace[58026925]: [1m0.0132374s] [1m0.0132374s] END E0415 17:23:28.937576 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: the server was unable to return a response in the time allotted, but may still be processing the request (get replicationcontrollers) I0415 17:23:29.373586 1 trace.go:205] Trace[1160877437]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (15-Apr-2021 17:22:29.388) (total time: 60053ms): Trace[1160877437]: [1m0.0532087s] [1m0.0532087s] END E0415 17:23:29.373615 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: the server was unable to return a response in the time allotted, but may still be processing the request (get csinodes.storage.k8s.io) I0415 17:23:29.376607 1 trace.go:205] Trace[753375671]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (15-Apr-2021 17:22:29.433) (total time: 60011ms): Trace[753375671]: [1m0.0110629s] [1m0.0110629s] END E0415 17:23:29.376656 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io) E0415 17:24:24.105981 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-certs-patch-dtdjh.1675864b58975bbf", GenerateName:"", Namespace:"gcp-auth", SelfLink:"", UID:"61144508-56fe-4500-b377-7ae200904459", ResourceVersion:"3399", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63753944099, loc:(*time.Location)(0x2cebb60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:(*v1.Time)(0xc0006abd20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006abd40)}}}, EventTime:v1.MicroTime{Time:time.Time{wall:0x107bb230, ext:63753944099, loc:(*time.Location)(0x2cebb60)}}, Series:(*v1.EventSeries)(0xc0006abd60), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cloud-run-dev-internal", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"gcp-auth", Name:"gcp-auth-certs-patch-dtdjh", UID:"d81c5efb-c331-4b62-b7cf-37e47c946da8", APIVersion:"v1", ResourceVersion:"3381", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'the server was unable to return a response in the time allotted, but may still be 
processing the request (patch events.events.k8s.io gcp-auth-certs-patch-dtdjh.1675864b58975bbf)' (will not retry!) I0415 17:24:34.010434 1 trace.go:205] Trace[278502789]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (15-Apr-2021 17:23:34.076) (total time: 60003ms): Trace[278502789]: [1m0.0030473s] [1m0.0030473s] END E0415 17:24:34.010480 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: the server was unable to return a response in the time allotted, but may still be processing the request (get replicationcontrollers) I0415 17:24:34.756733 1 trace.go:205] Trace[775833123]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (15-Apr-2021 17:23:34.820) (total time: 60004ms): Trace[775833123]: [1m0.004835s] [1m0.004835s] END E0415 17:24:34.757197 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: the server was unable to return a response in the time allotted, but may still be processing the request (get csinodes.storage.k8s.io) I0415 17:24:35.675021 1 trace.go:205] Trace[1845725103]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (15-Apr-2021 17:23:35.740) (total time: 60002ms): Trace[1845725103]: [1m0.0029664s] [1m0.0029664s] END E0415 17:24:35.675092 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io) E0415 17:25:24.040920 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-certs-create-vzd2k.1675864b57639ee1", GenerateName:"", Namespace:"gcp-auth", SelfLink:"", UID:"33246fa6-5378-4aad-9a08-773421aa494e", ResourceVersion:"3400", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63753944099, loc:(*time.Location)(0x2cebb60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:(*v1.Time)(0xc0006abb80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006abbc0)}}}, EventTime:v1.MicroTime{Time:time.Time{wall:0xf47f4f0, ext:63753944099, loc:(*time.Location)(0x2cebb60)}}, Series:(*v1.EventSeries)(0xc0006abc00), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cloud-run-dev-internal", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"gcp-auth", Name:"gcp-auth-certs-create-vzd2k", UID:"2e0a52a9-5544-4a55-a8a8-3eedb8bed780", APIVersion:"v1", ResourceVersion:"3367", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'the server was unable to return a 
response in the time allotted, but may still be processing the request (patch events.events.k8s.io gcp-auth-certs-create-vzd2k.1675864b57639ee1)' (will not retry!) I0415 17:25:42.004413 1 trace.go:205] Trace[867761151]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (15-Apr-2021 17:24:42.070) (total time: 60002ms): Trace[867761151]: [1m0.0025777s] [1m0.0025777s] END E0415 17:25:42.004486 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: the server was unable to return a response in the time allotted, but may still be processing the request (get replicationcontrollers) I0415 17:25:45.696364 1 trace.go:205] Trace[1700834152]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (15-Apr-2021 17:24:45.754) (total time: 60010ms): Trace[1700834152]: [1m0.0107335s] [1m0.0107335s] END E0415 17:25:45.696427 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: the server was unable to return a response in the time allotted, but may still be processing the request (get csinodes.storage.k8s.io) I0415 17:25:46.586804 1 trace.go:205] Trace[976058896]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (15-Apr-2021 17:24:46.652) (total time: 60002ms): Trace[976058896]: [1m0.0025545s] [1m0.0025545s] END E0415 17:25:46.586853 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io) E0415 17:26:23.974315 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-555897b58d-qzxzq.1675864b5822a34d", GenerateName:"", Namespace:"gcp-auth", SelfLink:"", UID:"18a2da69-95ae-48d6-b30b-fe5f556c1dc2", ResourceVersion:"3401", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63753944099, loc:(*time.Location)(0x2cebb60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:(*v1.Time)(0xc000869c20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000869c40)}}}, EventTime:v1.MicroTime{Time:time.Time{wall:0x1006f760, ext:63753944099, loc:(*time.Location)(0x2cebb60)}}, Series:(*v1.EventSeries)(0xc000869c80), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cloud-run-dev-internal", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"gcp-auth", Name:"gcp-auth-555897b58d-qzxzq", UID:"1b66c81c-ad8e-4138-87fa-b438ec00ccd4", APIVersion:"v1", ResourceVersion:"3378", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
DeprecatedCount:0}': 'the server was unable to return a response in the time allotted, but may still be processing the request (patch events.events.k8s.io gcp-auth-555897b58d-qzxzq.1675864b5822a34d)' (will not retry!) I0415 17:26:56.137363 1 trace.go:205] Trace[2035803186]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (15-Apr-2021 17:25:56.184) (total time: 60021ms): Trace[2035803186]: [1m0.0210368s] [1m0.0210368s] END E0415 17:26:56.137430 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: the server was unable to return a response in the time allotted, but may still be processing the request (get replicationcontrollers) I0415 17:27:06.335166 1 trace.go:205] Trace[1233179917]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (15-Apr-2021 17:26:06.397) (total time: 60005ms): Trace[1233179917]: [1m0.0058945s] [1m0.0058945s] END E0415 17:27:06.335231 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: the server was unable to return a response in the time allotted, but may still be processing the request (get csinodes.storage.k8s.io) I0415 17:27:06.589397 1 trace.go:205] Trace[993011685]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (15-Apr-2021 17:26:06.652) (total time: 60005ms): Trace[993011685]: [1m0.005618s] [1m0.005618s] END E0415 17:27:06.589452 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io) E0415 17:27:23.907673 1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"gcp-auth-certs-patch-dtdjh.1675864b58975bbf", GenerateName:"", Namespace:"gcp-auth", SelfLink:"", UID:"61144508-56fe-4500-b377-7ae200904459", ResourceVersion:"3399", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63753944099, loc:(*time.Location)(0x2cebb60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:(*v1.Time)(0xc0006abd20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0006abd40)}}}, EventTime:v1.MicroTime{Time:time.Time{wall:0x107bb230, ext:63753944099, loc:(*time.Location)(0x2cebb60)}}, Series:(*v1.EventSeries)(0xc0006abd60), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cloud-run-dev-internal", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"gcp-auth", Name:"gcp-auth-certs-patch-dtdjh", UID:"d81c5efb-c331-4b62-b7cf-37e47c946da8", APIVersion:"v1", ResourceVersion:"3381", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't tolerate.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'the server was unable to return a response in the time allotted, but may still be processing the request (patch events.events.k8s.io gcp-auth-certs-patch-dtdjh.1675864b58975bbf)' (will not retry!) ==> kube-scheduler [23d4f3481b90] <== I0415 17:27:48.562692 1 serving.go:331] Generated self-signed cert in-memory W0415 17:27:49.365375 1 authentication.go:332] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused W0415 17:27:49.365532 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous. W0415 17:27:49.365564 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0415 17:27:49.379305 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I0415 17:27:49.379640 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0415 17:27:49.379660 1 tlsconfig.go:240] Starting DynamicServingCertificateController I0415 17:27:49.379679 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file E0415 17:27:49.381884 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:49.381850 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:49.382076 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:49.382439 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:49.382439 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:49.382495 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:49.382785 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:49.382666 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:49.382878 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:49.383061 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:49.383448 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:49.383882 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:50.219828 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:50.261595 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:50.264521 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:50.440575 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:50.513734 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:50.592062 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:50.644702 
1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:50.649852 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:50.706916 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:50.820868 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:50.854816 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:50.876939 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:51.911596 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:52.410946 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:52.438720 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:52.481859 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:52.958696 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:53.108834 1 reflector.go:138] 
k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:53.240908 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:53.258678 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:53.258777 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:53.676098 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:53.822297 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:54.053408 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused E0415 17:27:58.550736 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0415 17:27:58.629620 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0415 17:27:58.629777 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0415 17:27:58.632793 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0415 17:27:58.632885 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0415 
17:27:58.633632 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0415 17:27:58.633933 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0415 17:27:58.634118 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope I0415 17:27:58.746425 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kubelet <== -- Logs begin at Tue 2021-04-13 18:04:00 UTC, end at Thu 2021-04-15 22:14:08 UTC. -- Apr 15 22:08:28 cloud-run-dev-internal kubelet[60137]: E0415 22:08:28.622452 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 2m40s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:08:43 cloud-run-dev-internal kubelet[60137]: I0415 22:08:43.587278 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: a846e7f545498ba6426192ad62a8cb8b872a84b529b5395b8b34f8b6375303ca Apr 15 22:08:43 cloud-run-dev-internal kubelet[60137]: E0415 22:08:43.587828 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 2m40s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:08:54 cloud-run-dev-internal kubelet[60137]: I0415 22:08:54.587487 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: a846e7f545498ba6426192ad62a8cb8b872a84b529b5395b8b34f8b6375303ca Apr 15 22:08:54 cloud-run-dev-internal kubelet[60137]: E0415 22:08:54.588121 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 2m40s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:09:09 cloud-run-dev-internal kubelet[60137]: I0415 22:09:09.586775 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: a846e7f545498ba6426192ad62a8cb8b872a84b529b5395b8b34f8b6375303ca Apr 15 22:09:09 cloud-run-dev-internal kubelet[60137]: E0415 22:09:09.587940 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 2m40s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 
22:09:23 cloud-run-dev-internal kubelet[60137]: I0415 22:09:23.554078 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: a846e7f545498ba6426192ad62a8cb8b872a84b529b5395b8b34f8b6375303ca Apr 15 22:09:23 cloud-run-dev-internal kubelet[60137]: E0415 22:09:23.554697 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 2m40s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:09:30 cloud-run-dev-internal kubelet[60137]: W0415 22:09:30.178074 60137 sysinfo.go:203] Nodes topology is not available, providing CPU topology Apr 15 22:09:30 cloud-run-dev-internal kubelet[60137]: W0415 22:09:30.178478 60137 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory Apr 15 22:09:34 cloud-run-dev-internal kubelet[60137]: I0415 22:09:34.552911 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: a846e7f545498ba6426192ad62a8cb8b872a84b529b5395b8b34f8b6375303ca Apr 15 22:09:34 cloud-run-dev-internal kubelet[60137]: E0415 22:09:34.553536 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 2m40s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:09:48 cloud-run-dev-internal kubelet[60137]: I0415 22:09:48.520875 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: a846e7f545498ba6426192ad62a8cb8b872a84b529b5395b8b34f8b6375303ca Apr 15 22:09:48 cloud-run-dev-internal kubelet[60137]: E0415 22:09:48.521602 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 2m40s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:09:59 cloud-run-dev-internal kubelet[60137]: I0415 22:09:59.519418 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: a846e7f545498ba6426192ad62a8cb8b872a84b529b5395b8b34f8b6375303ca Apr 15 22:09:59 cloud-run-dev-internal kubelet[60137]: E0415 22:09:59.519810 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 2m40s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:10:11 cloud-run-dev-internal kubelet[60137]: I0415 22:10:11.485643 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: a846e7f545498ba6426192ad62a8cb8b872a84b529b5395b8b34f8b6375303ca Apr 15 22:10:11 cloud-run-dev-internal kubelet[60137]: E0415 22:10:11.486791 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), 
skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 2m40s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:10:24 cloud-run-dev-internal kubelet[60137]: I0415 22:10:24.485492 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: a846e7f545498ba6426192ad62a8cb8b872a84b529b5395b8b34f8b6375303ca Apr 15 22:10:24 cloud-run-dev-internal kubelet[60137]: E0415 22:10:24.485830 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 2m40s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:10:38 cloud-run-dev-internal kubelet[60137]: I0415 22:10:38.485319 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: a846e7f545498ba6426192ad62a8cb8b872a84b529b5395b8b34f8b6375303ca Apr 15 22:10:39 cloud-run-dev-internal kubelet[60137]: W0415 22:10:39.477863 60137 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/google-apis-787c97d877-4chzv through plugin: invalid network status for Apr 15 22:10:40 cloud-run-dev-internal kubelet[60137]: W0415 22:10:40.465414 60137 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/google-apis-787c97d877-4chzv through plugin: invalid network status for Apr 15 22:10:40 cloud-run-dev-internal kubelet[60137]: I0415 22:10:40.473667 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: a846e7f545498ba6426192ad62a8cb8b872a84b529b5395b8b34f8b6375303ca Apr 15 22:10:40 cloud-run-dev-internal kubelet[60137]: I0415 22:10:40.473926 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:10:40 cloud-run-dev-internal kubelet[60137]: E0415 22:10:40.474242 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:10:41 cloud-run-dev-internal kubelet[60137]: W0415 22:10:41.485065 60137 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/google-apis-787c97d877-4chzv through plugin: invalid network status for Apr 15 22:10:52 cloud-run-dev-internal kubelet[60137]: I0415 22:10:52.450955 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:10:52 cloud-run-dev-internal kubelet[60137]: E0415 22:10:52.451276 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:11:03 cloud-run-dev-internal kubelet[60137]: I0415 
22:11:03.451193 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:11:03 cloud-run-dev-internal kubelet[60137]: E0415 22:11:03.451709 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:11:14 cloud-run-dev-internal kubelet[60137]: I0415 22:11:14.417374 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:11:14 cloud-run-dev-internal kubelet[60137]: E0415 22:11:14.417751 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:11:29 cloud-run-dev-internal kubelet[60137]: I0415 22:11:29.418569 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:11:29 cloud-run-dev-internal kubelet[60137]: E0415 22:11:29.419153 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:11:42 cloud-run-dev-internal kubelet[60137]: I0415 22:11:42.382989 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:11:42 cloud-run-dev-internal kubelet[60137]: E0415 22:11:42.383264 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:11:53 cloud-run-dev-internal kubelet[60137]: I0415 22:11:53.382821 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:11:53 cloud-run-dev-internal kubelet[60137]: E0415 22:11:53.383120 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:12:06 cloud-run-dev-internal kubelet[60137]: I0415 22:12:06.383545 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: 
c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:12:06 cloud-run-dev-internal kubelet[60137]: E0415 22:12:06.385213 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:12:19 cloud-run-dev-internal kubelet[60137]: I0415 22:12:19.350932 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:12:19 cloud-run-dev-internal kubelet[60137]: E0415 22:12:19.351825 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:12:31 cloud-run-dev-internal kubelet[60137]: I0415 22:12:31.349667 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:12:31 cloud-run-dev-internal kubelet[60137]: E0415 22:12:31.350148 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:12:44 cloud-run-dev-internal kubelet[60137]: I0415 22:12:44.315347 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:12:44 cloud-run-dev-internal kubelet[60137]: E0415 22:12:44.315724 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:12:58 cloud-run-dev-internal kubelet[60137]: I0415 22:12:58.315981 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:12:58 cloud-run-dev-internal kubelet[60137]: E0415 22:12:58.317312 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:13:10 cloud-run-dev-internal kubelet[60137]: I0415 22:13:10.283487 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:13:10 cloud-run-dev-internal kubelet[60137]: E0415 22:13:10.284374 
60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:13:21 cloud-run-dev-internal kubelet[60137]: I0415 22:13:21.282148 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:13:21 cloud-run-dev-internal kubelet[60137]: E0415 22:13:21.282950 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:13:36 cloud-run-dev-internal kubelet[60137]: I0415 22:13:36.281763 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:13:36 cloud-run-dev-internal kubelet[60137]: E0415 22:13:36.282182 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:13:48 cloud-run-dev-internal kubelet[60137]: I0415 22:13:48.247601 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:13:48 cloud-run-dev-internal kubelet[60137]: E0415 22:13:48.247995 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" Apr 15 22:14:00 cloud-run-dev-internal kubelet[60137]: I0415 22:14:00.248545 60137 scope.go:95] [topologymanager] RemoveContainer - Container ID: c51064c2bb918f4429dbba4898ffdae24d4ed62042c62ba6a5549036d1f7f9c0 Apr 15 22:14:00 cloud-run-dev-internal kubelet[60137]: E0415 22:14:00.249096 60137 pod_workers.go:191] Error syncing pod 735a1e68-9b5e-4632-93d8-32051943a134 ("google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)"), skipping: failed to "StartContainer" for "google-apis-container" with CrashLoopBackOff: "back-off 5m0s restarting failed container=google-apis-container pod=google-apis-787c97d877-4chzv_default(735a1e68-9b5e-4632-93d8-32051943a134)" ==> storage-provisioner [09310f49a862] <== I0415 20:34:47.754420 1 storage_provisioner.go:115] Initializing the minikube storage provisioner... I0415 20:34:47.802802 1 storage_provisioner.go:140] Storage provisioner initialized, now starting service! I0415 20:34:47.804414 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... 
I0415 20:35:05.286041 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0415 20:35:05.286579 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f245c15b-3e3d-4ad0-b615-adc20774f0b9", APIVersion:"v1", ResourceVersion:"8089", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cloud-run-dev-internal_0acda389-a165-47c4-a85b-5d2078d42851 became leader I0415 20:35:05.287356 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_cloud-run-dev-internal_0acda389-a165-47c4-a85b-5d2078d42851! I0415 20:35:05.391686 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_cloud-run-dev-internal_0acda389-a165-47c4-a85b-5d2078d42851! I0415 22:00:51.831523 1 leaderelection.go:288] failed to renew lease kube-system/k8s.io-minikube-hostpath: failed to tryAcquireOrRenew context deadline exceeded F0415 22:00:51.831602 1 controller.go:877] leaderelection lost ==> storage-provisioner [4c06bd35c496] <== I0415 22:01:19.482024 1 storage_provisioner.go:115] Initializing the minikube storage provisioner... I0415 22:01:19.495072 1 storage_provisioner.go:140] Storage provisioner initialized, now starting service! I0415 22:01:19.495616 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0415 22:01:36.926785 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0415 22:01:36.927186 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_cloud-run-dev-internal_626a4969-d268-4f78-9ece-5e0b937e5e84! I0415 22:01:36.928948 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f245c15b-3e3d-4ad0-b615-adc20774f0b9", APIVersion:"v1", ResourceVersion:"8775", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cloud-run-dev-internal_626a4969-d268-4f78-9ece-5e0b937e5e84 became leader I0415 22:01:37.027554 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_cloud-run-dev-internal_626a4969-d268-4f78-9ece-5e0b937e5e84! 
==> Audit <== |------------|--------------------------------|------------------------|-----------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |------------|--------------------------------|------------------------|-----------|---------|-------------------------------|-------------------------------| | unpause | --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 14:56:20 EDT | Thu, 15 Apr 2021 14:56:21 EDT | | | cloud-run-dev-internal | | | | | | | addons | enable gcp-auth --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 14:56:22 EDT | Thu, 15 Apr 2021 14:56:32 EDT | | | cloud-run-dev-internal | | | | | | | docker-env | --shell none -p | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 14:56:32 EDT | Thu, 15 Apr 2021 14:56:33 EDT | | | cloud-run-dev-internal | | | | | | | addons | list --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 14:57:19 EDT | Thu, 15 Apr 2021 14:57:19 EDT | | | cloud-run-dev-internal | | | | | | | | --output json | | | | | | | addons | disable gcp-auth --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 14:57:19 EDT | Thu, 15 Apr 2021 14:57:31 EDT | | | cloud-run-dev-internal | | | | | | | pause | --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 14:57:31 EDT | Thu, 15 Apr 2021 14:57:32 EDT | | | cloud-run-dev-internal | | | | | | | unpause | --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 14:58:06 EDT | Thu, 15 Apr 2021 14:58:07 EDT | | | cloud-run-dev-internal | | | | | | | addons | enable gcp-auth --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 14:58:08 EDT | Thu, 15 Apr 2021 14:58:29 EDT | | | cloud-run-dev-internal | | | | | | | docker-env | --shell none -p | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 14:58:29 EDT | Thu, 15 Apr 2021 14:58:31 EDT | | | cloud-run-dev-internal | | | | | | | addons | list --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 15:00:21 EDT | Thu, 15 Apr 2021 15:00:21 EDT | | | cloud-run-dev-internal | | | | | | | | --output json | | | | | | | addons | disable gcp-auth --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 15:00:21 EDT | Thu, 15 Apr 2021 15:00:28 EDT | | | cloud-run-dev-internal | | | | | | | pause | --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 15:00:28 EDT | Thu, 15 Apr 2021 15:00:29 EDT | | | cloud-run-dev-internal | | | | | | | help | | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:10:17 EDT | Thu, 15 Apr 2021 16:10:17 EDT | | start | --help | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:10:45 EDT | Thu, 15 Apr 2021 16:10:45 EDT | | delete | | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:11:18 EDT | Thu, 15 Apr 2021 16:11:19 EDT | | start | --driver docker | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:11:22 EDT | Thu, 15 Apr 2021 16:13:42 EDT | | addons | enable gcp-auth | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:13:56 EDT | Thu, 15 Apr 2021 16:14:18 EDT | | docker-env | --shell none -p minikube | minikube | michihara | v1.18.1 | Thu, 15 Apr 2021 16:14:47 EDT | Thu, 15 Apr 2021 16:14:50 EDT | | docker-env | --shell none -p minikube | minikube | michihara | v1.18.1 | Thu, 15 Apr 2021 16:14:48 EDT | Thu, 15 Apr 2021 16:14:51 EDT | | docker-env | --shell none -p minikube | minikube | 
michihara | v1.18.1 | Thu, 15 Apr 2021 16:15:00 EDT | Thu, 15 Apr 2021 16:15:01 EDT | | docker-env | --shell none -p minikube | minikube | michihara | v1.18.1 | Thu, 15 Apr 2021 16:16:59 EDT | Thu, 15 Apr 2021 16:17:00 EDT | | pause | | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:17:59 EDT | Thu, 15 Apr 2021 16:18:01 EDT | | start | | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:18:03 EDT | Thu, 15 Apr 2021 16:18:22 EDT | | pause | | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:18:30 EDT | Thu, 15 Apr 2021 16:18:31 EDT | | unpause | | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:18:34 EDT | Thu, 15 Apr 2021 16:18:35 EDT | | docker-env | --shell none -p minikube | minikube | michihara | v1.18.1 | Thu, 15 Apr 2021 16:18:41 EDT | Thu, 15 Apr 2021 16:18:42 EDT | | pause | | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:19:05 EDT | Thu, 15 Apr 2021 16:19:07 EDT | | unpause | | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:19:08 EDT | Thu, 15 Apr 2021 16:19:09 EDT | | docker-env | --shell none -p minikube | minikube | michihara | v1.18.1 | Thu, 15 Apr 2021 16:19:15 EDT | Thu, 15 Apr 2021 16:19:16 EDT | | docker-env | --shell none -p minikube | minikube | michihara | v1.18.1 | Thu, 15 Apr 2021 16:21:18 EDT | Thu, 15 Apr 2021 16:21:20 EDT | | pause | | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:21:49 EDT | Thu, 15 Apr 2021 16:21:50 EDT | | unpause | | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:21:56 EDT | Thu, 15 Apr 2021 16:21:57 EDT | | docker-env | --shell none -p minikube | minikube | michihara | v1.18.1 | Thu, 15 Apr 2021 16:22:03 EDT | Thu, 15 Apr 2021 16:22:04 EDT | | addons | disable gcp-auth | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:23:07 EDT | Thu, 15 Apr 2021 16:23:19 EDT | | addons | enable gcp-auth | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:23:33 EDT | Thu, 15 Apr 2021 16:23:47 EDT | | docker-env | --shell none -p minikube | minikube | michihara | v1.18.1 | Thu, 15 Apr 2021 16:24:01 EDT | Thu, 15 Apr 2021 16:24:02 EDT | | stop | | minikube | michihara | v1.19.0 | Thu, 15 Apr 2021 16:24:39 EDT | Thu, 15 Apr 2021 16:24:52 EDT | | unpause | --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 16:33:57 EDT | Thu, 15 Apr 2021 16:33:59 EDT | | | cloud-run-dev-internal | | | | | | | addons | enable gcp-auth --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 16:33:59 EDT | Thu, 15 Apr 2021 16:34:53 EDT | | | cloud-run-dev-internal | | | | | | | docker-env | --shell none -p | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 16:34:53 EDT | Thu, 15 Apr 2021 16:34:55 EDT | | | cloud-run-dev-internal | | | | | | | addons | list --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 16:35:26 EDT | Thu, 15 Apr 2021 16:35:26 EDT | | | cloud-run-dev-internal | | | | | | | | --output json | | | | | | | addons | disable gcp-auth --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 16:35:26 EDT | Thu, 15 Apr 2021 16:36:14 EDT | | | cloud-run-dev-internal | | | | | | | pause | --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 16:36:14 EDT | Thu, 15 Apr 2021 16:36:16 EDT | | | cloud-run-dev-internal | | | | | | | docker-env | --shell none -p | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 16:36:53 EDT | Thu, 15 Apr 2021 16:36:55 EDT | | | cloud-run-dev-internal | | | | | | | unpause | --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 16:48:07 
EDT | Thu, 15 Apr 2021 16:48:08 EDT | | | cloud-run-dev-internal | | | | | | | addons | enable gcp-auth --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 16:48:09 EDT | Thu, 15 Apr 2021 16:48:21 EDT | | | cloud-run-dev-internal | | | | | | | docker-env | --shell none -p | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 16:48:22 EDT | Thu, 15 Apr 2021 16:48:23 EDT | | | cloud-run-dev-internal | | | | | | | addons | list --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 16:52:15 EDT | Thu, 15 Apr 2021 16:52:15 EDT | | | cloud-run-dev-internal | | | | | | | | --output json | | | | | | | addons | disable gcp-auth --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 16:52:15 EDT | Thu, 15 Apr 2021 16:52:27 EDT | | | cloud-run-dev-internal | | | | | | | pause | --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 16:52:27 EDT | Thu, 15 Apr 2021 16:52:28 EDT | | | cloud-run-dev-internal | | | | | | | unpause | --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 18:00:40 EDT | Thu, 15 Apr 2021 18:00:42 EDT | | | cloud-run-dev-internal | | | | | | | addons | enable gcp-auth --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 18:00:43 EDT | Thu, 15 Apr 2021 18:01:25 EDT | | | cloud-run-dev-internal | | | | | | | docker-env | --shell none -p | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 18:01:26 EDT | Thu, 15 Apr 2021 18:01:27 EDT | | | cloud-run-dev-internal | | | | | | | addons | list --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 18:02:09 EDT | Thu, 15 Apr 2021 18:02:09 EDT | | | cloud-run-dev-internal | | | | | | | | --output json | | | | | | | addons | disable gcp-auth --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 18:02:09 EDT | Thu, 15 Apr 2021 18:02:21 EDT | | | cloud-run-dev-internal | | | | | | | pause | --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 18:02:21 EDT | Thu, 15 Apr 2021 18:02:22 EDT | | | cloud-run-dev-internal | | | | | | | unpause | --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 18:04:21 EDT | Thu, 15 Apr 2021 18:04:23 EDT | | | cloud-run-dev-internal | | | | | | | addons | enable gcp-auth --profile | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 18:04:23 EDT | Thu, 15 Apr 2021 18:04:36 EDT | | | cloud-run-dev-internal | | | | | | | docker-env | --shell none -p | cloud-run-dev-internal | michihara | v1.18.1 | Thu, 15 Apr 2021 18:04:37 EDT | Thu, 15 Apr 2021 18:04:38 EDT | | | cloud-run-dev-internal | | | | | | | profile | list | minikube | michihara | v1.18.1 | Thu, 15 Apr 2021 18:12:54 EDT | Thu, 15 Apr 2021 18:12:55 EDT | |------------|--------------------------------|------------------------|-----------|---------|-------------------------------|-------------------------------| ==> Last Start <== Log file created at: 2021/04/15 18:04:16 Running on machine: michihara-macbookpro Binary: Built with gc go1.16 for darwin/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0415 18:04:16.693955 99240 out.go:239] Setting OutFile to fd 1 ... I0415 18:04:16.694463 99240 out.go:286] TERM=,COLORTERM=, which probably does not support color I0415 18:04:16.694468 99240 out.go:252] Setting ErrFile to fd 2... 
I0415 18:04:16.694472 99240 out.go:286] TERM=,COLORTERM=, which probably does not support color I0415 18:04:16.694818 99240 root.go:308] Updating PATH: /Users/michihara/.minikube/bin I0415 18:04:16.696340 99240 out.go:246] Setting JSON to false I0415 18:04:16.732606 99240 start.go:108] hostinfo: {"hostname":"michihara-macbookpro.roam.corp.google.com","uptime":853518,"bootTime":1617670738,"procs":579,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"c86236b2-4976-3542-80ca-74a6b8b4ba03"} W0415 18:04:16.732736 99240 start.go:116] gopshost.Virtualization returned error: not implemented yet I0415 18:04:16.766249 99240 out.go:129] * minikube v1.18.1 on Darwin 11.2.3 I0415 18:04:16.768468 99240 driver.go:323] Setting default libvirt URI to qemu:///system I0415 18:04:16.944998 99240 docker.go:118] docker version: linux-20.10.5 I0415 18:04:16.946814 99240 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0415 18:04:17.242237 99240 info.go:253] docker info: {ID:FWI6:2C2J:XMKT:EMBV:E25N:URIR:6AXE:F7X6:YE5B:EVHZ:E6WV:3TXR Containers:35 ContainersRunning:19 ContainersPaused:0 ContainersStopped:16 Images:36 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:160 OomKillDisable:true NGoroutines:136 SystemTime:2021-04-15 22:04:17.1318178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:4127531008 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. 
Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:}} I0415 18:04:17.287432 99240 out.go:129] * Using the docker driver based on existing profile I0415 18:04:17.287460 99240 start.go:276] selected driver: docker I0415 18:04:17.287789 99240 start.go:718] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:3888 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} I0415 18:04:17.288135 99240 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0415 18:04:17.620238 99240 info.go:253] docker info: {ID:FWI6:2C2J:XMKT:EMBV:E25N:URIR:6AXE:F7X6:YE5B:EVHZ:E6WV:3TXR Containers:35 ContainersRunning:19 ContainersPaused:0 ContainersStopped:16 Images:36 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:160 OomKillDisable:true NGoroutines:136 SystemTime:2021-04-15 22:04:17.4940288 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ 
RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:4127531008 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:}} I0415 18:04:17.626141 99240 start_flags.go:395] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:3888 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] 
StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} I0415 18:04:17.687373 99240 out.go:129] * Starting control plane node minikube in cluster minikube I0415 18:04:17.904984 99240 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 in local docker daemon, skipping pull I0415 18:04:17.905664 99240 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 exists in daemon, skipping pull I0415 18:04:17.905686 99240 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker I0415 18:04:17.905732 99240 preload.go:105] Found local preload: /Users/michihara/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 I0415 18:04:17.905736 99240 cache.go:54] Caching tarball of preloaded images I0415 18:04:17.906035 99240 preload.go:131] Found /Users/michihara/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0415 18:04:17.906041 99240 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on docker I0415 18:04:17.906762 99240 profile.go:148] Saving config to /Users/michihara/.minikube/profiles/minikube/config.json ... I0415 18:04:17.908387 99240 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubelet.sha256 I0415 18:04:17.908394 99240 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubeadm.sha256 I0415 18:04:17.908387 99240 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubectl.sha256 I0415 18:04:17.908641 99240 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/darwin/amd64/kubectl.sha256 I0415 18:04:17.908666 99240 cache.go:185] Successfully downloaded all kic artifacts ```
medyagh commented 3 years ago

@sharifelgamal

sharifelgamal commented 3 years ago

The only way I can see this happening is if the addon itself can't find the GCP project at runtime. In that case it fails silently and those env vars never get set.
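For reference, a minimal sketch of how to check both sides of this (assuming the addon is already enabled and a workload pod exists; `<pod-name>` is a placeholder for one of your pods):

```
# 1. Confirm the host actually has a project configured, since gcp-auth
#    reads it from gcloud config or the GOOGLE_CLOUD_PROJECT env var.
gcloud config get-value project
echo "$GOOGLE_CLOUD_PROJECT"

# 2. Check whether the project env vars were injected into a running pod.
kubectl exec <pod-name> -- env | grep -i project
```

If step 1 returns nothing, the silent-failure path described above is the likely cause.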

If you are sure the project is set (either via the env var or via `gcloud config set project`), then try running `minikube addons enable gcp-auth --refresh`, which will be available in minikube 1.20. That will tear down and recreate your pods with the correct config.
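For example (a sketch only; the `-p cloud-run-dev-internal` flag just targets the Cloud Code profile seen in the logs above, and `--refresh` requires minikube 1.20+):

```
# Re-run the addon so it re-reads the project and recreates the pods
# with the project env vars injected.
minikube addons enable gcp-auth --refresh -p cloud-run-dev-internal

# Until 1.20 is available, the workaround noted earlier in this issue
# is to reset the cluster state entirely.
minikube delete --all --purge
```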