Closed: plambert closed this issue 2 years ago.
I have the same problem after updating to version 18.05.0-ce-mac67 (25042). I can still run kubectl commands even though Docker for Mac is showing "Kubernetes is starting..." (under Preferences... -> Kubernetes), though I have waited for 20 min. The previous version (18.05.0-ce-mac66) attached localhost as the External IP of my LoadBalancer service; now it stays <pending>, and a cURL command like curl http://localhost/some-url returns curl: (7) Failed to connect to localhost port 80: Connection refused, even though I could use this on the previous version.
$ kgs   # (presumably an alias for kubectl get services)
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
crazy-greyhound-nginx-ingress-controller LoadBalancer 10.96.142.184 <pending> 80:31157/TCP,443:30351/TCP 27m
crazy-greyhound-nginx-ingress-default-backend ClusterIP 10.103.196.198 <none> 80/TCP 27m
...
Diagnostic logs: 2E0B1819-92A4-408C-9548-B52A29AAF164/20180612-214950
Experiencing the same after upgrading to macOS 10.13.5. Also see #2985
I had exactly the same scenario and issue-- even rebooting didn't help. Simply doing the "Reset Kubernetes cluster" on the Reset tab resolved the issue for me. (I didn't have to go so far as to reset to factory defaults.)
I didn't realize Reset was a tab! I thought it's a button that would reset everything at once (not just Kubernetes). Resetting the Kubernetes cluster has helped indeed, although I do hope that this is just a one time thing. I was happy to move from minikube to Docker for Mac primarily because minikube releases broke the Kubernetes cluster often. Hopefully this will not be the case with Docker for Mac in the future.
"Reset Kubernetes cluster" also worked for my case! Thank you so much 😆
Same problem here; Kubernetes is stuck in the starting state.
diagnostic ID 8A8F9D98-E405-4AB4-ABF8-B52AB26DBD61/20180619-104118
Running on OSX version 10.13.4 (17E202)
Docker for Mac: version: 18.05.0-ce-mac67 (1fa4e2acfc1a52f79623add2390604515d32297e)
macOS: version 10.13.4 (build: 17E202)
logs: /tmp/8A8F9D98-E405-4AB4-ABF8-B52AB26DBD61/20180619-104118.tar.gz
[OK] vpnkit
[OK] virtualization hypervisor
[OK] vmnetd
[OK] dns
[OK] driver.amd64-linux
[OK] virtualization VT-X
[OK] app
[OK] moby
[OK] system
[OK] moby-syslog
[OK] kubernetes
[OK] files
[OK] env
[OK] virtualization kern.hv_support
[OK] osxfs
[OK] moby-console
[OK] logs
[OK] docker-cli
[OK] disk
@plambert your logs have:
2018-06-11 10:57:46.097206-0700 localhost com.docker.driver.amd64-linux[10740]: Node is not ready: PIDPressure/False kubelet has sufficient PID available
2018-06-11 10:57:47.102135-0700 localhost com.docker.driver.amd64-linux[10740]: Node is not ready: PIDPressure/False kubelet has sufficient PID available
2018-06-11 10:57:48.097550-0700 localhost com.docker.driver.amd64-linux[10740]: Node is not ready: PIDPressure/False kubelet has sufficient PID available
2018-06-11 10:57:49.097725-0700 localhost com.docker.driver.amd64-linux[10740]: Node is not ready: PIDPressure/False kubelet has sufficient PID available
2018-06-11 10:57:50.102071-0700 localhost com.docker.driver.amd64-linux[10740]: Node is not ready: PIDPressure/False kubelet has sufficient PID available
and yet
$ /usr/local/bin/kubectl.docker --context docker-for-desktop get nodes
NAME STATUS ROLES AGE VERSION
docker-for-desktop Ready master 7d v1.10.3
and
$ /usr/local/bin/kubectl.docker --context docker-for-desktop describe nodes
Name: docker-for-desktop
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=docker-for-desktop
node-role.kubernetes.io/master=
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Mon, 04 Jun 2018 09:36:58 -0700
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Mon, 11 Jun 2018 10:58:31 -0700 Mon, 04 Jun 2018 09:36:44 -0700 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Mon, 11 Jun 2018 10:58:31 -0700 Mon, 04 Jun 2018 09:36:44 -0700 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 11 Jun 2018 10:58:31 -0700 Mon, 04 Jun 2018 09:36:44 -0700 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Mon, 11 Jun 2018 10:58:31 -0700 Mon, 04 Jun 2018 09:36:44 -0700 KubeletReady kubelet is posting ready status
PIDPressure False Mon, 11 Jun 2018 10:58:31 -0700 Thu, 07 Jun 2018 22:30:59 -0700 KubeletHasSufficientPID kubelet has sufficient PID available
I notice that PIDPressure has Status False and yet is mentioned in the main Docker log. I'll check the code which waits for the node ready state to see if it is confused.
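For reference, a standard kubectl jsonpath query can isolate just the Ready condition rather than scanning the whole describe nodes output; on a healthy node it should print True:
$ kubectl --context docker-for-desktop get nodes -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}'
True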
I've fixed a bug in the code which failed to notice the node had become Ready. The fix is in the latest development build which can be downloaded from here: https://download-stage.docker.com/mac/bysha1/98b468326d2c579b87b48c47b2ac7e66c1f0a282/Docker.dmg Note that this build is only suitable for testing -- not production. If you get a chance to try it, let me know how it goes. If it still fails, please upload a fresh set of diagnostics.
Hi @djs55, I tried your version with my previous Docker.qcow2 image, and I now have "Kubernetes is running" back! Thanks!
Hope it works for the others as well :)
@tjamet thanks a lot for the speedy confirmation!
For what it is worth, Kubernetes is now running successfully without your patch @djs55 on version 18.05.0-ce-mac67 (25042). I have edge installed via Homebrew, and the step-by-step fix I followed went through homebrew as well. Not sure why this works and why it doesn't 'require' your patch, but it's something others might try if they need a 'production' build.
@djs55 I still have "Kubernetes is starting..." running your dmg build. Diagnostic ID: 29A56A58-476F-4D93-B4E0-B518C213C244/20180726-012938
I had the same issue and found out my /etc/hosts was empty. Once I restored its original values, Kubernetes started successfully.
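For reference, the functional entries in a stock macOS /etc/hosts are just these three lines:
127.0.0.1       localhost
255.255.255.255 broadcasthost
::1             localhost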
What worked for me was a reset of ALL data, then recreating it all.
I also have this issue. Kubernetes worked fine one time, but after I had to reboot, it got stuck in "Kubernetes is starting".
I tried the steps suggested above, but it still won't start.
$ kubectl cluster-info
Kubernetes master is running at https://localhost:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: EOF
$ kubectl cluster-info dump
Unable to connect to the server: EOF
Yep, getting the same thing. Latest release for macOS (stable). Will test edge too.
Failing on edge too, tried all means of getting it running (yes I uninstalled homebrew too). Moving back to minikube for now.
Same as above: stuck in starting after updating to latest macOS release on both stable and edge channels.
I had it running fine on my Mac, but after upgrading my OS to "High Sierra 10.13.6" I'm running into the same issue. I have already tried a few different stable and edge versions.
Same here, tested both stable and edge, on macOS High Sierra 10.13.6.
Same here as well, at first with the latest stable and now with Edge Version 2.0.0.0-beta1-mac75 (27117) and High Sierra 10.13.6.
Same with macOS 10.14 and 18.06.1-ce-mac73. The problem seems to be with etcd:
{"log":"2018-09-27 09:54:08.188030 I | embed: peerTLS: cert = /run/config/pki/etcd/peer.crt, key = /run/config/pki/etcd/peer.key, ca = , trusted-ca = /run/config/pki/etcd/ca.crt, client-cer
t-auth = true\n","stream":"stderr","time":"2018-09-27T09:54:08.188456442Z"}
{"log":"2018-09-27 09:54:08.188034 W | embed: The scheme of peer url http://localhost:2380 is HTTP while peer key/cert files are presented. Ignored peer key/cert files.\n","stream":"stderr"
,"time":"2018-09-27T09:54:08.188459911Z"}
{"log":"2018-09-27 09:54:08.188038 W | embed: The scheme of peer url http://localhost:2380 is HTTP while client cert auth (--peer-client-cert-auth) is enabled. Ignored client cert auth for
this url.\n","stream":"stderr","time":"2018-09-27T09:54:08.188463072Z"}
{"log":"2018-09-27 09:54:08.222240 C | etcdmain: listen tcp 172.30.105.230:2380: bind: cannot assign requested address\n","stream":"stderr","time":"2018-09-27T09:54:08.22338831Z"}
which does not match any of the interfaces:
br-80b3dd052912 Link encap:Ethernet HWaddr 02:42:52:07:B0:BF
inet addr:172.18.0.1 Bcast:172.18.255.255 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
br-e94472631828 Link encap:Ethernet HWaddr 02:42:3E:FD:B5:C0
inet addr:172.19.0.1 Bcast:172.19.255.255 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
docker0 Link encap:Ethernet HWaddr 02:42:65:D8:4C:63
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:65ff:fed8:4c63/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:928 (928.0 B)
eth0 Link encap:Ethernet HWaddr 02:50:00:00:00:01
inet addr:192.168.65.3 Bcast:192.168.65.255 Mask:255.255.255.0
inet6 addr: fe80::50:ff:fe00:1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:474 errors:0 dropped:0 overruns:0 frame:0
TX packets:487 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:42732 (41.7 KiB) TX bytes:41878 (40.8 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:61919 errors:0 dropped:0 overruns:0 frame:0
TX packets:61919 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:4733663 (4.5 MiB) TX bytes:4733663 (4.5 MiB)
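A quick way to confirm a mismatch like this (a suggested check, not from the original comment) is to look for the address etcd wants to bind from a container sharing the VM's network namespace:
$ docker run --rm --net=host alpine ip addr | grep 172.30 || echo "no interface has a 172.30.x.x address"
If grep finds nothing and the fallback message prints, the bind address in etcd's configuration no longer exists on the VM.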
I found a solution that worked for me: don't change the default Docker subnet 192.168.65.0/24.
Version 18.06.1-ce-mac73 (26764), macOS version 10.14 (18A391)
Encountered the same problem. Tried "Reset Kubernetes cluster" again and again until, at the moment it showed "Kubernetes is stopped", I quickly switched to the Kubernetes tab, unchecked "Enable Kubernetes" and clicked the Apply button; after that Kubernetes no longer tried to start. If switching tabs is not quick enough the Apply button will be disabled; in that case consider mouse-automation software such as AutoHotkey.
@djs55 Is there another download link for a nightly/dev build that we can try out? The link you provided earlier in the thread is no longer valid.
Same problem. Tried rebooting, resetting the Kubernetes cluster, and resetting to factory defaults.
Diagnostic ID EF994FC0-4D79-464C-8814-F3A55010FB09/20181028052021
$ kubectl cluster-info
Kubernetes master is running at https://localhost:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: EOF
$ kubectl cluster-info dump
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
@suncoma thanks for mentioning the Docker subnet -- I investigated and it turns out that the default IP (192.168.65.3) is hardcoded inside the kubernetes cluster API server's certificate. There's a fix in the development branch -- the latest build can be downloaded from https://download-stage.docker.com/mac/master/Docker.dmg
With the fixed build, it should be possible to change the default IP and then install Kubernetes. Note that if the IP is changed after installing Kubernetes, it will break the cluster as the old IP will still be in the certificate and we can't change the certificate of a live cluster. So it's best to change the IP first and install Kubernetes second.
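If you want to verify what is baked into the certificate, the API server's Subject Alternative Names can be inspected from the Mac with openssl (standard openssl/LibreSSL usage, nothing Docker-specific):
$ openssl s_client -connect localhost:6443 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"
On a default install the listed IP addresses should include 192.168.65.3.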
@aditzel @johntellsall could you try the latest development build and let me know if it is any better? If not, could you upload a set of diagnostics and add a comment here with the diagnostics id.
Thanks everyone for your patience!
@djs55, unfortunately the same thing is still happening. I uploaded the following diagnostic log: 44C2BA4A-359B-45EE-9315-D5BA4794176A/20181029221406
Same issue on Mac OS X Sierra, without changing default IPs. One day it worked, the next it's stuck in the "starting" state. Maybe it makes sense to mark this feature as experimental if so many users on different versions have issues? It would save us days if we decided to go for minikube directly...
@marek-obuchowicz could you upload a set of diagnostics when it's stuck in the "starting" state and quote the diagnostic id here? Thanks!
@djs55 sure: D8C4C1BC-B3EE-42E6-9875-74843E9A55E7/20181105105431
Same issue for me on 10.14, Docker version 18.06.1-ce-mac73 (26764). macOS console logs are repeating this over and over:
default 12:12:27.948105 -0500 com.docker.driver.amd64-linux Error while setting up kubernetes: cannot expose the external port: cannot connect to vpnkit: Failed to connect to vpnkit: write unix ->vpnkit.port.sock: write: broken pipe. Retrying in 1s
default 12:12:27.948102 -0500 com.docker.supervisor com.docker.driver.amd64-linux (pid: 47301): stderr: 2018/11/05 12:12:27 Failed to establish 9P session to write unix ->vpnkit.port.sock: write: broken pipe
default 12:12:28.079266 -0500 com.docker.vpnkit DNS lookup docker-for-desktop.<redacted domain name> A: NoSuchRecord
Diagnostics uploaded: 7550C626-0ECC-4A00-BD31-D5BEE50BF2EB/20181105170302
UPDATE: I quit out of the Docker app, and confirmed that the MacOS console logs stopped complaining. On a hunch, I then removed my ~/.kube directory. Re-launched Docker app, and k8s started fine.
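If you try the same thing, a slightly safer variant (a suggestion, not part of the comment above) is to move the directory aside rather than delete it, since other tools may rely on that kubeconfig:
$ mv ~/.kube ~/.kube.bak   # restore with: mv ~/.kube.bak ~/.kube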
I've just noticed that my workstation is very slow; top showed that 250% of CPU is used by the hyperkit process. No Docker containers were running except the Kubernetes containers (while stuck in "Kubernetes is starting..."). It seems that even when k8s fails to start, it still consumes a lot of CPU.
Still present in 2.0.0.0-mac77 (28700). Diagnostic log: 44C2BA4A-359B-45EE-9315-D5BA4794176A/20181109181537
I ran into this issue after using the migration assistant with a new Mac. Was seeing i/o timeout errors in the console. Turns out my /etc/hosts file was completely empty - I believe there's a bug in the migration assistant. Adding an entry for localhost and resetting the Kubernetes cluster fixed the issue.
@lexiross Did you mean the /etc/hosts on your Mac itself (mine seems intact) or somewhere else?
@MartinEmrich the one on my Mac. Pretty sure it was a result of the migration assistant, because my coworker had the same issue after doing a migration.
Hmm ok. Mine was ok, so there seems to be more than one cause for this issue.
The "reset kubernetes cluster" idea above worked for me on MacOS Mojave. It just took a while to see it actually work, probably about 5 minutes.
Tried again after upgrading to Mojave, and again after resetting the cluster, but still same as before. Diagnostic log: 44C2BA4A-359B-45EE-9315-D5BA4794176A/20181127203949
Same here.
macOS version: 10.13.6 (High Sierra), Docker version: 18.09.0
I fixed it. Add the following lines to /etc/hosts:
127.0.0.1 localhost
::1 localhost
Here is how I found it. Stream the Docker logs:
pred='process matches ".*(ocker|vpnkit).*"
|| (process in {"taskgated-helper", "launchservicesd", "kernel"} && eventMessage contains[c] "docker")'
/usr/bin/log stream --style syslog --level=debug --color=always --predicate "$pred"
Analyzing the log, I found:
2018-11-30 08:33:37.468984+0300 localhost com.docker.driver.amd64-linux[38047]: Cannot list nodes: Get https://localhost:6443/api/v1/nodes: dial tcp: lookup localhost: no such host
It means it cannot resolve localhost???!!
Check /etc/hosts. Oops! It is empty. After adding the entries, it looks like this:
# cat /etc/hosts
127.0.0.1 localhost
::1 localhost
Then stream the logs again:
pred='process matches ".*(ocker|vpnkit).*"
|| (process in {"taskgated-helper", "launchservicesd", "kernel"} && eventMessage contains[c] "docker")'
/usr/bin/log stream --style syslog --level=debug --color=always --predicate "$pred"
🎉 It works. k8s is now up and running with docker-for-mac.
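To double-check that localhost resolution is healthy again (a suggested verification, not part of the original comment), query the macOS resolver directly:
$ dscacheutil -q host -a name localhost
name: localhost
ip_address: 127.0.0.1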
@abdennour my hosts file had these entries... But thanks for your log command!
I get these:
2018-11-30 14:32:37.640289+0100 localhost com.docker.driver.amd64-linux[724]: Cannot list nodes: Get https://localhost:6443/api/v1/nodes: EOF
But if I try it myself, something is listening there:
$ curl -k https://localhost:6443/api/v1/nodes
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "nodes is forbidden: User \"system:anonymous\" cannot list nodes at the cluster scope",
"reason": "Forbidden",
"details": {
"kind": "nodes"
},
"code": 403
@MartinEmrich thanks for your update. I suspect the earlier EOF error is expected, because the host software polls localhost:6443 for the kubernetes API, and I would expect an EOF from the port forwarder until the API server binds the port in the container. We should probably stop logging these "expected" errors.
The curl is interesting -- it looks like the API server is responding correctly. Could you try:
$ kubectl --context docker-for-desktop get nodes
NAME STATUS ROLES AGE VERSION
docker-for-desktop Ready master 1d v1.11.2
-- sometimes the reason kubectl commands fail is because the wrong context is being used, so it's good to be specific.
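Relatedly, kubectl config get-contexts shows which contexts exist and which one is active (output illustrative):
$ kubectl config get-contexts
CURRENT   NAME                 CLUSTER                      AUTHINFO             NAMESPACE
*         docker-for-desktop   docker-for-desktop-cluster   docker-for-desktop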
@djs55 I get this:
$ kubectl --context docker-for-desktop get nodes
Unable to connect to the server: EOF
and
$ curl -k https://localhost:6443/api/v1/nodes
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to localhost:6443
Looks like it died a few minutes after my last post.
@MartinEmrich interesting. Could you upload a fresh set of diagnostics in this broken state (Whale menu -> Diagnose and Feedback ... ) and quote the diagnostics id here?
@djs55 here: 1568AC57-833C-44EA-95C7-C2200DDB82FA/20181130140947
@MartinEmrich Looking in the VM I see the kube-apiserver process is not running, and the log has:
2018-11-30T13:36:27Z kubelet I1130 13:36:27.431522 2208 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-docker-for-desktop_kube-system(8eaf08e4c69b828fe24b8db74500818c)"
It would be interesting to see what the kube-apiserver log contains. Unfortunately it's not in the diagnostics, but you can find it by running:
docker ps | grep apiserver
For example
$ docker ps | grep kube-apiserver
06bb0bba26de 821507941e9c "kube-apiserver --au…" 28 hours ago Up 28 hours k8s_kube-apiserver_kube-apiserver-docker-for-desktop_kube-system_5c2b54130afb67a75c140a676d14ac3b_0
cfa5a3f38aec k8s.gcr.io/pause:3.1 "/pause" 28 hours ago Up 28 hours k8s_POD_kube-apiserver-docker-for-desktop_kube-system_5c2b54130afb67a75c140a676d14ac3b_0
Find the id of the container whose command is "kube-apiserver --au…" (i.e. not "/pause") and run docker logs:
$ docker logs 06bb0bba26de
...
I1130 15:34:29.963328 1 controller.go:105] OpenAPI AggregationController: Processing item v1beta1.compose.docker.com
I1130 15:34:29.964746 1 controller.go:116] OpenAPI AggregationController: action for item v1beta1.compose.docker.com: Requeue.
Let me know what the log says. Hopefully it is quite short and has an interesting error in it.
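As a convenience, the same container can be found with docker ps filters in one line (this relies on the kubelet's standard k8s_<container>_<pod>_... container-naming scheme):
$ docker logs $(docker ps --filter name=k8s_kube-apiserver --format '{{.ID}}' | head -n1)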
My /etc/hosts file has the appropriate entries for IPv4 and IPv6 localhost. I tried resetting k8s this morning and now I too am getting the EOF error when getting nodes.
Running docker ps | grep apiserver I see the following output:
5a4b519e3767 k8s.gcr.io/pause-amd64:3.1 "/pause" 2 hours ago Up 2 hours k8s_POD_kube-apiserver-docker-for-desktop_kube-system_91c24f27d8a560dbbc78da739673a1d3_0
Looking at my logs I'm seeing the following error appear:
2018-11-30 13:42:27.549156-0500 localhost com.docker.driver.amd64-linux[6141]: Cannot list nodes: Get https://localhost:6443/api/v1/nodes: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
Just in case I've submitted the following diagnostic log: 44C2BA4A-359B-45EE-9315-D5BA4794176A/20181130185105
@djs55 This doesn't look healthy:
$ docker logs 3a928a143c7a
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I1204 12:02:40.996111 1 server.go:135] Version: v1.10.3
I1204 12:02:40.996332 1 server.go:724] external host was not specified, using 192.168.65.3
I1204 12:02:41.508680 1 feature_gate.go:190] feature gates: map[Initializers:true]
I1204 12:02:41.508759 1 initialization.go:90] enabled Initializers feature as part of admission plugin setup
I1204 12:02:41.509182 1 plugins.go:149] Loaded 10 admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,Initializers,ValidatingAdmissionWebhook,ResourceQuota.
I1204 12:02:41.510268 1 plugins.go:149] Loaded 10 admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,Initializers,ValidatingAdmissionWebhook,ResourceQuota.
I1204 12:02:41.514804 1 master.go:228] Using reconciler: master-count
W1204 12:02:41.603034 1 genericapiserver.go:342] Skipping API batch/v2alpha1 because it has no resources.
W1204 12:02:41.611325 1 genericapiserver.go:342] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1204 12:02:41.613105 1 genericapiserver.go:342] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1204 12:02:41.622808 1 genericapiserver.go:342] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/12/04 12:02:41 log.go:33: [restful/swagger] listing is available at https://192.168.65.3:6443/swaggerapi
[restful] 2018/12/04 12:02:41 log.go:33: [restful/swagger] https://192.168.65.3:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/12/04 12:02:43 log.go:33: [restful/swagger] listing is available at https://192.168.65.3:6443/swaggerapi
[restful] 2018/12/04 12:02:43 log.go:33: [restful/swagger] https://192.168.65.3:6443/swaggerui/ is mapped to folder /swagger-ui/
I1204 12:02:43.069126 1 plugins.go:149] Loaded 10 admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,Initializers,ValidatingAdmissionWebhook,ResourceQuota.
I1204 12:02:46.544585 1 serve.go:96] Serving securely on [::]:6443
I1204 12:02:46.544922 1 controller.go:84] Starting OpenAPI AggregationController
I1204 12:02:46.546722 1 crd_finalizer.go:242] Starting CRDFinalizer
I1204 12:02:46.547567 1 apiservice_controller.go:90] Starting APIServiceRegistrationController
I1204 12:02:46.547627 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1204 12:02:46.547728 1 available_controller.go:262] Starting AvailableConditionController
I1204 12:02:46.547901 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1204 12:02:46.548914 1 crdregistration_controller.go:110] Starting crd-autoregister controller
I1204 12:02:46.548988 1 controller_utils.go:1019] Waiting for caches to sync for crd-autoregister controller
I1204 12:02:46.564884 1 customresource_discovery_controller.go:174] Starting DiscoveryController
I1204 12:02:46.565113 1 naming_controller.go:276] Starting NamingConditionController
I1204 12:02:56.568237 1 trace.go:76] Trace[1834060020]: "Create /api/v1/namespaces/default/events" (started: 2018-12-04 12:02:46.567633011 +0000 UTC m=+5.652092019) (total time: 10.000497474s):
Trace[1834060020]: [10.000497474s] [10.000442469s] END
I1204 12:02:57.802843 1 trace.go:76] Trace[1862380376]: "Create /api/v1/nodes" (started: 2018-12-04 12:02:47.801504988 +0000 UTC m=+6.885964008) (total time: 10.001204656s):
Trace[1862380376]: [10.001204656s] [10.00088309s] END
I1204 12:03:06.572713 1 trace.go:76] Trace[991139568]: "Create /api/v1/namespaces/default/events" (started: 2018-12-04 12:02:56.572394326 +0000 UTC m=+15.656769225) (total time: 10.000299066s):
Trace[991139568]: [10.000299066s] [10.000244562s] END
I1204 12:03:14.815398 1 trace.go:76] Trace[1458927364]: "Create /api/v1/nodes" (started: 2018-12-04 12:03:04.814735059 +0000 UTC m=+23.899109957) (total time: 10.000642245s):
Trace[1458927364]: [10.000642245s] [10.000505777s] END
I1204 12:03:16.576340 1 trace.go:76] Trace[804207738]: "Create /api/v1/namespaces/default/events" (started: 2018-12-04 12:03:06.575891339 +0000 UTC m=+25.660266236) (total time: 10.000424758s):
Trace[804207738]: [10.000424758s] [10.000328198s] END
I1204 12:03:26.580012 1 trace.go:76] Trace[342478048]: "Create /api/v1/namespaces/default/events" (started: 2018-12-04 12:03:16.579358329 +0000 UTC m=+35.663733225) (total time: 10.000488706s):
Trace[342478048]: [10.000488706s] [10.000131462s] END
I1204 12:03:31.824845 1 trace.go:76] Trace[983410702]: "Create /api/v1/nodes" (started: 2018-12-04 12:03:21.823502735 +0000 UTC m=+40.907877642) (total time: 10.001198469s):
Trace[983410702]: [10.001198469s] [10.001076295s] END
I1204 12:03:36.582733 1 trace.go:76] Trace[25081042]: "Create /api/v1/namespaces/default/events" (started: 2018-12-04 12:03:26.582191016 +0000 UTC m=+45.666459243) (total time: 10.000504021s):
Trace[25081042]: [10.000504021s] [10.000466406s] END
I1204 12:03:46.589965 1 trace.go:76] Trace[2011483126]: "Create /api/v1/namespaces/default/events" (started: 2018-12-04 12:03:36.588551677 +0000 UTC m=+55.672819926) (total time: 10.001373055s):
Trace[2011483126]: [10.001373055s] [10.001304788s] END
E1204 12:03:46.646265 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.VolumeAttachment: the server was unable to return a response in the time allotted, but may still be processing the request (get volumeattachments.storage.k8s.io)
E1204 12:03:46.646392 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Namespace: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces)
E1204 12:03:46.646439 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:74: Failed to list *apiregistration.APIService: the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io)
E1204 12:03:46.649656 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.ValidatingWebhookConfiguration: the server was unable to return a response in the time allotted, but may still be processing the request (get validatingwebhookconfigurations.admissionregistration.k8s.io)
E1204 12:03:46.651553 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.MutatingWebhookConfiguration: the server was unable to return a response in the time allotted, but may still be processing the request (get mutatingwebhookconfigurations.admissionregistration.k8s.io)
E1204 12:03:46.651732 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: the server was unable to return a response in the time allotted, but may still be processing the request (get services)
E1204 12:03:46.654044 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *rbac.ClusterRole: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io)
E1204 12:03:46.654274 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.PersistentVolume: the server was unable to return a response in the time allotted, but may still be processing the request (get persistentvolumes)
E1204 12:03:46.654380 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:74: Failed to list *apiextensions.CustomResourceDefinition: the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io)
E1204 12:03:46.654505 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.ServiceAccount: the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts)
E1204 12:03:46.654878 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.LimitRange: the server was unable to return a response in the time allotted, but may still be processing the request (get limitranges)
E1204 12:03:46.655227 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Pod: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
E1204 12:03:46.657776 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *rbac.RoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io)
E1204 12:03:46.657857 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Endpoints: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints)
E1204 12:03:46.657900 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *rbac.ClusterRoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io)
E1204 12:03:46.657938 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *rbac.Role: the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io)
E1204 12:03:46.658385 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *storage.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io)
E1204 12:03:46.658455 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.ResourceQuota: the server was unable to return a response in the time allotted, but may still be processing the request (get resourcequotas)
E1204 12:03:46.658585 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Secret: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets)
E1204 12:03:47.553195 1 storage_rbac.go:157] unable to initialize clusterroles: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io)
I1204 12:03:48.840095 1 trace.go:76] Trace[432238868]: "Create /api/v1/nodes" (started: 2018-12-04 12:03:38.837796765 +0000 UTC m=+57.922064978) (total time: 10.002245382s):
Trace[432238868]: [10.002245382s] [10.00214726s] END
I1204 12:03:57.568368 1 trace.go:76] Trace[1567924272]: "Create /api/v1/namespaces" (started: 2018-12-04 12:03:47.567159779 +0000 UTC m=+66.651428002) (total time: 10.000765853s):
Trace[1567924272]: [10.000765853s] [10.00073227s] END
E1204 12:03:57.569517 1 client_ca_hook.go:78] namespaces "kube-system" is forbidden: not yet ready to handle request
I1204 12:04:05.855268 1 trace.go:76] Trace[2131903137]: "Create /api/v1/nodes" (started: 2018-12-04 12:03:55.853158341 +0000 UTC m=+74.937006316) (total time: 10.002062949s):
Trace[2131903137]: [10.002062949s] [10.001932799s] END
E1204 12:04:07.913178 1 customresource_discovery_controller.go:177] timed out waiting for caches to sync
I1204 12:04:07.913275 1 customresource_discovery_controller.go:178] Shutting down DiscoveryController
I1204 12:04:07.913314 1 naming_controller.go:280] Shutting down NamingConditionController
E1204 12:04:07.913330 1 controller_utils.go:1022] Unable to sync caches for crd-autoregister controller
E1204 12:04:07.913580 1 cache.go:35] Unable to sync caches for AvailableConditionController controller
E1204 12:04:07.913617 1 cache.go:35] Unable to sync caches for APIServiceRegistrationController controller
I1204 12:04:07.913627 1 crd_finalizer.go:246] Shutting down CRDFinalizer
I1204 12:04:07.913870 1 serve.go:136] Stopped listening on [::]:6443
I1204 12:04:07.914038 1 controller.go:90] Shutting down OpenAPI AggregationController
I1204 12:04:07.917862 1 crdregistration_controller.go:115] Shutting down crd-autoregister controller
I1204 12:04:07.919726 1 available_controller.go:266] Shutting down AvailableConditionController
I1204 12:04:07.920979 1 apiservice_controller.go:94] Shutting down APIServiceRegistrationController
I1204 12:04:16.598243 1 trace.go:76] Trace[1643650985]: "Patch /api/v1/namespaces/default/events/docker-for-desktop.156d1f8744030d68" (started: 2018-12-04 12:03:46.596132337 +0000 UTC m=+65.680400567) (total time: 30.00164952s):
Trace[1643650985]: [30.00164952s] [30.001554943s] END
Expected behavior
Within a few minutes of starting Docker for Mac, Kubernetes should be available.
Actual behavior
After several hours, it is still 'starting…'
Information
Diagnostic logs
(Uploaded as 804B8977-06D2-4E2A-BB3E-10FBBA99D1F9/20180611-105749)
Steps to reproduce the behavior