metral / corekube

CoreOS + Kubernetes + OpenStack - The simplest way to deploy a POC Kubernetes cluster using a Heat template
Apache License 2.0

timeout for overlord and discovery #24

Closed v1k0d3n closed 8 years ago

v1k0d3n commented 8 years ago

First, I wanted to say that this is a great project you have here. I work with your folks on the openstack-ansible side, and you Rackspace folks are awesome with the community!

I'm running into a bit of an issue with overlord and discovery timing out, and I was wondering: should this work for private installs of openstack-ansible as well? I was trying to load the openstack.yml file without much luck, since CoreOS was reporting failed units, among other things. Getting this working would be a huge win, since I'd like to demo some things around CoreOS for our folks internally. Any ideas what could be causing the issues? If you need logs or more, just tell me what to grab and I'll provide it.
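
For reference, here's a minimal sketch of how I've been gathering diagnostics from a node (this assumes SSH access as the default `core` user; the IP and unit name are placeholders):

```bash
# SSH into the node (placeholder IP):
ssh core@<node-ip>

# List every systemd unit that failed to start:
systemctl list-units --state=failed

# Pull the full journal for a failing unit (placeholder unit name):
journalctl -u <failed-unit> --no-pager
```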

Thanks for everything, including the awesome project!

metral commented 8 years ago

Thank you for the kind words, and for reporting the timeout you're experiencing.

Could you please specify where in the process you're seeing the timeout, and include any logs or other information that could help me recreate the problem?
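
If it helps, one way to pinpoint where Heat is stalling is to watch the stack's resources and events as it builds (`corekube` below is a placeholder for whatever you named your stack):

```bash
# Show each resource's status so you can see which one is stuck:
heat resource-list corekube

# Show the stack's event stream, including state transitions:
heat event-list corekube
```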

The corekube-openstack.yaml stack has been tested and does work for a private install via openstack-ansible. The system it was tested against is based on an older deployment of openstack-ansible and should be updated; nevertheless, the only OpenStack components used are Glance, Nova, Neutron, and Heat, so as long as those are all functional there shouldn't be a reason for any timeout.
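
As a rough sanity check under that assumption, the per-project CLIs can confirm each of the four services is responding; any of these erroring out points at the component to investigate:

```bash
glance image-list    # Glance: image service is reachable
nova service-list    # Nova: compute services are up and enabled
neutron agent-list   # Neutron: L2/L3/DHCP agents are alive
heat stack-list      # Heat: orchestration API is reachable
```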

v1k0d3n commented 8 years ago

So it seems like the cluster is up and operational. I may have created this issue a little too quickly, or while things were still building out. I read more about the overlord node, and I think I understand it a little better now. Thanks for the quick reply, and sorry for not getting back to you sooner. I have some questions about your thoughts on SDN (related to CNI) and service exposure, but I may just try to hit you up on the k8s Slack if you don't mind? Great job... nice, clean deployment/repo!