grkvlt closed this 8 years ago
Tested this on Blue Box and SoftLayer by deploying clusters of various sizes, and terminating services on the master node, then verifying that a new master is created to replace the failing one. Also deployed the all-in-one guestbook, and verified that the dashboard and prometheus were working as expected.
Tested the k8s cluster deployment with @mikezaccardo in Blue Box LON with no problems: killed a service on the master node so that ServiceReplacer could take over from the failing master, then deployed the all-in-one guestbook, which worked after the master replacement as expected.
There is a race condition between the masters at:

```shell
if ! etcdctl --peers ${ETCD_ENDPOINTS} get ${FLANNEL_ETCD_PREFIX}/config ; then
    etcdctl --peers ${ETCD_ENDPOINTS} mk ${FLANNEL_ETCD_PREFIX}/config < flannel.json
fi
```
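For illustration, one way the get-then-mk race could be avoided (this is a sketch of an assumed fix, not the PR's actual change): `etcdctl mk` is itself atomic and fails if the key already exists, so each master can simply attempt the create and then fall back to verifying that the key is readable. The stub `etcdctl` below stands in for the real binary so the logic can be demonstrated without a running etcd.

```shell
#!/bin/sh
# Sketch of a race-free flannel config write (an assumption, not the PR's fix).
# The original does get-then-mk, so two masters can both observe "missing" and
# both attempt the create. Since `mk` only succeeds for the first writer, the
# pre-check can be dropped: attempt mk, and treat a failure as success if the
# key turns out to exist.

# Stub etcdctl for illustration only; a real master would invoke the binary.
etcdctl() {
    case "$3" in
        mk)  [ -e /tmp/flannel-config ] && return 1   # key already exists
             cat > /tmp/flannel-config ;;             # "create" from stdin
        get) cat /tmp/flannel-config 2>/dev/null ;;
    esac
}

# Illustrative values; the real script gets these from its environment.
ETCD_ENDPOINTS="http://127.0.0.1:2379"
FLANNEL_ETCD_PREFIX="/coreos.com/network"
echo '{"Network":"10.244.0.0/16"}' > flannel.json    # sample flannel config

write_flannel_config() {
    if ! etcdctl --peers ${ETCD_ENDPOINTS} mk ${FLANNEL_ETCD_PREFIX}/config < flannel.json; then
        # mk lost the race (or etcd is down): succeed only if the key now exists.
        etcdctl --peers ${ETCD_ENDPOINTS} get ${FLANNEL_ETCD_PREFIX}/config > /dev/null
    fi
}
```

Every master can then call `write_flannel_config` unconditionally: the first one creates the key, and the others see `mk` fail but confirm the key via `get`, so all of them exit cleanly.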
@neykov you're right about the race, but it's not specific to this PR. Will fix it in a separate PR.
@mikezaccardo I rebased this with the latest master and on top of #348 so could you please re-test?
Multi-master doesn't work with the kubernetes-location, because it tries to SSH into the HAProxy address. It's not clear how to fix this: can we do something about it here (seemingly not)? Can we add logic to the docker-location to work around it?
@neykov -- deployments still work when k8s-specific blueprints are used; for example:
```yaml
location:
  kubernetes:
    endpoint: "https://192.168.99.100:8443/"
    identity: "test"
    credential: "test"
services:
- type: io.cloudsoft.amp.container.kubernetes.entity.KubernetesPod
  brooklyn.children:
  - type: io.cloudsoft.amp.containerservice.dockercontainer.DockerContainer
    id: wordpress-mysql
    brooklyn.config:
      docker.container.imageName: mysql:5.6
      docker.container.inboundPorts: [ "3306" ]
      env: { MYSQL_ROOT_PASSWORD: "password" }
      provisioning.properties:
        kubernetes.deployment: wordpress-mysql
  - type: io.cloudsoft.amp.containerservice.dockercontainer.DockerContainer
    id: wordpress
    brooklyn.config:
      docker.container.imageName: wordpress:4.4-apache
      docker.container.inboundPorts: [ "80" ]
      env: { WORDPRESS_DB_HOST: "wordpress-mysql", WORDPRESS_DB_PASSWORD: "password" }
```
Merging it!
How did you solve the race condition reported above?
@neykov: @grkvlt said he will solve this in a separate PR; is that correct, @grkvlt?
I got a failure due to it on my first or second deployment of multi-master Kubernetes. Should we hold off on including multi-master capabilities until that's fixed?
Load balanced cluster of Kubernetes masters, fronted by HAProxy
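For context, the topology in the title can be sketched as a TCP-mode HAProxy frontend in front of the apiservers. This is a hypothetical fragment, not taken from the PR; the addresses, ports, and names are illustrative:

```
# Hypothetical haproxy.cfg fragment: load-balance three Kubernetes masters.
frontend k8s-api
    bind *:8443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 10.0.0.11:8443 check
    server master2 10.0.0.12:8443 check
    server master3 10.0.0.13:8443 check
```

TCP mode is used here because the apiserver terminates its own TLS, so HAProxy only needs to forward connections and health-check the backends.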