eddytruyen opened this issue 7 years ago
> bboreham/weave2

Please don't use that. It was uploaded to demonstrate a bug in Docker, and is not intended for any other purpose. When we finish work on the swarm-compatible plugin we will announce it and publish it under the weaveworks repository.
Thank you for posting such a detailed report. It will take some time to digest.
@bboreham Is there a date yet for the release of the swarm-compatible plugin?
@marccarre thanks for the update!
I installed Weave in Kubernetes via the CNI plugin, and in Docker Swarm integrated mode via the V2 Docker plugin bboreham/weave2. In both setups I experienced problems installing Weave correctly. Moreover, the Weave Docker plugin exhibits lower performance than other networks when running a YCSB benchmark workload on a single mongo service. Finally, NodePorts in Docker Swarm are always routed via the default ingress network of type overlay, regardless of the network chosen by the application deployer.
I hope the following findings and conclusions are helpful to your ongoing work.
1) Kubernetes deployment problems

I first set up a Kubernetes cluster on Ubuntu Xenial VMs using the kubeadm tool and the flannel CNI plugin. Thereafter I removed the flannel CNI plugin and installed the Weave daemonsets using its CNI YAML file. This deployment completed without errors. However, opening a connection to a mongo service via the clusterIP address of the mongo service did not work.

iptables --list

showed that rules for flannel were still active. I rebooted each VM, and then connecting to the mongo service via Weave worked correctly.

2) Docker Swarm deployment problems

I set up a Docker Swarm cluster by installing docker-engine version 17.03.0-ce, build 60ccb22. I installed the Weave plugin as follows:
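A minimal sketch of the install, assuming the standard Docker V2 plugin flow for the plugin named above:

```
# Assumption: standard V2 plugin install; grant the requested network
# permissions when prompted
docker plugin install bboreham/weave2
```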
I was able to ping a mongo service. However, similar to the Kubernetes setup, I was not able to connect to a remote mongo service from another mongo container using the cluster IP address (e.g. mongo --host <cluster-IP> did not work).
The key to solving the above problem is to run

weave expose

on each node.
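A minimal sketch of applying that fix cluster-wide, assuming the weave script is on each node's PATH and using hypothetical hostnames:

```
# "weave expose" connects the host's network stack to the Weave network,
# which lets host-routed cluster-IP traffic reach Weave-attached containers.
# node1..node3 are hypothetical hostnames.
for node in node1 node2 node3; do
  ssh "$node" weave expose
done
```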
3) Performance comparison

3.1) I compared four mongo-service setups:
a) Mongo service deployed in the above kubeadm setup, without a NodePort
b) Mongo service deployed in the above kubeadm setup, with a NodePort
c) Mongo service deployed in the above Docker Swarm setup, attached to the Weave network
c.1) connect via the cluster IP address
c.2) connect via the NodePort (30000)
d) Mongo service deployed in Docker Swarm as in setup (c), but connected to a default overlay network
d.1) connect via the cluster IP address
d.2) connect via the NodePort (30000)
3.2) Experiment: I ran the following simple YCSB workload from https://github.com/brianfrankcooper/YCSB:
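For reference, a minimal sketch of a YCSB run against mongo; the workload file, record count, and operation count below are assumptions, not the exact values used:

```
# Load phase: insert the initial records (counts are hypothetical)
./bin/ycsb load mongodb -s -P workloads/workloada \
  -p mongodb.url="mongodb://<cluster-IP>:27017/ycsb" \
  -p recordcount=100000

# Run phase: execute the benchmark and report latency percentiles
./bin/ycsb run mongodb -s -P workloads/workloada \
  -p mongodb.url="mongodb://<cluster-IP>:27017/ycsb" \
  -p operationcount=100000
```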
All setups were deployed on three OpenStack VMs in an OpenStack private cloud. The three VMs run on the same set of three physical machines, connected by a high-speed network. Each VM has 2 virtual CPUs pinned to exclusive physical cores of the same socket (hyperthreading is enabled) and 4 GB RAM.
To ensure uniformity, nodeSelectors in Kubernetes and placement constraints in Docker Swarm pin all setups to the exact same node topology:
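A minimal sketch of both pinning mechanisms, assuming a hypothetical node label mongo=true and hostname node1:

```
# Kubernetes: label the target node, then reference it from the pod spec
kubectl label node node1 mongo=true
# (the mongo pod spec then carries:  nodeSelector: { mongo: "true" } )

# Docker Swarm: pin the service to the same node with a placement constraint;
# "weavenet" and the image tag are hypothetical
docker service create --name mongo \
  --constraint 'node.hostname == node1' \
  --network weavenet mongo:3.4
```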
3.3) Results for average, 95th and 99th percentile (ms):
3.4) Findings

c.1 (Docker Swarm + Weave, mongo service invoked via a cluster IP) shows deteriorated performance. All other setups are consistent with one another.
3.5) Threats to validity
4) Conclusion

The Docker Weave plugin has lower performance than the Kubernetes Weave CNI plugin for cluster IP connections.

NodePorts in the Docker Swarm + Weave setup are connected to the default ingress network of type overlay. This is confirmed by the following observation: when I inspected the mongo service in setup (c), it showed that the service has two endpoints, one connected to the ingress network and one connected to the Weave network.
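For reference, the two endpoints show up as two virtual IPs in the service inspect output; a minimal sketch, assuming the service is named mongo:

```
# Lists one virtual IP per attached network; expect two entries here,
# one for the ingress network and one for the Weave network
docker service inspect mongo --format '{{json .Endpoint.VirtualIPs}}'
```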