StevenACoffman opened this issue 7 years ago
The network performance test we use is probably not going to match what you see in iperf3. Currently it is a basic test to see how fast we can write/read from the socket.
Would you be able to share your results?
Sure, but I think I have too many messy variables in the stuff I've collected on AWS to be entirely coherent. I ran some tests in a Highly Available Private Cluster With Bastion (subdomain) spanning 3 availability zones, and some in a Single Master Private Cluster, and on some runs I was also comparing how things were affected by using the experimental gossip-based clusters (as opposed to DNS). I hadn't been trying to be very scientific about it. Let me dig around a bit.
BTW, https://github.com/solarwinds/containers/blame/master/cnpt/README.md#L49 should be:
kubectl apply -f agent-daemonset.yaml
Ah, I found the checkbox to contribute telemetry back to you. Silly me. Also, @derhally do you have some recommended containers to put through their paces? I've just been picking somewhat randomly from whatever I happen to have running. Few of them pick up the endpoint OK.
@StevenACoffman thanks for pointing out the readme mismatch. It's updated now.
@derhally @leecalcote Here are some more iperf3 results in AWS, using kops-built Kubernetes clusters.
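For anyone who wants to gather comparable numbers, this is roughly how I collect pod-to-pod iperf3 figures. The pod names are placeholders, and I'm assuming the networkstatic/iperf3 image (whose entrypoint is iperf3); nothing here is part of cnpt itself:

```sh
# Server: a pod running iperf3 in listen mode
kubectl run iperf3-server --image=networkstatic/iperf3 --restart=Never -- -s

# Grab the server pod's IP once it's running
kubectl get pod iperf3-server -o wide

# Client: a one-off pod that runs a 30 second test against the server pod's IP
kubectl run iperf3-client --image=networkstatic/iperf3 --restart=Never --rm -it -- -c <server-pod-ip> -t 30
```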
@StevenACoffman The only recommendation I can give is to pick containers that you know have access to each other. We still need to improve the tool to help you pick two containers, or to test that two containers can actually reach one another.
Also, due to a bug, containers that use the host network will not work at the moment, so you definitely need to choose containers on a bridge or overlay network.
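For example, to sanity-check a pair of containers before running the test, something along these lines works (the pod names are just placeholders; this isn't built into cnpt):

```sh
# Make sure the pod is NOT on the host network (this should print "false" or nothing)
kubectl get pod <target-pod> -o jsonpath='{.spec.hostNetwork}'

# Note the target pod's IP, then confirm the source pod can reach it
# (assuming the source container has ping; curl or wget work just as well)
kubectl get pod <target-pod> -o wide
kubectl exec <source-pod> -- ping -c 3 <target-pod-ip>
```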
Thanks for providing those stats. Very interesting stuff to look over and digest.
@derhally Glad it helped! Please consider adding an example or recommended container here (or better yet, a deployment yaml + kubectl commands) to reduce the barrier for people to provide you with accurate/comparable statistics.
For instance, from here:
```sh
# Create a bootstrap master
kubectl create -f examples/redis/redis-master.yaml

# Create a service to track the sentinels
kubectl create -f examples/redis/redis-sentinel-service.yaml

# Create a replication controller for redis servers
kubectl create -f examples/redis/redis-controller.yaml

# Create a replication controller for redis sentinels
kubectl create -f examples/redis/redis-sentinel-controller.yaml

# Scale both replication controllers
kubectl scale rc redis --replicas=3
kubectl scale rc redis-sentinel --replicas=3

# Delete the original master pod
kubectl delete pods redis-master
```
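Something even more minimal would also do the job. A rough sketch of the kind of thing I have in mind (the names and image are just placeholders I made up, not anything official): two plain pods on the pod network that everyone could point cnpt at.

```yaml
# cnpt-sample.yaml: two nginx pods on the bridge/overlay network,
# so everyone measures against the same pair of containers
apiVersion: v1
kind: Pod
metadata:
  name: cnpt-sample-a
  labels:
    app: cnpt-sample
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: cnpt-sample-b
  labels:
    app: cnpt-sample
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```

Then it's just `kubectl create -f cnpt-sample.yaml` and people can run the same pair everywhere.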
This is a fascinating tool, as we are trying to determine which CNI networking option to standardize on (weave, flannel, calico, canal, etc.) in our Kubernetes clusters on AWS.
However, we are seeing somewhat different results using cnpt compared to this analysis using iperf3. I know it's a bit of an apples-to-oranges comparison, but I wondered if you had any thoughts or recommendations at this point.