miekg opened this issue 7 years ago
Bare-machine performance, running CoreDNS with ./coredns -dns.port=10053:
make TIME=1 SERVER=147.75.204.217 PORT=10053 queryperf
Queries per second: 31696.025203 qps
Now do a port forward with iptables (to mimic kube-proxy):
root@node1:~# iptables -t nat -A PREROUTING -p tcp --dport 1053 -j REDIRECT --to-port 10053
root@node1:~# iptables -t nat -A PREROUTING -p udp --dport 1053 -j REDIRECT --to-port 10053
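To confirm the REDIRECT rules are actually matching traffic during a run, the NAT table counters can be inspected (standard iptables usage, not from the original thread — needs root):

```shell
# List PREROUTING rules in the nat table with packet/byte counters;
# the pkts column on the REDIRECT rules shows how many queries were rewritten.
iptables -t nat -nvL PREROUTING

# Zero the counters between runs so each queryperf run is measured cleanly.
iptables -t nat -Z PREROUTING
```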
Now hitting port 1053 instead, it hovers around 26-30 Kqps, i.e.:
Queries per second: 29693.303767 qps
Queries per second: 26807.221088 qps
Now hitting the pod running CoreDNS w/ flannel and docker:
root@master:~/perf-tests/local-perf# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
coredns-42r2w 1/1 Running 0 32m 10.244.2.2 node3
coredns-k87h8 1/1 Running 0 32m 10.244.3.2 node2
Testing:
make TIME=1 SERVER=10.244.2.2 PORT=53 queryperf
Queries per second: 19247.481316 qps
Queries per second: 18496.703226 qps
Starting a docker container that sets up port forwarding:
root@node1:~# docker run -p 147.75.204.217:1054:1054/udp coredns/coredns:007 --dns.port 1054
.:1054
2017/05/16 15:43:58 [INFO] CoreDNS-007
CoreDNS-007
Testing from the other server:
Queries per second: 19379.546765 qps
Queries per second: 20216.806338 qps
This is not using flannel, because I'm hitting the docker-forwarded port directly.
Conclusion(?): once traffic hits docker we lose ~40% of performance?
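For reference, the drops implied by the figures above, taking the bare-metal 31696 qps as the baseline and the worst iptables run, the flannel pod run, and the best docker -p run respectively:

```shell
# Compute percentage qps drop per path relative to the bare-metal baseline.
# All numbers are the qps figures quoted earlier in this issue.
awk 'BEGIN {
  base = 31696
  printf "iptables REDIRECT: %.1f%% drop\n", (base - 26807) / base * 100
  printf "flannel pod:       %.1f%% drop\n", (base - 19247) / base * 100
  printf "docker -p forward: %.1f%% drop\n", (base - 20216) / base * 100
}'
```

So the "40%" is really ~39% through flannel and ~36% through the docker port forward, against ~15% for plain iptables REDIRECT.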
A type 0 server is used; the current spec is:
4 Physical Cores @ 2.4 GHz
(1 × Atom C2550)
8 GB of DDR3 RAM
80 GB of SSD
(1 × 80 GB)
1Gbps Network
(2 × 1Gbps w/ TLB)
cc @johnbelamaric 40% of performance is eaten by docker....
@miekg It's hard to tell: are you hitting the forwarded port from the same host?
Wow, that does seem excessive. I'll ask the question on the docker-networking slack, see if someone there has any insight.
@miekg Try it from a different host if you are not already - @cpuguy83 mentioned in the slack channel that local traffic will go through a user space proxy for port forwarding.
It's via a remote host, master to a node in the cluster. No local traffic. But still smells like userspace proxy.
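One way to check whether the userland proxy is in the path (a hedged sketch: docker-proxy and the --userland-proxy daemon flag exist in this era of Docker, but verify against your version — 1.11 still starts the daemon as "docker daemon" rather than "dockerd"):

```shell
# If docker-proxy processes are bound to the published port, the userland
# proxy is handling forwarded traffic; otherwise it is iptables DNAT.
pgrep -af docker-proxy

# To compare, restart the daemon with the userland proxy disabled
# ("docker daemon --userland-proxy=false" on 1.11) and re-run queryperf.
dockerd --userland-proxy=false
```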
@miekg Can you try hitting the bridge IP directly?
Sure, but how do you do that remotely?
@miekg Set up a static route.
e.g. route add -net <default bridge net> <remote host IP>
Hmm, not sure if I follow:
root@master:~# route add -net 172.17.0.0/16 147.75.204.217
SIOCADDRT: No such device
Where the .217 is node1, where (in some form or another) CoreDNS will run. But both master and node1 run docker (because it's a k8s cluster).
docker version: 1.11.2-0~xenial
see https://github.com/moby/moby/issues/7857 as well
Sorry, different versions of route behave differently.
Try:
# ip route add 172.17.0.0/16 via 147.75.204.217
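After adding the route, ip route get shows which next hop the kernel would pick for a bridge address (standard iproute2 commands; the addresses are the ones from this thread):

```shell
# Route the docker bridge subnet toward node1's public address.
ip route add 172.17.0.0/16 via 147.75.204.217

# Confirm a container IP on that subnet resolves to the new next hop.
ip route get 172.17.0.2
```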
I'm doing some simple perf tests on a k8s cluster. The cluster is running on Packet; nodes are type 0 machines.