I also checked the node's persistent disk performance by timing `sha256sum` over a random 1MB file via SSH, but it was super fast.
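Roughly what I ran on the node (the PD-backed path is illustrative; point it at wherever the PD is actually mounted):

```sh
# Write a random 1MB file to the persistent disk, then time a hash over it.
dd if=/dev/urandom of=/mnt/pd/rand1m bs=1M count=1
time sha256sum /mnt/pd/rand1m
```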
So maybe the filesystem is not the problem; maybe the load balancer is misconfigured instead? When a response fits in a single packet it's as fast as normal, but bigger responses seem to trigger the strange behavior.
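One way to probe that hypothesis, assuming the server honors `Range` requests (the sizes are arbitrary; the IP is the LB address used in the tests below):

```sh
# Time fetches of increasing size through the LB. If only the larger
# fetches are slow, the problem correlates with response size.
for bytes in 500 1400 5000 50000; do
  curl -s -r 0-$bytes -o /dev/null \
       -w "first $bytes bytes: %{time_total}s\n" \
       http://104.154.86.8/static/vue.js
done
```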
Also checked MTUs (cf. https://cloud.google.com/compute/docs/troubleshooting#communicatewithinternet), but both the GKE node itself and the container inside it had MTU 1460.
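The checks looked something like this (the pod name is made up; reading sysfs avoids depending on `ip` being present in the busybox image):

```sh
# On the GKE node, over SSH:
cat /sys/class/net/eth0/mtu   # -> 1460

# Inside the container:
kubectl exec bonsai-prod-frontend-pod -- cat /sys/class/net/eth0/mtu   # -> 1460
```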
Also tested wget and sha256sum (probably provided by busybox) inside the pod container; they're fast too. More evidence that PD is not the problem here.
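Something along these lines, assuming the pod serves /static/vue.js itself (the pod name is hypothetical, and `time` here is the busybox applet):

```sh
# Fetch the file from inside the pod, bypassing the LB entirely, then hash it.
kubectl exec bonsai-prod-frontend-pod -- sh -c \
  'wget -q -O /tmp/vue.js http://localhost/static/vue.js && time sha256sum /tmp/vue.js'
```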
Turning down one service temporarily with

`kubectl scale --replicas=0 rc/bonsai-prod-frontend-rc`

didn't help the speed of

`wget --no-cache -O /dev/null http://104.154.86.8/static/vue.js`

so the "load balancer ingress packets are incorrectly handled by multiple different services" hypothesis is rejected.
Confirmed this is not related to PD: adding garbage text to the API proto response (which never touches the disk) and disabling gzip resulted in the same kind of slow download.
A similar case for an AWS-backed cluster: https://github.com/kubernetes/kubernetes/issues/11632
I have no idea why, but recreating all SVCs (along with their LBs) solved the issue. Maybe an older version of Kubernetes / GKE had a problem with LB setup or something.
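The fix amounted to something like this per service (the service name and manifest path are illustrative; deleting a Service of type LoadBalancer tears its LB down, and recreating it provisions a fresh one):

```sh
kubectl delete svc bonsai-prod-frontend
kubectl create -f bonsai-prod-frontend-svc.yaml
```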
Very strange TCP packets: `ab -c 1 -n 1 "http://bonsai-staging.xanxys.net/static/vue.js"`
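To look at the packets themselves, one could capture while ab runs, e.g. (interface name and capture duration assumed):

```sh
# Capture traffic to/from the staging host during a single ab request,
# then inspect the pcap (e.g. in Wireshark) for retransmits or odd windows.
sudo timeout 15 tcpdump -i eth0 -w lb.pcap host bonsai-staging.xanxys.net &
ab -c 1 -n 1 "http://bonsai-staging.xanxys.net/static/vue.js"
wait
```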