Closed mpapovic closed 6 years ago
@mpapovic I cannot reproduce these timings. Maybe you have an issue on (at least) one of your nodes?
Install the cluster using kops (1.7.1):
export MASTER_ZONES=us-west-2a,us-west-2b,us-west-2c
export WORKER_ZONES=us-west-2a,us-west-2b,us-west-2c
export KOPS_STATE_STORE=s3://k8s-xxxxxx-01
export AWS_DEFAULT_REGION=us-west-2
kops create cluster \
--name uswest2-01.rocket-science.io \
--cloud aws \
--master-zones $MASTER_ZONES \
--zones $WORKER_ZONES \
--master-size m3.medium \
--node-count 6 \
--node-size m3.xlarge \
--ssh-public-key ~/.ssh/id_rsa.pub \
--dns-zone domain.com \
--topology private \
--networking flannel \
--bastion="true" \
--authorization=RBAC \
--dns-zone=uswest2-01.rocket-science.io \
--yes
Install the echo headers deployment:
kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/http-svc.yaml
kubectl scale deployment http-svc --replicas=10
Create the ingress rule:
echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-svc
spec:
  rules:
  - host: echoheaders.uswest2-01.rocket-science.io
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
" | kubectl create -f -
Install steps from the deploy guide:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml \
| kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml \
| kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml \
| kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml \
| kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml \
| kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml \
| kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml \
| kubectl apply -f -
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l4.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l4.yaml
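Before load testing, it can help to confirm that the ingress path actually routes. A hedged sketch (the `ingress-nginx` service name and namespace match the deploy manifests above, but verify them in your cluster; this needs a live cluster and a provisioned ELB):

```shell
# Look up the ELB hostname that service-l4.yaml provisioned.
LB=$(kubectl -n ingress-nginx get svc ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# Send one request with the ingress host header before starting vegeta;
# a 200 response with the echoed headers confirms the routing path works.
curl -s -H "Host: echoheaders.uswest2-01.rocket-science.io" "http://$LB/" | head -n 20
```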
From my laptop:
echo "GET http://echoheaders.uswest2-01.rocket-science.io" | vegeta attack -duration=10s -rate=300 | tee results.bin | vegeta report
Requests [total, rate] 3000, 300.10
Duration [total, attack, wait] 18.514211235s, 9.996665401s, 8.517545834s
Latencies [mean, 50, 95, 99, max] 1.104870547s, 526.198874ms, 4.312779517s, 9.814004164s, 16.699943913s
Bytes In [total, mean] 2058000, 686.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:3000
Error Set:
@aledbf I've tried the same setup on my laptop with minikube and the results are ±identical to yours.
@mpapovic can we close this issue then?
@aledbf, the problem still exists when you build the cluster on AWS.
@mpapovic please run the test from the bastion host. We have no control over how you reach the cluster or the latencies you see from outside. If you run the test from the bastion you should see something like:
$ echo "GET http://echoheaders.uswest2-01.rocket-science.io" | vegeta attack -duration=10s -rate=300 | tee results.bin | vegeta report
Requests [total, rate] 3000, 300.10
Duration [total, attack, wait] 9.99967892s, 9.996665503s, 3.013417ms
Latencies [mean, 50, 95, 99, max] 4.155228ms, 4.015825ms, 5.744591ms, 8.710765ms, 40.174833ms
Bytes In [total, mean] 2058000, 686.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:3000
Error Set:
@aledbf These are the results from the bastion:
echo "GET https://domain.com/echo" | vegeta attack -duration=10s -rate=300 | vegeta report
Requests [total, rate] 3000, 300.10
Duration [total, attack, wait] 12.263949677s, 9.996665491s, 2.267284186s
Latencies [mean, 50, 95, 99, max] 189.420976ms, 137.12567ms, 596.73114ms, 1.277557503s, 3.490617397s
Bytes In [total, mean] 1636210, 545.40
Bytes Out [total, mean] 0, 0.00
Success [ratio] 98.73%
Status Codes [code:count] 0:38 200:2962
@mpapovic ok, that means you have networking issues in your cluster. This is not related to the ingress controller.
Closing. Please use kubernetes-user slack channel to get help.
@aledbf this is the result from vegeta when I test directly against the echo pod from another node, on the same network but without the ingress:
echo "GET http://100.96.19.70:8080" | ./vegeta attack -duration=10s -rate=300 | ./vegeta report
Requests [total, rate] 3000, 300.10
Duration [total, attack, wait] 9.997550249s, 9.996665412s, 884.837µs
Latencies [mean, 50, 95, 99, max] 1.355932ms, 961.755µs, 1.066723ms, 8.64987ms, 75.965425ms
Bytes In [total, mean] 933000, 311.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:3000
If you are sure that the ingress is not the problem, can you give me a hint about which part of the cluster I should look at?
@mpapovic you need to test from the same node where the ingress controller is running, against the endpoints (the contents of the upstream block in nginx.conf).
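The steps above can be sketched like this (the controller pod name is a placeholder; pick a real one from `kubectl -n ingress-nginx get pods`, and the endpoint IP is the one from the test below):

```shell
# List the endpoint IPs that nginx proxies to; these are what appear
# in the upstream block of the generated nginx.conf.
kubectl get endpoints http-svc -o jsonpath='{.subsets[*].addresses[*].ip}'

# Inspect the rendered upstream block inside the controller pod
# (pod name is hypothetical; substitute your own).
kubectl -n ingress-nginx exec nginx-ingress-controller-xxxxx -- \
  grep -A 12 'upstream' /etc/nginx/nginx.conf

# Then, from the node running that controller pod, attack one endpoint directly:
echo "GET http://100.96.19.70:8080" | vegeta attack -duration=10s -rate=300 | vegeta report
```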
The problem was with SSL termination on the ingress when using proxy-protocol=true. I've moved SSL termination to the AWS LB and now it's OK. Your latency was good because your test used HTTP while mine used HTTPS.
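For reference, the change described here amounts to terminating TLS at the ELB instead of at nginx with proxy protocol. A hedged sketch of the two relevant pieces (the annotation names are the standard in-tree AWS cloud provider ones; the certificate ARN is a placeholder):

```yaml
# Annotations on the ingress-nginx Service: the ELB terminates TLS (L7 mode).
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:XXXXXXXXXXXX:certificate/XXXXXXXX"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
---
# And in the nginx-configuration ConfigMap, proxy protocol must be off,
# since the ELB now speaks plain HTTP to nginx in this mode:
data:
  use-proxy-protocol: "false"
```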
Is this a BUG REPORT or FEATURE REQUEST?: /kind bug
What happened: When testing high load with vegeta (we also tried different scenarios with Load Impact), we get considerably worse performance than expected. We are running 6 m3.xlarge nodes with 10 echoheaders pods and the nginx-ingress-controller as a DaemonSet (we also tried it as a Deployment, with the same results).
We tried the default kube network and also created a cluster with flannel; the results are the same.
These are the results with vegeta:
What you expected to happen: Latency should be lower.
How to reproduce it (as minimally and precisely as possible):
Cluster created with kops:
kops create cluster --cloud aws \
  --node-count 6 \
  --node-size m3.xlarge \
  --zones eu-west-1a,eu-west-1b,eu-west-1c \
  --master-size m3.large \
  --master-zones eu-west-1a,eu-west-1b,eu-west-1c \
  --dns-zone domain.com \
  --topology private \
  --networking flannel \
  --bastion="true" \
  --authorization=RBAC
This is my nginx controller config:
Environment: