Hi dfateyev, can you give more background on what you're trying to achieve and the configuration you're using? I'm curious about this kubernetes service.
Sure, we've been using an ingress resource to reach the cluster API endpoint (the "kubernetes" service). The external path of this ingress is set as the "server" parameter in the kubeconfig file, so we access the cluster API via the ingress.
All usual requests (e.g. "kubectl get nodes") work just fine for now, but the issue occurs when the connection requires an upgrade, e.g. in service port-forward requests, as shown above.
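For illustration, a minimal sketch of the kubeconfig cluster entry and of a request that triggers the upgrade (the hostname and service name are taken from examples later in this thread, not necessarily from our real setup):

clusters:
- cluster:
    server: https://env-5802911.xxx/api   # the ingress path in front of the "kubernetes" service
  name: via-ingress

$ kubectl port-forward service/hello-kubernetes 8080:8080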
It looks like we're probably missing some options to enable connection upgrade for the "haproxytech" controller, or some options in the ingress resource to enable the same.
We're going to switch from "jcmoraisjr/haproxy-ingress" to "haproxytech", but we are blocked by this missing functionality. As mentioned before, connection upgrade requests work fine with "jcmoraisjr/haproxy-ingress" and NGINX controllers.
Thanks, just to keep you informed: I copied your ingress configuration and port-forwarded to an arbitrary service on my cluster. Things went well. I'm a bit surprised by the URL of the POST inside your log. It has a /api/api. Is it correct?
I think yes: it's a client-side log, the ingress path is "/api", and the second "api" seems to belong to the k8s API request.
So "/api/api/v1/..." is formed on the client (kubectl side), and by the controller it should be transformed to "/api/v1" with "path-rewrite" above. Generally, everything works for "kubectl get <something>"
requests, passed via ingress.
I used the "kubernetes" service specifically. The "POST" request above is produced by a port-forward request issued there by "kubectl".
Please let me know if you need any additional information on how to reproduce this case. For example, I can provide you with the cluster credentials where this case can be observed.
Hi dfateyev, after deeper examination, we're going to issue a change to allow this setup. Currently we automatically set h2 with ssl; with the annotation "server-proto" set to "h1" along with "server-ssl" enabled, it should work. Indeed, connection upgrade is not possible with the h2 protocol, as you can read here: "HTTP/2 explicitly disallows the use of this mechanism/header; it is specific to HTTP/1.1.". But we need to make a small change in the management of annotations to preserve consistency. We'll keep you informed.
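For reference, the HTTP/1.1 upgrade handshake that kubectl relies on looks roughly like this (a sketch; the response headers match the ones captured later in this thread). HTTP/2 simply has no equivalent of the Upgrade header:

POST /api/v1/namespaces/default/pods/my-shell/portforward HTTP/1.1
Connection: Upgrade
Upgrade: SPDY/3.1

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: SPDY/3.1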
@ivanmatmati many thanks, please let me know when I can test these changes. This missing functionality has been blocking us for a while.
Hi, as promised, a fix has been issued. You can test it on commit cd5a4516 from the master branch, or if you prefer, wait for a dev build of the Docker image. Either way, please test it and give your feedback. After running the new version of the controller, please add the following annotations to your ingress:
haproxy.org/server-proto: h1
haproxy.org/server-ssl: 'true'
It should work then. Have a great day.
Hi @dfateyev,
Starting from today, nightly images are available for all platforms. Today's nightly image was activated manually and contains the commit Ivan mentioned. The image can be pulled from Docker Hub: haproxytech/kubernetes-ingress:nightly
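To move a running controller to it, something along these lines should work (the namespace, DaemonSet and container names here are assumptions; adjust them to your deployment):

$ kubectl -n haproxy-controller set image daemonset/haproxy-ingress \
    haproxy-ingress=haproxytech/kubernetes-ingress:nightly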
Hello @ivanmatmati @oktalz, I have just checked the "nightly" image, and it seems to work fine. Thanks for the quick fix; we'll wait for this improvement in the next version(s).
Hi, thanks for the report and feedback. Have a nice day.
Unfortunately, it turned out that SSL backend connections (which don't require connection upgrade) have stopped working unless the "haproxy.org/server-proto: h1" annotation is set.
The ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kubernetes-api
namespace: default
annotations:
kubernetes.io/ingress.class: haproxy
haproxy.org/path-rewrite: /api(/|$)(.*) /\2
haproxy.org/server-ssl: 'true'
haproxy.org/check: 'false'
spec:
rules:
- http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: kubernetes
port:
number: 443
Version 1.6.2, with non-upgrade connections (works fine):
$ kubectl --v=8 get pods
I0608 15:46:40.428354 121113 round_trippers.go:432] GET https://env-5802911.xxx/api/api/v1/namespaces/default/pods?limit=500
I0608 15:46:40.428365 121113 round_trippers.go:438] Request Headers:
I0608 15:46:40.428370 121113 round_trippers.go:442] Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json
I0608 15:46:40.428374 121113 round_trippers.go:442] User-Agent: kubectl/v1.21.1 (linux/amd64) kubernetes/5e58841
I0608 15:46:40.428380 121113 round_trippers.go:442] Authorization: Bearer <masked>
I0608 15:46:41.281155 121113 round_trippers.go:457] Response Status: 200 OK in 852 milliseconds
I0608 15:46:41.281173 121113 round_trippers.go:460] Response Headers:
I0608 15:46:41.281177 121113 round_trippers.go:463] Server: nginx
I0608 15:46:41.281180 121113 round_trippers.go:463] Date: Tue, 08 Jun 2021 09:46:41 GMT
I0608 15:46:41.281182 121113 round_trippers.go:463] Content-Type: application/json
I0608 15:46:41.281185 121113 round_trippers.go:463] Set-Cookie: route=481a9d4ad6aebdd346bde5f6bfbe4740; Path=/
I0608 15:46:41.281189 121113 round_trippers.go:463] Set-Cookie: SRVGROUP=common; path=/
I0608 15:46:41.281191 121113 round_trippers.go:463] Cache-Control: no-cache, private
I0608 15:46:41.416924 121113 request.go:1123] Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{"selfLink":"/api/v1/namespaces/default/pods","resourceVersion":"22136"},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names","priority":0},{"name":"Ready","type":"string","format":"","description":"The aggregate readiness state of this pod for accepting traffic.","priority":0},{"name":"Status","type":"string","format":"","description":"The aggregate status of the containers in this pod.","priority":0},{"name":"Restarts","type":"integer","format":"","description":"The number of times the containers in this pod have been restarted.","priority":0},{"name":"Age","type":"string","fo [truncated 34650 chars]
The "nightly" build, with non-upgrade connections (fails):
$ kubectl --v=8 get pods
I0608 15:38:14.461460 120437 round_trippers.go:432] GET https://env-5802911.xxxx/api/api?timeout=32s
I0608 15:38:14.461467 120437 round_trippers.go:438] Request Headers:
I0608 15:38:14.461472 120437 round_trippers.go:442] User-Agent: kubectl/v1.21.1 (linux/amd64) kubernetes/5e58841
I0608 15:38:14.461476 120437 round_trippers.go:442] Accept: application/json, */*
I0608 15:38:14.461481 120437 round_trippers.go:442] Authorization: Bearer <masked>
I0608 15:38:15.247952 120437 round_trippers.go:457] Response Status: 502 Bad Gateway in 786 milliseconds
I0608 15:38:15.248088 120437 round_trippers.go:460] Response Headers:
I0608 15:38:15.248174 120437 round_trippers.go:463] Cache-Control: no-cache
I0608 15:38:15.248219 120437 round_trippers.go:463] Server: nginx
I0608 15:38:15.248253 120437 round_trippers.go:463] Date: Tue, 08 Jun 2021 09:38:15 GMT
I0608 15:38:15.248284 120437 round_trippers.go:463] Content-Type: text/html
I0608 15:38:15.248342 120437 round_trippers.go:463] Content-Length: 107
I0608 15:38:15.248368 120437 round_trippers.go:463] Set-Cookie: route=481a9d4ad6aebdd346bde5f6bfbe4740; Path=/
I0608 15:38:15.259070 120437 request.go:1123] Response Body: <html><body><h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
</body></html>
I0608 15:38:15.262050 120437 request.go:1347] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
I0608 15:38:15.262077 120437 cached_discovery.go:121] skipped caching discovery info due to an error on the server ("<html><body><h1>502 Bad Gateway</h1>\nThe server returned an invalid or incomplete response.\n</body></html>") has prevented the request from succeeding
Adding "haproxy.org/server-proto: h1"
fixes the issue for "nightly", but: do all users need to add a new annotation in addition to the existing "haproxy.org/server-ssl" one, for all SSL backends? It can be confusing.
The second point: with the h1 annotation, the "POST", "PUT" and "DELETE" methods appear to be broken:
I0608 17:57:21.221308 128929 request.go:1123] Request Body: {"propagationPolicy":"Background"}
I0608 17:57:21.221479 128929 round_trippers.go:432] DELETE https://env-5802911.xxx/api/api/v1/namespaces/default/pods/hello-kubernetes-7497844c6b-ll2bc
I0608 17:57:21.221508 128929 round_trippers.go:438] Request Headers:
I0608 17:57:21.221527 128929 round_trippers.go:442] Accept: application/json
I0608 17:57:21.221542 128929 round_trippers.go:442] Content-Type: application/json
I0608 17:57:21.221559 128929 round_trippers.go:442] User-Agent: kubectl/v1.21.1 (linux/amd64) kubernetes/5e58841
I0608 17:57:21.221579 128929 round_trippers.go:442] Authorization: Bearer <masked>
I0608 17:57:21.313732 128929 round_trippers.go:457] Response Status: 501 Not Implemented in 92 milliseconds
I0608 17:57:21.313782 128929 round_trippers.go:460] Response Headers:
I0608 17:57:21.313803 128929 round_trippers.go:463] Server: nginx
I0608 17:57:21.313818 128929 round_trippers.go:463] Date: Tue, 08 Jun 2021 11:57:21 GMT
I0608 17:57:21.313828 128929 round_trippers.go:463] Content-Type: text/html
I0608 17:57:21.313838 128929 round_trippers.go:463] Content-Length: 136
I0608 17:57:21.313849 128929 round_trippers.go:463] Set-Cookie: route=481a9d4ad6aebdd346bde5f6bfbe4740; Path=/
I0608 17:57:21.313860 128929 round_trippers.go:463] Cache-Control: no-cache
I0608 17:57:21.313943 128929 request.go:1123] Response Body: <html><body><h1>501 Not Implemented</h1>
.The server does not support the functionality required to fulfill the request.
The "nginx" headers in responses above are from an NGINX intermediate proxy which talks to the cluster's "haproxy-ingress". It shouldn't be the culprit: if I install nginx-ingress or another haproxy ingress controller implementation to the cluster, I see no issues there at all.
Please consider re-opening the issue. Thanks for your efforts!
Hi @dfateyev, I've been able to delete a pod through the API server behind HAProxy with the following configuration:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: kubernetes-api
namespace: default
annotations:
haproxy.org/check: 'false'
haproxy.org/path-rewrite: /api(/|$)(.*) /\2
haproxy.org/server-ssl: 'true'
kubernetes.io/ingress.class: haproxy
server-proto: h1
spec:
rules:
- http:
paths:
- path: /api
pathType: Prefix
backend:
serviceName: kubernetes
servicePort: 443
The result of the pod deletion was:
I0609 10:59:25.786481 33557 request.go:1123] Request Body: {"propagationPolicy":"Background"}
I0609 10:59:25.786632 33557 round_trippers.go:432] DELETE https://172.18.0.2:30443/api/api/v1/namespaces/haproxy-controller/pods/my-shell
I0609 10:59:25.786655 33557 round_trippers.go:438] Request Headers:
I0609 10:59:25.786673 33557 round_trippers.go:442] Accept: application/json
I0609 10:59:25.786719 33557 round_trippers.go:442] Content-Type: application/json
I0609 10:59:25.786741 33557 round_trippers.go:442] User-Agent: kubectl/v1.21.0 (linux/amd64) kubernetes/cb303e6
I0609 10:59:25.786762 33557 round_trippers.go:442] Authorization: Bearer <masked>
I0609 10:59:25.801547 33557 round_trippers.go:457] Response Status: 200 OK in 14 milliseconds
With a creation:
I0609 POST https://172.18.0.2:30443/api/api/v1/namespaces/haproxy-controller/pods/my-shell/attach?container=my-shell&stdin=true&stdout=true&tty=true
I0609 11:39:53.397697 41940 round_trippers.go:438] Request Headers:
I0609 11:39:53.397719 41940 round_trippers.go:442] X-Stream-Protocol-Version: v4.channel.k8s.io
I0609 11:39:53.397740 41940 round_trippers.go:442] X-Stream-Protocol-Version: v3.channel.k8s.io
I0609 11:39:53.397759 41940 round_trippers.go:442] X-Stream-Protocol-Version: v2.channel.k8s.io
I0609 11:39:53.397778 41940 round_trippers.go:442] X-Stream-Protocol-Version: channel.k8s.io
I0609 11:39:53.397796 41940 round_trippers.go:442] User-Agent: kubectl/v1.21.0 (linux/amd64) kubernetes/cb303e6
I0609 11:39:53.397818 41940 round_trippers.go:442] Authorization: Bearer <masked>
I0609 11:39:53.430502 41940 round_trippers.go:457] Response Status: 101 Switching Protocols in 32 milliseconds
I0609 11:39:53.430534 41940 round_trippers.go:460] Response Headers:
I0609 11:39:53.430544 41940 round_trippers.go:463] Connection: Upgrade
I0609 11:39:53.430553 41940 round_trippers.go:463] Upgrade: SPDY/3.1
I0609 11:39:53.430561 41940 round_trippers.go:463] X-Stream-Protocol-Version: v4.channel.k8s.io
I0609 11:39:53.430567 41940 round_trippers.go:463] Date: Wed, 09 Jun 2021 09:39:53 GMT
Let's see what can explain the different results (API version of the ingresses, etc.). Meanwhile, could you recheck your configuration? Concerning your first question, the server-proto annotation is mandatory only if you're going to upgrade the connection, as in the scenario we followed.
I can reproduce the issue when connecting to the intermediate "nginx" proxy with "kubectl":
$ kubectl delete pod hello-kubernetes-7497844c6b-swct8
Error from server (InternalError): an error on the server ("<html><body><h1>501 Not Implemented</h1>\n.The server does not support the functionality required to fulfill the request.\n</body></html>") has prevented the request from succeeding (delete pods hello-kubernetes-7497844c6b-swct8)
$ kubectl --v=8 delete pod hello-kubernetes-7497844c6b-swct8
I0609 19:39:02.540973 186366 round_trippers.go:421] GET https://env-5802911.xxx/api/apis/metrics.k8s.io/v1beta1?timeout=32s
I0609 19:39:02.540983 186366 round_trippers.go:428] Request Headers:
I0609 19:39:02.540988 186366 round_trippers.go:432] Accept: application/json, */*
I0609 19:39:02.540995 186366 round_trippers.go:432] User-Agent: kubectl/v1.19.8 (linux/amd64) kubernetes/fd5d415
I0609 19:39:02.541002 186366 round_trippers.go:432] Authorization: Bearer <masked>
I0609 19:39:02.950845 186366 round_trippers.go:447] Response Status: 503 Service Unavailable in 409 milliseconds
I0609 19:39:02.951001 186366 round_trippers.go:450] Response Headers:
I0609 19:39:02.951039 186366 round_trippers.go:453] Date: Wed, 09 Jun 2021 13:39:02 GMT
I0609 19:39:02.951120 186366 round_trippers.go:453] Content-Type: text/plain; charset=utf-8
I0609 19:39:02.951412 186366 round_trippers.go:453] Content-Length: 42
I0609 19:39:02.951515 186366 round_trippers.go:453] Set-Cookie: route=481a9d4ad6aebdd346bde5f6bfbe4740; Path=/
I0609 19:39:02.951573 186366 round_trippers.go:453] Cache-Control: no-cache, private
I0609 19:39:02.951593 186366 round_trippers.go:453] X-Content-Type-Options: nosniff
I0609 19:39:02.951677 186366 round_trippers.go:453] Server: nginx
I0609 19:39:02.958693 186366 request.go:1097] Response Body: invalid upgrade response: status code 200
I0609 19:39:02.961830 186366 request.go:1301] body was not decodable (unable to check for Status): Object 'Kind' is missing in 'invalid upgrade response: status code 200
'
I0609 19:39:02.961850 186366 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request
I0609 19:39:02.962361 186366 round_trippers.go:421] GET https://env-5802911.xxx/api/apis/metrics.k8s.io/v1beta1?timeout=32s
I0609 19:39:02.962372 186366 round_trippers.go:428] Request Headers:
I0609 19:39:02.962381 186366 round_trippers.go:432] User-Agent: kubectl/v1.19.8 (linux/amd64) kubernetes/fd5d415
I0609 19:39:02.962389 186366 round_trippers.go:432] Authorization: Bearer <masked>
I0609 19:39:02.962394 186366 round_trippers.go:432] Accept: application/json, */*
I0609 19:39:03.099223 186366 round_trippers.go:447] Response Status: 503 Service Unavailable in 136 milliseconds
I0609 19:39:03.099281 186366 round_trippers.go:450] Response Headers:
I0609 19:39:03.099303 186366 round_trippers.go:453] Date: Wed, 09 Jun 2021 13:39:03 GMT
I0609 19:39:03.099320 186366 round_trippers.go:453] Content-Type: text/plain; charset=utf-8
I0609 19:39:03.099336 186366 round_trippers.go:453] Content-Length: 42
I0609 19:39:03.099353 186366 round_trippers.go:453] Set-Cookie: route=fcbf49efb7b041d740922a1056f18e10; Path=/
I0609 19:39:03.099369 186366 round_trippers.go:453] Cache-Control: no-cache, private
I0609 19:39:03.099385 186366 round_trippers.go:453] X-Content-Type-Options: nosniff
I0609 19:39:03.099400 186366 round_trippers.go:453] Server: nginx
I0609 19:39:03.105760 186366 request.go:1097] Response Body: invalid upgrade response: status code 200
I0609 19:39:03.108846 186366 request.go:1301] body was not decodable (unable to check for Status): Object 'Kind' is missing in 'invalid upgrade response: status code 200
'
I0609 19:39:03.108867 186366 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request
I0609 19:39:03.108903 186366 shortcut.go:89] Error loading discovery information: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0609 19:39:03.109219 186366 round_trippers.go:421] GET https://env-5802911.xxx/api/apis/metrics.k8s.io/v1beta1?timeout=32s
I0609 19:39:03.109231 186366 round_trippers.go:428] Request Headers:
I0609 19:39:03.109237 186366 round_trippers.go:432] Accept: application/json, */*
I0609 19:39:03.109244 186366 round_trippers.go:432] User-Agent: kubectl/v1.19.8 (linux/amd64) kubernetes/fd5d415
I0609 19:39:03.109254 186366 round_trippers.go:432] Authorization: Bearer <masked>
I0609 19:39:03.240032 186366 round_trippers.go:447] Response Status: 503 Service Unavailable in 130 milliseconds
I0609 19:39:03.240089 186366 round_trippers.go:450] Response Headers:
I0609 19:39:03.240106 186366 round_trippers.go:453] Content-Length: 42
I0609 19:39:03.240119 186366 round_trippers.go:453] Set-Cookie: route=481a9d4ad6aebdd346bde5f6bfbe4740; Path=/
I0609 19:39:03.240132 186366 round_trippers.go:453] Cache-Control: no-cache, private
I0609 19:39:03.240143 186366 round_trippers.go:453] X-Content-Type-Options: nosniff
I0609 19:39:03.240161 186366 round_trippers.go:453] Server: nginx
I0609 19:39:03.240172 186366 round_trippers.go:453] Date: Wed, 09 Jun 2021 13:39:03 GMT
I0609 19:39:03.240190 186366 round_trippers.go:453] Content-Type: text/plain; charset=utf-8
I0609 19:39:03.246421 186366 request.go:1097] Response Body: invalid upgrade response: status code 200
I0609 19:39:03.249737 186366 request.go:1301] body was not decodable (unable to check for Status): Object 'Kind' is missing in 'invalid upgrade response: status code 200
'
I0609 19:39:03.249760 186366 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request
I0609 19:39:03.251065 186366 request.go:1097] Request Body: {"propagationPolicy":"Background"}
I0609 19:39:03.251112 186366 round_trippers.go:421] DELETE https://env-5802911.xxx/api/api/v1/namespaces/default/pods/hello-kubernetes-7497844c6b-swct8
I0609 19:39:03.251123 186366 round_trippers.go:428] Request Headers:
I0609 19:39:03.251133 186366 round_trippers.go:432] User-Agent: kubectl/v1.19.8 (linux/amd64) kubernetes/fd5d415
I0609 19:39:03.251143 186366 round_trippers.go:432] Authorization: Bearer <masked>
I0609 19:39:03.251151 186366 round_trippers.go:432] Accept: application/json
I0609 19:39:03.251159 186366 round_trippers.go:432] Content-Type: application/json
I0609 19:39:03.373946 186366 round_trippers.go:447] Response Status: 501 Not Implemented in 122 milliseconds
I0609 19:39:03.374013 186366 round_trippers.go:450] Response Headers:
I0609 19:39:03.374045 186366 round_trippers.go:453] Server: nginx
I0609 19:39:03.374071 186366 round_trippers.go:453] Date: Wed, 09 Jun 2021 13:39:03 GMT
I0609 19:39:03.374096 186366 round_trippers.go:453] Content-Type: text/html
I0609 19:39:03.374116 186366 round_trippers.go:453] Content-Length: 136
I0609 19:39:03.374137 186366 round_trippers.go:453] Set-Cookie: route=fcbf49efb7b041d740922a1056f18e10; Path=/
I0609 19:39:03.374165 186366 round_trippers.go:453] Cache-Control: no-cache
I0609 19:39:03.374240 186366 request.go:1097] Response Body: <html><body><h1>501 Not Implemented</h1>
.The server does not support the functionality required to fulfill the request.
</body></html>
I0609 19:39:03.374943 186366 helpers.go:216] server response object: [{
"metadata": {},
"status": "Failure",
"message": "an error on the server (\"\u003chtml\u003e\u003cbody\u003e\u003ch1\u003e501 Not Implemented\u003c/h1\u003e\\n.The server does not support the functionality required to fulfill the request.\\n\u003c/body\u003e\u003c/html\u003e\") has prevented the request from succeeding (delete pods hello-kubernetes-7497844c6b-swct8)",
"reason": "InternalError",
"details": {
"name": "hello-kubernetes-7497844c6b-swct8",
"kind": "pods",
"causes": [
{
"reason": "UnexpectedServerResponse",
"message": "\u003chtml\u003e\u003cbody\u003e\u003ch1\u003e501 Not Implemented\u003c/h1\u003e\n.The server does not support the functionality required to fulfill the request.\n\u003c/body\u003e\u003c/html\u003e"
}
]
},
"code": 501
}]
F0609 19:39:03.375039 186366 helpers.go:115] Error from server (InternalError): an error on the server ("<html><body><h1>501 Not Implemented</h1>\n.The server does not support the functionality required to fulfill the request.\n</body></html>") has prevented the request from succeeding (delete pods hello-kubernetes-7497844c6b-swct8)
At the same time, surprisingly, I can delete the pod using the same API endpoint as above but accessing it directly with "curl":
$ curl -Ik -X DELETE https://env-5802911.xxx/api/api/v1/namespaces/default/pods/hello-kubernetes-7497844c6b-swct8 --header "Authorization: Bearer <mytoken>"
HTTP/2 200
server: nginx
date: Wed, 09 Jun 2021 13:46:08 GMT
content-type: application/json
set-cookie: route=481a9d4ad6aebdd346bde5f6bfbe4740; Path=/
cache-control: no-cache, private
set-cookie: SRVGROUP=common; path=/
I think there is some discrepancy in the connection upgrade logic or the protocols used. As mentioned before, if I replace the "haproxy" ingress with e.g. NGINX (without touching the intermediate balancer or anything else), everything starts working.
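One way to probe that discrepancy directly, bypassing kubectl, is to send the upgrade headers by hand and check whether a 101 comes back (a hedged sketch reusing the host, token and pod name from the curl example above):

$ curl -k -i -X POST \
    https://env-5802911.xxx/api/api/v1/namespaces/default/pods/hello-kubernetes-7497844c6b-swct8/portforward \
    --header "Authorization: Bearer <mytoken>" \
    --header "Connection: Upgrade" \
    --header "Upgrade: SPDY/3.1"

A "101 Switching Protocols" response would mean the upgrade is relayed end to end; a plain "200" matches the "invalid upgrade response: status code 200" errors in the kubectl logs above.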
Also, I have a side question: it looks like the controller doesn't create a self-signed certificate for SSL by default, although HTTPS support is advertised in the controller startup output:
2021/06/08 10:00:23 ConfigMap: default/haproxy-configmap
2021/06/08 10:00:23 Ingress class: haproxy
2021/06/08 10:00:23 Empty Ingress class: true
2021/06/08 10:00:23 Publish service:
2021/06/08 10:00:23 Default backend service: haproxy-controller/ingress-default-backend
2021/06/08 10:00:23 Default ssl certificate:
2021/06/08 10:00:23 Frontend HTTP listening on: 0.0.0.0:80
2021/06/08 10:00:23 Frontend HTTPS listening on: 0.0.0.0:443
2021/06/08 10:00:23 Controller sync period: 5s
...
But 443/tcp actually answers plain HTTP:
$ kubectl -n haproxy-controller get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
haproxy-ingress ClusterIP 10.244.250.51 <none> 80/TCP,443/TCP,1024/TCP 28h
ingress-default-backend ClusterIP 10.244.124.182 <none> 8080/TCP 28h
$ curl http://10.244.250.51/
<root ingress content>
$ curl -k https://10.244.250.51/
curl: (35) SSL received a record that exceeded the maximum permissible length.
Is this the intended behavior? If so, how can a self-signed certificate be created automatically for the haproxy controller deployed as a DaemonSet?
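For reference, one possible way to wire in a self-signed certificate manually, based on the controller's --default-ssl-certificate option (a sketch; the secret name, namespace and CN here are assumptions):

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout tls.key -out tls.crt -subj "/CN=ingress.example"
$ kubectl -n haproxy-controller create secret tls default-cert \
    --cert=tls.crt --key=tls.key

# then pass to the controller:
#   --default-ssl-certificate=haproxy-controller/default-cert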
Hi, can you upload your NGINX proxy configuration? With a simple SSL + proxy configuration in NGINX, the operation worked as expected:
I0610 16:48:13.672052 69598 round_trippers.go:432] DELETE https://172.18.0.2:31443/api/api/v1/namespaces/haproxy-controller/pods/my-other-shell
I0610 16:48:13.672063 69598 round_trippers.go:438] Request Headers:
I0610 16:48:13.672071 69598 round_trippers.go:442] Authorization: Bearer <masked>
I0610 16:48:13.672080 69598 round_trippers.go:442] Accept: application/json
I0610 16:48:13.672090 69598 round_trippers.go:442] Content-Type: application/json
I0610 16:48:13.672097 69598 round_trippers.go:442] User-Agent: kubectl/v1.21.0 (linux/amd64) kubernetes/cb303e6
I0610 16:48:13.682842 69598 round_trippers.go:457] Response Status: 200 OK in 10 milliseconds
I0610 16:48:13.682900 69598 round_trippers.go:460] Response Headers:
I0610 16:48:13.682921 69598 round_trippers.go:463] Cache-Control: no-cache, private
I0610 16:48:13.682931 69598 round_trippers.go:463] Server: nginx/1.21.0
I0610 16:48:13.682939 69598 round_trippers.go:463] Date: Thu, 10 Jun 2021 14:48:13 GMT
I0610 16:48:13.682946 69598 round_trippers.go:463] Content-Type: application/json
I0610 16:48:13.682953 69598 round_trippers.go:463] Connection: keep-alive
TBH, the configuration is pretty common, except for the upstream management (which doesn't seem to be the issue here):
http {
server_tokens off ;
include /etc/nginx/mime.types;
default_type application/octet-stream;
set_real_ip_from 192.168.0.0/16;
set_real_ip_from 10.0.0.0/8;
set_real_ip_from 172.16.0.0/16;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
log_format main '$remote_addr:$http_x_remote_port - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'"$host" sn="$server_name" '
'rt=$request_time '
'ua="$upstream_addr" us="$upstream_status" '
'ut="$upstream_response_time" ul="$upstream_response_length" '
'cs=$upstream_cache_status' ;
client_header_timeout 10m;
client_body_timeout 10m;
send_timeout 10m;
client_max_body_size 100m;
connection_pool_size 256;
client_header_buffer_size 1k;
large_client_header_buffers 4 2k;
request_pool_size 4k;
gzip_min_length 1100;
gzip_buffers 4 8k;
gzip_types text/plain;
output_buffers 1 32k;
postpone_output 1460;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 75 20;
ignore_invalid_headers on;
map $upstream_addr $group {
    default "";
    ~10\.34\.2\.18\:80$ common;
    ~10\.34\.2\.19\:80$ common;
}
upstream default_upstream {
    server 10.34.2.18;
    server 10.34.2.19;
    sticky path=/;
    keepalive 100;
}
upstream common {
    server 10.34.2.18;
    server 10.34.2.19;
    sticky path=/;
    keepalive 100;
}
...
server {
listen 443 quic reuseport;
listen 443 http2 ssl;
listen [::]:443 quic reuseport;
listen [::]:443 http2 ssl;
server_name _;
ssl_certificate /var/lib/cert/SSL/my.chain;
ssl_certificate_key /var/lib/cert/SSL/my.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
add_header alt-svc 'h3-23=":443"; ma=86400';
access_log /var/log/nginx/localhost.access_log main;
error_log /var/log/nginx/localhost.error_log info;
proxy_temp_path /var/nginx/tmp/;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
location / {
set $upstream_name common;
include conf.d/ssl.upstreams.inc;
proxy_pass http://$upstream_name;
proxy_next_upstream error;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Host $http_host;
proxy_set_header X-Forwarded-For $http_x_forwarded_for;
proxy_set_header X-URI $uri;
proxy_set_header X-ARGS $args;
proxy_set_header Refer $http_refer;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Ssl-Offloaded "1";
}
}
}
I can provide access to a test installation (including the balancer and the cluster itself) where these details can be observed in practice.
Hi @dfateyev, I was able to reproduce your issue. Can you confirm it works when you remove the following section?
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
If this solution is not viable in your case, could you provide the resulting haproxy configuration file from the other haproxy IC where it worked?
I can confirm that after removing the "Upgrade"-related headers, the DELETE method and other functionality started to work. But I suppose that it breaks other proxied connections that rely on these headers.
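If stripping the headers unconditionally is too blunt, the usual pattern from the NGINX WebSocket proxying documentation forwards the Upgrade header only when the client actually requests an upgrade; a sketch for the http block and the location above:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;

With this map, ordinary requests no longer carry "Connection: upgrade" toward the upstream, while genuine upgrade requests still do.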
As for the alternative controller configuration, you can try this one. It's a DaemonSet that rolls out the controller across the worker nodes and binds to 80/tcp and 443/tcp hostPorts (a rough sketch is shown below).
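A minimal sketch of such a DaemonSet, assuming the upstream jcmoraisjr image and standard names (our actual manifest differs in details):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: haproxy-ingress
  namespace: ingress-controller
spec:
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress
        ports:
        - name: http
          containerPort: 80
          hostPort: 80       # bound directly on each worker node
        - name: https
          containerPort: 443
          hostPort: 443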
The ingress resource is as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: kubernetes-api
namespace: default
annotations:
ingress.kubernetes.io/rewrite-target: /
ingress.kubernetes.io/secure-backends: 'true'
kubernetes.io/ingress.class: haproxy
ingress.kubernetes.io/ssl-redirect: 'false'
spec:
rules:
- http:
paths:
- path: /api
backend:
serviceName: kubernetes
servicePort: 443
It works fine with the NGINX balancer, including the use of the "Upgrade" headers in its configuration:
$ kubectl --v=8 delete pod hello-kubernetes-654bc95db8-cmdms
I0614 21:43:30.801669 39556 round_trippers.go:421] GET https://env-9920574.xxx.com/api/apis/metrics.k8s.io/v1beta1?timeout=32s
I0614 21:43:30.801680 39556 round_trippers.go:428] Request Headers:
I0614 21:43:30.801685 39556 round_trippers.go:432] User-Agent: kubectl/v1.19.8 (linux/amd64) kubernetes/fd5d415
I0614 21:43:30.801690 39556 round_trippers.go:432] Authorization: Bearer <masked>
I0614 21:43:30.801694 39556 round_trippers.go:432] Accept: application/json, */*
I0614 21:43:31.245611 39556 round_trippers.go:447] Response Status: 500 Internal Server Error in 443 milliseconds
I0614 21:43:31.245658 39556 round_trippers.go:450] Response Headers:
I0614 21:43:31.245673 39556 round_trippers.go:453] Date: Mon, 14 Jun 2021 15:43:31 GMT
I0614 21:43:31.245687 39556 round_trippers.go:453] Content-Type: text/plain; charset=utf-8
I0614 21:43:31.245699 39556 round_trippers.go:453] Content-Length: 42
I0614 21:43:31.245715 39556 round_trippers.go:453] Set-Cookie: route=3aca0a5b4059e18a5e6d59c65cd1a688; Path=/
I0614 21:43:31.245730 39556 round_trippers.go:453] Cache-Control: no-cache, private
I0614 21:43:31.245745 39556 round_trippers.go:453] X-Content-Type-Options: nosniff
I0614 21:43:31.245760 39556 round_trippers.go:453] Server: nginx
I0614 21:43:31.253497 39556 request.go:1097] Response Body: invalid upgrade response: status code 200
I0614 21:43:31.256239 39556 request.go:1301] body was not decodable (unable to check for Status): Object 'Kind' is missing in 'invalid upgrade response: status code 200
'
I0614 21:43:31.256296 39556 cached_discovery.go:78] skipped caching discovery info due to an error on the server ("invalid upgrade response: status code 200") has prevented the request from succeeding
I0614 21:43:31.258329 39556 round_trippers.go:421] GET https://env-9920574.xxx.com/api/apis/metrics.k8s.io/v1beta1?timeout=32s
I0614 21:43:31.258381 39556 round_trippers.go:428] Request Headers:
I0614 21:43:31.258445 39556 round_trippers.go:432] Accept: application/json, */*
I0614 21:43:31.258483 39556 round_trippers.go:432] User-Agent: kubectl/v1.19.8 (linux/amd64) kubernetes/fd5d415
I0614 21:43:31.258526 39556 round_trippers.go:432] Authorization: Bearer <masked>
I0614 21:43:31.389276 39556 round_trippers.go:447] Response Status: 500 Internal Server Error in 130 milliseconds
I0614 21:43:31.389294 39556 round_trippers.go:450] Response Headers:
I0614 21:43:31.389301 39556 round_trippers.go:453] Set-Cookie: route=f3566a18648aa62c3cc911b9130f044b; Path=/
I0614 21:43:31.389324 39556 round_trippers.go:453] Cache-Control: no-cache, private
I0614 21:43:31.389327 39556 round_trippers.go:453] X-Content-Type-Options: nosniff
I0614 21:43:31.389330 39556 round_trippers.go:453] Server: nginx
I0614 21:43:31.389338 39556 round_trippers.go:453] Date: Mon, 14 Jun 2021 15:43:31 GMT
I0614 21:43:31.389341 39556 round_trippers.go:453] Content-Type: text/plain; charset=utf-8
I0614 21:43:31.389345 39556 round_trippers.go:453] Content-Length: 42
I0614 21:43:31.396207 39556 request.go:1097] Response Body: invalid upgrade response: status code 200
I0614 21:43:31.398151 39556 request.go:1301] body was not decodable (unable to check for Status): Object 'Kind' is missing in 'invalid upgrade response: status code 200
'
I0614 21:43:31.398165 39556 cached_discovery.go:78] skipped caching discovery info due to an error on the server ("invalid upgrade response: status code 200") has prevented the request from succeeding
I0614 21:43:31.398233 39556 shortcut.go:89] Error loading discovery information: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: an error on the server ("invalid upgrade response: status code 200") has prevented the request from succeeding
I0614 21:43:31.398627 39556 round_trippers.go:421] GET https://env-9920574.xxx.com/api/apis/metrics.k8s.io/v1beta1?timeout=32s
I0614 21:43:31.398652 39556 round_trippers.go:428] Request Headers:
I0614 21:43:31.398661 39556 round_trippers.go:432] Accept: application/json, */*
I0614 21:43:31.398670 39556 round_trippers.go:432] User-Agent: kubectl/v1.19.8 (linux/amd64) kubernetes/fd5d415
I0614 21:43:31.398681 39556 round_trippers.go:432] Authorization: Bearer <masked>
I0614 21:43:31.636638 39556 round_trippers.go:447] Response Status: 500 Internal Server Error in 237 milliseconds
I0614 21:43:31.636655 39556 round_trippers.go:450] Response Headers:
I0614 21:43:31.636663 39556 round_trippers.go:453] Server: nginx
I0614 21:43:31.636667 39556 round_trippers.go:453] Date: Mon, 14 Jun 2021 15:43:31 GMT
I0614 21:43:31.636671 39556 round_trippers.go:453] Content-Type: text/plain; charset=utf-8
I0614 21:43:31.636675 39556 round_trippers.go:453] Content-Length: 42
I0614 21:43:31.636680 39556 round_trippers.go:453] Set-Cookie: route=3aca0a5b4059e18a5e6d59c65cd1a688; Path=/
I0614 21:43:31.636684 39556 round_trippers.go:453] Cache-Control: no-cache, private
I0614 21:43:31.636688 39556 round_trippers.go:453] X-Content-Type-Options: nosniff
I0614 21:43:31.643345 39556 request.go:1097] Response Body: invalid upgrade response: status code 200
I0614 21:43:31.645142 39556 request.go:1301] body was not decodable (unable to check for Status): Object 'Kind' is missing in 'invalid upgrade response: status code 200
'
I0614 21:43:31.645159 39556 cached_discovery.go:78] skipped caching discovery info due to an error on the server ("invalid upgrade response: status code 200") has prevented the request from succeeding
I0614 21:43:31.647311 39556 request.go:1097] Request Body: {"propagationPolicy":"Background"}
I0614 21:43:31.647359 39556 round_trippers.go:421] DELETE https://env-9920574.xxx.com/api/api/v1/namespaces/default/pods/hello-kubernetes-654bc95db8-cmdms
I0614 21:43:31.647366 39556 round_trippers.go:428] Request Headers:
I0614 21:43:31.647371 39556 round_trippers.go:432] Accept: application/json
I0614 21:43:31.647377 39556 round_trippers.go:432] Content-Type: application/json
I0614 21:43:31.647382 39556 round_trippers.go:432] User-Agent: kubectl/v1.19.8 (linux/amd64) kubernetes/fd5d415
I0614 21:43:31.647388 39556 round_trippers.go:432] Authorization: Bearer <masked>
I0614 21:43:31.843382 39556 round_trippers.go:447] Response Status: 200 OK in 195 milliseconds
I0614 21:43:31.843463 39556 round_trippers.go:450] Response Headers:
I0614 21:43:31.843494 39556 round_trippers.go:453] Set-Cookie: route=f3566a18648aa62c3cc911b9130f044b; Path=/
I0614 21:43:31.843521 39556 round_trippers.go:453] Set-Cookie: SRVGROUP=common; path=/
I0614 21:43:31.843538 39556 round_trippers.go:453] Cache-Control: no-cache, private
I0614 21:43:31.843551 39556 round_trippers.go:453] Server: nginx
I0614 21:43:31.843564 39556 round_trippers.go:453] Date: Mon, 14 Jun 2021 15:43:31 GMT
I0614 21:43:31.843587 39556 round_trippers.go:453] Content-Type: application/json
I0614 21:43:31.843837 39556 request.go:1097] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"hello-kubernetes-654bc95db8-cmdms","generateName":"hello-kubernetes-654bc95db8-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/hello-kubernetes-654bc95db8-cmdms","uid":"f4de3c1e-532e-4fb0-a270-a4c2aa45e1bd","resourceVersion":"13373","creationTimestamp":"2021-06-14T15:28:54Z","deletionTimestamp":"2021-06-14T15:44:01Z","deletionGracePeriodSeconds":30,"labels":{"app":"hello-kubernetes","pod-template-hash":"654bc95db8"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"hello-kubernetes-654bc95db8","uid":"fd7402cc-3547-4181-90a1-1bc959afedee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-06-14T15:28:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd7402cc-3547-4181-90a1-1bc959afedee\"}":{".":{}," [truncated 3398 chars]
pod "hello-kubernetes-654bc95db8-cmdms" deleted
I0614 21:43:31.844911 39556 round_trippers.go:421] GET https://env-9920574.xxx.com/api/api/v1/namespaces/default/pods?fieldSelector=metadata.name%3Dhello-kubernetes-654bc95db8-cmdms
I0614 21:43:31.844951 39556 round_trippers.go:428] Request Headers:
I0614 21:43:31.844985 39556 round_trippers.go:432] Authorization: Bearer <masked>
I0614 21:43:31.845014 39556 round_trippers.go:432] Accept: application/json
I0614 21:43:31.845043 39556 round_trippers.go:432] User-Agent: kubectl/v1.19.8 (linux/amd64) kubernetes/fd5d415
I0614 21:43:32.047057 39556 round_trippers.go:447] Response Status: 200 OK in 201 milliseconds
I0614 21:43:32.047108 39556 round_trippers.go:450] Response Headers:
I0614 21:43:32.047123 39556 round_trippers.go:453] Server: nginx
I0614 21:43:32.047137 39556 round_trippers.go:453] Date: Mon, 14 Jun 2021 15:43:31 GMT
I0614 21:43:32.047149 39556 round_trippers.go:453] Content-Type: application/json
I0614 21:43:32.047162 39556 round_trippers.go:453] Set-Cookie: route=3aca0a5b4059e18a5e6d59c65cd1a688; Path=/
I0614 21:43:32.047174 39556 round_trippers.go:453] Set-Cookie: SRVGROUP=common; path=/
I0614 21:43:32.047186 39556 round_trippers.go:453] Cache-Control: no-cache, private
I0614 21:43:32.047494 39556 request.go:1097] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/default/pods","resourceVersion":"13385"},"items":[{"metadata":{"name":"hello-kubernetes-654bc95db8-cmdms","generateName":"hello-kubernetes-654bc95db8-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/hello-kubernetes-654bc95db8-cmdms","uid":"f4de3c1e-532e-4fb0-a270-a4c2aa45e1bd","resourceVersion":"13373","creationTimestamp":"2021-06-14T15:28:54Z","deletionTimestamp":"2021-06-14T15:44:01Z","deletionGracePeriodSeconds":30,"labels":{"app":"hello-kubernetes","pod-template-hash":"654bc95db8"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"hello-kubernetes-654bc95db8","uid":"fd7402cc-3547-4181-90a1-1bc959afedee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-06-14T15:28:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash [truncated 3498 chars]
I0614 21:43:32.049261 39556 round_trippers.go:421] GET https://env-9920574.xxx.com/api/api/v1/namespaces/default/pods?fieldSelector=metadata.name%3Dhello-kubernetes-654bc95db8-cmdms&resourceVersion=13385&watch=true
I0614 21:43:32.049302 39556 round_trippers.go:428] Request Headers:
I0614 21:43:32.049320 39556 round_trippers.go:432] Accept: application/json
I0614 21:43:32.049335 39556 round_trippers.go:432] User-Agent: kubectl/v1.19.8 (linux/amd64) kubernetes/fd5d415
I0614 21:43:32.049364 39556 round_trippers.go:432] Authorization: Bearer <masked>
I0614 21:43:33.168454 39556 round_trippers.go:447] Response Status: 200 OK in 1119 milliseconds
I0614 21:43:33.168516 39556 round_trippers.go:450] Response Headers:
I0614 21:43:33.168545 39556 round_trippers.go:453] Server: nginx
I0614 21:43:33.168568 39556 round_trippers.go:453] Date: Mon, 14 Jun 2021 15:43:32 GMT
I0614 21:43:33.168589 39556 round_trippers.go:453] Content-Type: application/json
I0614 21:43:33.168609 39556 round_trippers.go:453] Set-Cookie: route=f3566a18648aa62c3cc911b9130f044b; Path=/
I0614 21:43:33.168634 39556 round_trippers.go:453] Set-Cookie: SRVGROUP=common; path=/
I0614 21:43:33.168656 39556 round_trippers.go:453] Cache-Control: no-cache, private
Hi @dfateyev, thanks for the details you have been providing to resolve this issue. IMO, chaining two different proxies where each one enforces its own configuration (headers and protocols) can be error-prone and hard to debug. But since you've already been using another HAProxy IC without issues, this should be straightforward to handle, and we will need the following:
1) the HAProxy configuration generated by jcmoraisjr/haproxy-ingress, where everything is fine;
2) the HAProxy configuration generated by haproxytech/haproxy-ingress, where connection upgrade is working but not the PUT, DELETE and POST operations.
We will compare both configurations. If something is different (regarding the kubernetes backend in question), we will be able to quickly address it; if there is no difference, then the issue is not with the HAProxy config.
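For reference, one way to dump the generated configuration from a running controller pod (the pod name placeholder and the config path are assumptions; the path matches the map file locations visible in the configs that follow):

$ kubectl -n haproxy-controller exec <controller-pod> -- cat /etc/haproxy/haproxy.cfg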
Hello @Mo3m3n @ivanmatmati, here are the two configuration files, provided as-is.
1) jcmoraisjr, where we haven't observed issues:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# #
# # HAProxy Ingress Controller
# # --------------------------
# # This file is automatically updated, do not edit
# #
#
global
daemon
unix-bind user haproxy group haproxy mode 0600
nbthread 2
cpu-map auto:1/1-2 0-1
stats socket /var/run/haproxy-stats.sock level admin expose-fd listeners mode 600
maxconn 2000
hard-stop-after 10m
lua-load /usr/local/etc/haproxy/lua/auth-request.lua
lua-load /usr/local/etc/haproxy/lua/services.lua
tune.ssl.default-dh-param 2048
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
ssl-default-server-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
ssl-default-server-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
defaults
log global
maxconn 2000
option redispatch
option dontlognull
option http-server-close
option http-keep-alive
timeout client 50s
timeout client-fin 50s
timeout connect 5s
timeout http-keep-alive 1m
timeout http-request 5s
timeout queue 5s
timeout server 50s
timeout server-fin 50s
timeout tunnel 1h
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# #
# # BACKENDS
# #
#
backend default_hello-kubernetes_8080
mode http
balance roundrobin
acl https-request ssl_fc
http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
http-request del-header x-forwarded-for
option forwardfor
reqrep ^([^:\ ]*)\ //?(.*)$ \1\ /\2
http-response set-header Strict-Transport-Security "max-age=15768000" if https-request
server srv001 10.239.224.4:8080 weight 1 check inter 2s
server srv002 10.239.32.1:8080 weight 1 check inter 2s
server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv008 127.0.0.1:1023 disabled weight 1 check inter 2s
backend default_kubernetes_6443
mode http
balance roundrobin
acl https-request ssl_fc
http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
http-request del-header x-forwarded-for
option forwardfor
reqrep ^([^:\ ]*)\ /api/?(.*)$ \1\ /\2
http-response set-header Strict-Transport-Security "max-age=15768000" if https-request
server srv001 10.34.6.100:6443 weight 1 ssl verify none check inter 2s
server srv002 10.34.6.95:6443 weight 1 ssl verify none check inter 2s
server srv003 10.34.6.98:6443 weight 1 ssl verify none check inter 2s
server srv004 127.0.0.1:1023 disabled weight 1 ssl verify none check inter 2s
server srv005 127.0.0.1:1023 disabled weight 1 ssl verify none check inter 2s
server srv006 127.0.0.1:1023 disabled weight 1 ssl verify none check inter 2s
server srv007 127.0.0.1:1023 disabled weight 1 ssl verify none check inter 2s
server srv008 127.0.0.1:1023 disabled weight 1 ssl verify none check inter 2s
server srv009 127.0.0.1:1023 disabled weight 1 ssl verify none check inter 2s
backend kubernetes-dashboard_kubernetes-dashboard_8443
mode http
balance roundrobin
acl https-request ssl_fc
http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
http-request del-header x-forwarded-for
option forwardfor
reqrep ^([^:\ ]*)\ /kubernetes-dashboard/?(.*)$ \1\ /\2
http-response set-header Strict-Transport-Security "max-age=15768000" if https-request
server srv001 10.239.224.2:8443 weight 1 ssl verify none check inter 2s
server srv002 127.0.0.1:1023 disabled weight 1 ssl verify none check inter 2s
server srv003 127.0.0.1:1023 disabled weight 1 ssl verify none check inter 2s
server srv004 127.0.0.1:1023 disabled weight 1 ssl verify none check inter 2s
server srv005 127.0.0.1:1023 disabled weight 1 ssl verify none check inter 2s
server srv006 127.0.0.1:1023 disabled weight 1 ssl verify none check inter 2s
backend _default_backend
mode http
balance roundrobin
http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
http-request del-header x-forwarded-for
option forwardfor
server srv001 10.239.32.5:8080 weight 1 check inter 2s
server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# #
# # FRONTENDS
# #
#
# # # # # # # # # # # # # # # # # # #
# #
# HTTP frontend
#
frontend _front_http
mode http
bind *:80
http-request set-var(req.base) base,lower,regsub(:[0-9]+/,/)
http-request redirect scheme https if { var(req.base),map_beg(/etc/haproxy/maps/_global_https_redir.map) yes }
http-request set-header X-Forwarded-Proto http
http-request del-header X-SSL-Client-CN
http-request del-header X-SSL-Client-DN
http-request del-header X-SSL-Client-SHA1
http-request del-header X-SSL-Client-Cert
http-request set-var(req.backend) var(req.base),map_beg(/etc/haproxy/maps/_global_http_front.map)
use_backend %[var(req.backend)] if { var(req.backend) -m found }
use_backend kubernetes-dashboard_kubernetes-dashboard_8443 if { path_beg /kubernetes-dashboard }
use_backend default_kubernetes_6443 if { path_beg /api }
use_backend default_hello-kubernetes_8080
default_backend _default_backend
# # # # # # # # # # # # # # # # # # #
# #
# HTTPS frontend
#
frontend _front001
mode http
bind *:443 ssl alpn h2,http/1.1 crt-list /etc/haproxy/maps/_front001_bind_crt.list ca-ignore-err all crt-ignore-err all
http-request set-var(req.hostbackend) base,lower,regsub(:[0-9]+/,/),map_beg(/etc/haproxy/maps/_front001_host.map)
http-request set-header X-Forwarded-Proto https
http-request del-header X-SSL-Client-CN
http-request del-header X-SSL-Client-DN
http-request del-header X-SSL-Client-SHA1
http-request del-header X-SSL-Client-Cert
use_backend %[var(req.hostbackend)] if { var(req.hostbackend) -m found }
use_backend kubernetes-dashboard_kubernetes-dashboard_8443 if { path_beg /kubernetes-dashboard }
use_backend default_kubernetes_6443 if { path_beg /api }
use_backend default_hello-kubernetes_8080
default_backend _default_backend
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# #
# # SUPPORT
# #
#
# # # # # # # # # # # # # # # # # # #
# #
# Stats
#
listen stats
mode http
bind *:1936
stats enable
stats uri /
no log
option forceclose
stats show-legends
# # # # # # # # # # # # # # # # # # #
# #
# Monitor URI
#
frontend healthz
mode http
bind *:10253
monitor-uri /healthz
no log
2) haproxytech, where the issue discussed in this ticket occurs:
# _version=14
# HAProxy Technologies
# https://www.haproxy.com/
# this file is not meant to be changed directly
# it is under haproxy ingress controller management
global
daemon
localpeer local
master-worker
pidfile /var/run/haproxy.pid
stats socket /var/run/haproxy-runtime-api.sock level admin
stats timeout 60000
tune.ssl.default-dh-param 2048
ssl-default-bind-options no-sslv3 no-tls-tickets no-tlsv10
ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
log 127.0.0.1:514 local0 notice
server-state-file global
server-state-base /var/state/haproxy/
defaults
log global
log-format '%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs "%HM %[var(txn.base)] %HV"'
option redispatch 0
option dontlognull
option http-keep-alive
timeout http-request 5000
timeout connect 5000
timeout client 50000
timeout queue 5000
timeout server 50000
timeout tunnel 3600000
timeout http-keep-alive 60000
load-server-state-from-file global
peers localinstance
peer local 127.0.0.1:10000
frontend healthz
mode http
bind 0.0.0.0:1042 name v4
bind :::1042 name v6 v4v6
monitor-uri /healthz
option dontlog-normal
frontend http
mode http
bind 0.0.0.0:80 name v4
bind :::80 name v6
http-request set-var(txn.base) base
http-request set-var(txn.path) path
http-request set-var(txn.host) req.hdr(Host),field(1,:),lower
http-request set-var(txn.host_match) var(txn.host),map(/etc/haproxy/maps/host.map)
http-request set-var(txn.host_match) var(txn.host),regsub(^[^.]*,,),map(/etc/haproxy/maps/host.map,'') if !{ var(txn.host_match) -m found }
http-request set-var(txn.path_match) var(txn.host_match),concat(,txn.path,),map(/etc/haproxy/maps/path-exact.map)
http-request set-var(txn.path_match) var(txn.host_match),concat(,txn.path,),map_beg(/etc/haproxy/maps/path-prefix.map) if !{ var(txn.path_match) -m found }
http-request replace-path /api(/|$)(.*) /\2 if { var(txn.path_match) -m dom 94a5105484d914ea68b558c3662d581e }
http-request replace-path /kubernetes-dashboard(/|$)(.*) /\2 if { var(txn.path_match) -m dom 3c9d9f4fd27c5ae518111b33de8d9ca3 }
use_backend %[var(txn.path_match),field(1,.)]
default_backend haproxy-controller-ingress-default-backend-port-1
frontend https
mode http
bind 0.0.0.0:443 name v4
bind :::443 name v6
http-request set-var(txn.base) base
http-request set-var(txn.path) path
http-request set-var(txn.host) req.hdr(Host),field(1,:),lower
http-request set-var(txn.host_match) var(txn.host),map(/etc/haproxy/maps/host.map)
http-request set-var(txn.host_match) var(txn.host),regsub(^[^.]*,,),map(/etc/haproxy/maps/host.map,'') if !{ var(txn.host_match) -m found }
http-request set-var(txn.path_match) var(txn.host_match),concat(,txn.path,),map(/etc/haproxy/maps/path-exact.map)
http-request set-var(txn.path_match) var(txn.host_match),concat(,txn.path,),map_beg(/etc/haproxy/maps/path-prefix.map) if !{ var(txn.path_match) -m found }
http-request set-header X-Forwarded-Proto https
http-request replace-path /api(/|$)(.*) /\2 if { var(txn.path_match) -m dom 94a5105484d914ea68b558c3662d581e }
http-request replace-path /kubernetes-dashboard(/|$)(.*) /\2 if { var(txn.path_match) -m dom 3c9d9f4fd27c5ae518111b33de8d9ca3 }
use_backend %[var(txn.path_match),field(1,.)]
default_backend haproxy-controller-ingress-default-backend-port-1
frontend stats
mode http
bind *:1024
bind :::1024 name v6
stats enable
stats uri /
stats refresh 10s
http-request set-var(txn.base) base
http-request use-service prometheus-exporter if { path /metrics }
backend default-hello-kubernetes-80
mode http
balance roundrobin
option forwardfor
server SRV_1 10.239.160.4:8080 check weight 128
server SRV_2 10.239.32.3:8080 check weight 128
server SRV_3 127.0.0.1:8080 check disabled weight 128
server SRV_4 127.0.0.1:8080 check disabled weight 128
server SRV_5 127.0.0.1:8080 check disabled weight 128
server SRV_6 127.0.0.1:8080 check disabled weight 128
server SRV_7 127.0.0.1:8080 check disabled weight 128
server SRV_8 127.0.0.1:8080 check disabled weight 128
server SRV_9 127.0.0.1:8080 check disabled weight 128
server SRV_10 127.0.0.1:8080 check disabled weight 128
server SRV_11 127.0.0.1:8080 check disabled weight 128
server SRV_12 127.0.0.1:8080 check disabled weight 128
server SRV_13 127.0.0.1:8080 check disabled weight 128
server SRV_14 127.0.0.1:8080 check disabled weight 128
server SRV_15 127.0.0.1:8080 check disabled weight 128
server SRV_16 127.0.0.1:8080 check disabled weight 128
server SRV_17 127.0.0.1:8080 check disabled weight 128
server SRV_18 127.0.0.1:8080 check disabled weight 128
server SRV_19 127.0.0.1:8080 check disabled weight 128
server SRV_20 127.0.0.1:8080 check disabled weight 128
server SRV_21 127.0.0.1:8080 check disabled weight 128
server SRV_22 127.0.0.1:8080 check disabled weight 128
server SRV_23 127.0.0.1:8080 check disabled weight 128
server SRV_24 127.0.0.1:8080 check disabled weight 128
server SRV_25 127.0.0.1:8080 check disabled weight 128
server SRV_26 127.0.0.1:8080 check disabled weight 128
server SRV_27 127.0.0.1:8080 check disabled weight 128
server SRV_28 127.0.0.1:8080 check disabled weight 128
server SRV_29 127.0.0.1:8080 check disabled weight 128
server SRV_30 127.0.0.1:8080 check disabled weight 128
server SRV_31 127.0.0.1:8080 check disabled weight 128
server SRV_32 127.0.0.1:8080 check disabled weight 128
server SRV_33 127.0.0.1:8080 check disabled weight 128
server SRV_34 127.0.0.1:8080 check disabled weight 128
server SRV_35 127.0.0.1:8080 check disabled weight 128
server SRV_36 127.0.0.1:8080 check disabled weight 128
server SRV_37 127.0.0.1:8080 check disabled weight 128
server SRV_38 127.0.0.1:8080 check disabled weight 128
server SRV_39 127.0.0.1:8080 check disabled weight 128
server SRV_40 127.0.0.1:8080 check disabled weight 128
server SRV_41 127.0.0.1:8080 check disabled weight 128
server SRV_42 127.0.0.1:8080 check disabled weight 128
backend default-kubernetes-https
mode http
balance roundrobin
option forwardfor
server SRV_1 10.34.2.17:6443 no-check ssl verify none weight 128
server SRV_2 10.34.2.23:6443 no-check ssl verify none weight 128
server SRV_3 10.34.2.24:6443 no-check ssl verify none weight 128
server SRV_4 127.0.0.1:6443 no-check disabled ssl verify none weight 128
# SRV_5 .. SRV_41: identical disabled placeholder entries (127.0.0.1:6443, ssl verify none), trimmed for brevity
server SRV_42 127.0.0.1:6443 no-check disabled ssl verify none weight 128
backend haproxy-controller-ingress-default-backend-port-1
mode http
balance roundrobin
option forwardfor
server SRV_1 10.239.32.2:8080 check weight 128
server SRV_2 127.0.0.1:8080 check disabled weight 128
# SRV_3 .. SRV_41: identical disabled placeholder entries (127.0.0.1:8080), trimmed for brevity
server SRV_42 127.0.0.1:8080 check disabled weight 128
backend kubernetes-dashboard-kubernetes-dashboard-https
mode http
balance roundrobin
option forwardfor
server SRV_1 10.239.160.7:8443 no-check ssl alpn h2,http/1.1 verify none weight 128
server SRV_2 127.0.0.1:8443 no-check disabled ssl alpn h2,http/1.1 verify none weight 128
# SRV_3 .. SRV_41: identical disabled placeholder entries (127.0.0.1:8443, ssl alpn h2,http/1.1), trimmed for brevity
server SRV_42 127.0.0.1:8443 no-check disabled ssl alpn h2,http/1.1 verify none weight 128
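One detail worth flagging in the dump: the dashboard backend's servers advertise alpn h2,http/1.1, so HAProxy can negotiate HTTP/2 toward that backend, and HTTP/2 has no counterpart to the HTTP/1.1 "Connection: Upgrade" mechanism that port-forward relies on. To confirm what a given endpoint actually negotiates, a standard openssl probe works (address taken from the dump above):
openssl s_client -connect 10.239.160.7:8443 -alpn h2,http/1.1 </dev/null 2>/dev/null | grep -i 'ALPN'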
Please note that the ingress controllers above are installed in separate clusters (to avoid port and similar conflicts inside one cluster, and to make checking easier). If needed, though, I can install both controllers in parallel inside one cluster.
@ivanmatmati please let me know if I need to provide any additional details (configurations, cluster instances where this can be checked, etc.).
We have investigated the case you submitted further. It turns out that it comes down to the two headers we mentioned previously. Up to version 2.3, HAProxy supported the kind of configuration you use (systematic addition of the connection upgrade headers). This explains why it worked with jcmoraisjr/haproxy-ingress, which ships an older HAProxy version. Since version 2.4, the HAProxy team has reworked h1/h2 handling, simplifying the code and improving performance; managing a connection upgrade that carries a payload turned out to be uncommon and cumbersome, so that possibility was dropped. The way forward is therefore to force HTTP/1.1 toward the backend, as described earlier in this thread.
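A minimal sketch of the ingress-side change (only these two annotations are operative; everything else on the resource stays as it was):
metadata:
  annotations:
    haproxy.org/server-ssl: 'true'   # keep TLS toward the kube-apiserver
    haproxy.org/server-proto: h1     # force HTTP/1.1 so the Connection/Upgrade handshake can pass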
Thank you for the investigation. Given the circumstances, this issue can be closed.
We've stumbled upon an issue: connection upgrade requests are not handled by the haproxytech ingress controller.
Controller options:
The ingress resource:
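A sketch of the resource in question, reconstructed from the rest of the thread (the /api path, the path-rewrite, and the default/kubernetes:https backend all appear in the generated config above; metadata is illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kube-api        # illustrative name
  namespace: default
  annotations:
    haproxy.org/path-rewrite: '/api(/|$)(.*) /\2'
    haproxy.org/server-ssl: 'true'
spec:
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: kubernetes
                port:
                  name: https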
Should we adjust anything in the controller configuration or the ingress resource to get this working?
The alternative controller "jcmoraisjr/haproxy-ingress" works fine for the same case in the same cluster, so it is neither a network issue nor filtered connections. This case is pretty critical for us, since we currently cannot use port-forward requests at all. Reproduced on the latest version, although earlier versions are also affected.
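For completeness, the reproduction is simply (kubeconfig pointing at the ingress URL; service name taken from the config dump above):
kubectl get pods                                    # plain API requests via the ingress work
kubectl port-forward svc/hello-kubernetes 8080:80   # the upgrade-based request fails through the controller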