Closed: punkprzemo closed this issue 4 years ago.
Hi, I have tried reproducing this with the latest controller version (1.4.6 at the time of writing): I configured two ingresses, one with ssl-passthrough enabled and the other without, and both work correctly, meaning I get the expected HTTP/S responses.
Can you describe your environment (controller version, configmap, ingress objects) and tell us what you mean by "http ingresses stop working"?
Thanks
By "http ingresses stop working" I mean that HTTP-mode ingress backends stop working. (ssl-passthrough, when enabled, should proxy traffic in TCP mode.)
I'm using haproxytech/kubernetes-ingress:1.4.6 on Kubernetes 1.18.3.
The ingress controller is installed with the Helm chart, and the controller container is started with the options below:
spec:
  containers:
  - args:
    - --default-ssl-certificate=kube-ingress/ingress-wildcard-cert
    - --configmap=kube-ingress/ingress-private-kubernetes-ingress
    - --default-backend-service=kube-ingress/ingress-private-kubernetes-ingress-default-backend
    - --log=info
It runs as a hostPort pod:
    ports:
    - containerPort: 1042
      hostPort: 1042
      name: healthz
      protocol: TCP
    - containerPort: 80
      hostPort: 80
      name: http
      protocol: TCP
    - containerPort: 443
      hostPort: 443
      name: https
      protocol: TCP
    - containerPort: 1024
      hostPort: 1024
      name: stat
      protocol: TCP
Below is an example of a working HTTP-mode ingress I have:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whoami
  namespace: whoami
spec:
  rules:
  - host: whoami.mydomain.internal
    http:
      paths:
      - backend:
          serviceName: whoami
          servicePort: 80
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - whoami.mydomain.internal
Then I create an ssl-passthrough enabled ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
  name: dummy
spec:
  rules:
  - host: dummy-ssl-passthrough.mydomain.internal
    http:
      paths:
      - backend:
          serviceName: dummy-ssl-passthrough
          servicePort: 666
        path: /
After creating the ssl-passthrough ingress, the ingress from the first example (https://whoami.mydomain.internal) returns the error below in the Chrome browser:
This site can’t provide a secure connection. whoami.mydomain.internal sent an invalid response.
ERR_SSL_PROTOCOL_ERROR
The interesting part is how the ingress controller configuration changes. Before adding any ssl-passthrough enabled ingress, I have the following in /etc/haproxy/haproxy.cfg:
frontend https
    mode http
    bind 0.0.0.0:443 name bind_1 crt /etc/haproxy/certs ssl alpn h2,http/1.1
    bind :::443 name bind_2 crt /etc/haproxy/certs ssl v4v6 alpn h2,http/1.1
    ...
After adding an ssl-passthrough enabled ingress, I have the following in /etc/haproxy/haproxy.cfg:
frontend https
    mode http
    bind 127.0.0.1:8443 name bind_1
    http-request set-var(txn.host) req.hdr(Host)
    http-request set-var(txn.base) base
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    use_backend whoami-whoami-80 if { req.hdr(host),field(1,:) -i whoami.mydomain.internal } { path_beg / }
    use_backend vault-vault-ui-8200 if { req.hdr(host),field(1,:) -i vault.mydomain.internal } { path_beg / }
    default_backend kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080
frontend ssl
    mode tcp
    bind 0.0.0.0:443 name bind_1
    bind :::443 name bind_2 v4v6
    log-format '%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %[var(sess.sni)]'
    tcp-request inspect-delay 5000
    tcp-request content set-var(sess.sni) req_ssl_sni
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend kube-ingress-dummy-ssl-passthrough-666 if { req_ssl_sni -i dummy-ssl-passthrough.mydomain.internal }
    default_backend https
In the https frontend (mode http), the bind port changes from 0.0.0.0:443 to 127.0.0.1:8443 ...
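For comparison, in the working layout (shown later in this thread, after the fix) the TCP-mode ssl frontend owns port 443 and relays non-passthrough traffic to the HTTPS frontend on loopback, which must keep its crt/ssl options. A sketch assembled from the configs quoted in this thread:

```
frontend ssl
    mode tcp
    bind 0.0.0.0:443 name bind_1
    tcp-request inspect-delay 5000
    tcp-request content set-var(sess.sni) req_ssl_sni
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend kube-ingress-dummy-ssl-passthrough-666 if { req_ssl_sni -i dummy-ssl-passthrough.mydomain.internal }
    default_backend https    # everything else is relayed to the HTTPS frontend

frontend https
    mode http
    # the crt/ssl options must survive the move to the loopback bind
    bind 127.0.0.1:8443 name bind_1 crt /etc/haproxy/certs ssl alpn h2,http/1.1
```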
Edit: additionally, here are the HAProxy ingress logs at debug log level.
Adding an ingress with ssl-passthrough enabled:
2020/08/07 15:44:54 DEBUG service.go:151 Ingress 'kube-ingress/dummy': Creating new backend 'kube-ingress-dummy-ssl-passthrough-666'
2020/08/07 15:44:54 DEBUG backend-annotations.go:83 Backend 'kube-ingress-dummy-ssl-passthrough-666': Configuring 'load-balance' annotation
2020/08/07 15:44:54 DEBUG service.go:163 Ingress 'kube-ingress/dummy': Applying annotations changes to backend 'kube-ingress-dummy-ssl-passthrough-666'
2020/08/07 15:44:54 ERROR controller.go:145 ingress servicePort(Str: , Int: 666) for serviceName 'dummy-ssl-passthrough' not found
2020/08/07 15:44:54 INFO https.go:165 Enabling ssl-passthrough
2020/08/07 15:44:54 DEBUG backend-switching.go:53 Updating Backend Switching rules
2020/08/07 15:44:54 INFO controller.go:213 HAProxy reloaded
Removing Ingress with ssl-passthrough enabled:
2020/08/07 15:45:42 ERROR controller.go:145 ingress servicePort(Str: , Int: 666) for serviceName 'dummy-ssl-passthrough' not found
2020/08/07 15:45:42 INFO https.go:171 Disabling ssl-passthrough
2020/08/07 15:45:42 DEBUG backend-switching.go:53 Updating Backend Switching rules
2020/08/07 15:45:42 DEBUG backend-switching.go:134 Deleting backend 'kube-ingress-dummy-ssl-passthrough-666'
2020/08/07 15:45:42 INFO controller.go:213 HAProxy reloaded
Regards, Przemek
Thanks for the details.
Yes, I see a bug there: after enabling ssl-passthrough, the "crt" and "ssl" directives disappear from the https frontend; this is not the intended behavior.
I will submit a fix soon. In the meantime, killing the pod to get a newly created one seems to be a workaround.
I have pushed a fix on master https://github.com/haproxytech/kubernetes-ingress/commit/bad8b5aac2a893f8b6ca6a1879963cd201f277ab
You can try it via the master image haproxytech/kubernetes-ingress:dev
Hello, I can confirm that the issue with the disappearing "crt/ssl" directives is fixed. Thank you very much for this quick fix @Mo3m3n.
But it seems I have found one more issue when ssl-passthrough is enabled.
I'm using an ACL with whitelist: 10.10.10.0/24 in the configmap. When ssl-passthrough is enabled, I start getting 403 responses even when requesting from inside the whitelisted network.
Traffic from outside the whitelisted network gets the error below instead of a 403:
curl https://whoami.mydomain.internal/ -Ivvv
* Trying 10.10.10.121...
* TCP_NODELAY set
* Connected to whoami.mydomain.internal (10.10.10.121) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to whoami.mydomain.internal:443
* stopped the pause stream!
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to whoami.mydomain.internal:443
With ssl-passthrough disabled, the ACL works correctly: I can reach whoami.mydomain.internal from within the whitelisted network, and I get a 403 if I am outside of that network.
Below is the frontend config from /etc/haproxy/haproxy.cfg after enabling ssl-passthrough:
frontend http
    mode http
    bind 0.0.0.0:80 name bind_1
    bind :::80 v4v6 name bind_2
    http-request deny deny_status 403 if { req.hdr(Host) -f /etc/haproxy/maps/7268392025017413466.lst } !{ src -f /etc/haproxy/maps/2445122817577372530.lst }
    http-request set-var(txn.host) req.hdr(Host)
    http-request set-var(txn.base) base
    http-request redirect scheme https code 302 if { req.hdr(host),field(1,:) -f /etc/haproxy/maps/16657147599902430202.lst } !{ ssl_fc }
    use_backend whoami-whoami-80 if { req.hdr(host),field(1,:) -i whoami.mydomain.internal } { path_beg / }
    use_backend vault-vault-ui-8200 if { req.hdr(host),field(1,:) -i vault.mydomain.internal } { path_beg / }
    default_backend kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080
frontend https
    mode http
    bind 127.0.0.1:8443 name bind_1 crt /etc/haproxy/certs ssl alpn h2,http/1.1
    bind 127.0.0.1:8443 name bind_2 crt /etc/haproxy/certs ssl v4v6 alpn h2,http/1.1
    http-request deny deny_status 403 if { req.hdr(Host) -f /etc/haproxy/maps/7268392025017413466.lst } !{ src -f /etc/haproxy/maps/2445122817577372530.lst }
    http-request set-var(txn.host) req.hdr(Host)
    http-request set-var(txn.base) base
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    use_backend whoami-whoami-80 if { req.hdr(host),field(1,:) -i whoami.mydomain.internal } { path_beg / }
    use_backend vault-vault-ui-8200 if { req.hdr(host),field(1,:) -i vault.mydomain.internal } { path_beg / }
    default_backend kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080
frontend ssl
    mode tcp
    bind 0.0.0.0:443 name bind_1
    bind :::443 name bind_2 v4v6
    log-format '%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %[var(sess.sni)]'
    tcp-request content reject if { req_ssl_sni -f /etc/haproxy/maps/7268392025017413466.lst } !{ src -f /etc/haproxy/maps/2445122817577372530.lst }
    tcp-request inspect-delay 5000
    tcp-request content set-var(sess.sni) req_ssl_sni
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend kube-ingress-dummy-ssl-passthrough-666 if { req_ssl_sni -i dummy-ssl-passthrough.mydomain.internal }
    default_backend https
# cat /etc/haproxy/maps/2445122817577372530.lst
10.10.10.0/24
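As an aside, the TLS-level error seen from outside the whitelist (instead of an HTTP 403) follows from where the ACL is now enforced: the ssl frontend runs in TCP mode, where no HTTP layer exists yet, so the whitelist can only drop the raw connection mid-handshake. The relevant line from the config above:

```
frontend ssl
    mode tcp
    # non-whitelisted clients are cut off during the TLS handshake,
    # which curl reports as an SSL error rather than a 403 page
    tcp-request content reject if { req_ssl_sni -f /etc/haproxy/maps/7268392025017413466.lst } !{ src -f /etc/haproxy/maps/2445122817577372530.lst }
```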
Regards, Przemek
Thanks @punkprzemo for the continuous reports; ssl-passthrough was definitely quite buggy :)
The behavior you reported was fixed with https://github.com/haproxytech/kubernetes-ingress/commit/77fb077f6d52cc57fa859287622c0b76c4668fcf
You can re-pull haproxytech/kubernetes-ingress:dev and test it.
Thank you @Mo3m3n for this fix. Great job, really appreciated.
I have deployed the latest haproxytech/kubernetes-ingress:dev on my lab cluster and I can confirm that the ACL rules keep working even when I enable an ssl-passthrough ingress. Nice one.
The only difference now is in the response for ACL-protected endpoints when ssl-passthrough is enabled:
HTTP:
curl http://whoami.mydomain.internal
<html><body><h1>403 Forbidden</h1>
Request forbidden by administrative rules.
</body></html>
HTTPS:
curl https://whoami.mydomain.internal
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to whoami.mydomain.internal:443
But this is not a problem. Finally, the ssl-passthrough: "true" annotation won't break my other ingresses and ACL rules anymore, which is what I want.
I will roll out this haproxytech/kubernetes-ingress:dev image tomorrow on my second cluster, where I use more HAProxy options/annotations, and I will see if anything else breaks because of enabling ssl-passthrough.
Regards
Hi @Mo3m3n
I've just tested the haproxytech/kubernetes-ingress:dev image on my second cluster. I found that the --namespace-whitelist functionality is currently broken in the haproxytech/kubernetes-ingress:dev image, and this functionality is crucial for me.
I have two HAProxy kubernetes ingress controllers on my second cluster. One of them uses --namespace-whitelist=NAMESPACE_NAME as an argument to watch for ingresses only in certain namespaces. When I redeployed this controller with the haproxytech/kubernetes-ingress:dev image, all backends disappeared from /etc/haproxy/haproxy.cfg.
Below is the full /etc/haproxy/haproxy.cfg after redeploying the ingress controller with the haproxytech/kubernetes-ingress:dev image and --namespace-whitelist=whoami as a container argument.
/ # cat /etc/haproxy/haproxy.cfg
# _version=1
# HAProxy Technologies
# https://www.haproxy.com/
#
# this file is not meant to be changed directly
# it is under haproxy ingress controller management
#
global
    daemon
    master-worker
    pidfile /var/run/haproxy.pid
    server-state-file global
    server-state-base /var/state/haproxy/
    stats socket /var/run/haproxy-runtime-api.sock level admin expose-fd listeners
    stats timeout 1m
    tune.ssl.default-dh-param 2048
    log 127.0.0.1:514 local0 notice
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
    ssl-default-bind-options no-sslv3 no-tls-tickets no-tlsv10
defaults
    log global
    log-format '%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs "%HM %[var(txn.base)] %HV"'
    option redispatch
    option dontlognull
    option http-keep-alive
    timeout http-request 5s
    timeout connect 5s
    timeout client 50s
    timeout queue 5s
    timeout server 50s
    timeout tunnel 1h
    timeout http-keep-alive 1m
    load-server-state-from-file global
frontend https
    mode http
    bind 0.0.0.0:443 name bind_1
    bind :::443 v4v6 name bind_2
    http-request set-var(txn.base) base
    http-request set-var(txn.host) req.hdr(Host)
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    default_backend default_backend
frontend http
    bind 0.0.0.0:80 name bind_1
    bind :::80 v4v6 name bind_2
    http-request set-var(txn.base) base
    http-request set-var(txn.host) req.hdr(Host)
    mode http
    default_backend default_backend
backend default_backend
    mode http
frontend healthz
    bind 0.0.0.0:1042 name healtz_1
    mode http
    monitor-uri /healthz
    option dontlog-normal
frontend stats
    mode http
    bind *:1024
    http-request use-service prometheus-exporter if { path /metrics }
    stats enable
    stats uri /
    stats refresh 10s
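A quick way to tell whether the controller has synced any service backends into the rendered config is to list its `backend` sections. The `list_backends` helper below is a hypothetical inspection sketch for a dumped haproxy.cfg, not part of the controller:

```python
import re

def list_backends(cfg_text: str) -> list:
    # 'backend <name>' section headers start at column 0 in the rendered config
    return re.findall(r"^backend\s+(\S+)", cfg_text, flags=re.MULTILINE)

# In the broken state above, only the placeholder backend is present:
cfg = """\
frontend https
    mode http
    default_backend default_backend
backend default_backend
    mode http
"""
print(list_backends(cfg))  # ['default_backend']
```

After a healthy sync, backends such as `whoami-whoami-80` would show up in the list as well.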
There are no backends, and the frontend configuration is also quite empty.
Regards
Hi @punkprzemo
I am not able to reproduce this; using --namespace-whitelist populates HAProxy with backends from the corresponding namespace.
Do you have anything in your logs?
I have the same behavior on the LAB cluster. The ingress controller is started with the args below:
containers:
- args:
  - --default-ssl-certificate=kube-ingress/ingress-wildcard-cert
  - --configmap=kube-ingress/ingress-private-kubernetes-ingress
  - --default-backend-service=kube-ingress/ingress-private-kubernetes-ingress-default-backend
  - --namespace-whitelist=whoami
  - --log=debug
Below are the logs when the pod is started with the haproxytech/kubernetes-ingress:dev image:
2020/08/13 10:54:50
_ _ _ ____
| | | | / \ | _ \ _ __ _____ ___ _
| |_| | / _ \ | |_) | '__/ _ \ \/ / | | |
| _ |/ ___ \| __/| | | (_) > <| |_| |
|_| |_/_/ \_\_| |_| \___/_/\_\\__, |
_ __ _ |___/ ___ ____
| |/ / _| |__ ___ _ __ _ __ ___| |_ ___ ___ |_ _/ ___|
| ' / | | | '_ \ / _ \ '__| '_ \ / _ \ __/ _ \/ __| | | |
| . \ |_| | |_) | __/ | | | | | __/ || __/\__ \ | | |___
|_|\_\__,_|_.__/ \___|_| |_| |_|\___|\__\___||___/ |___\____|
2020/08/13 10:54:50 HAProxy Ingress Controller v1.4.5 77fb077.dev
2020/08/13 10:54:50 Build from: https://mo3m3n@github.com/haproxytech/kubernetes-ingress
2020/08/13 10:54:50 Build date: 2020-08-11T14:30:27
2020/08/13 10:54:50 ConfigMap: kube-ingress/ingress-private-kubernetes-ingress
2020/08/13 10:54:50 Ingress class:
2020/08/13 10:54:50 Publish service:
2020/08/13 10:54:50 Default backend service: kube-ingress/ingress-private-kubernetes-ingress-default-backend
2020/08/13 10:54:50 Default ssl certificate: kube-ingress/ingress-wildcard-cert
2020/08/13 10:54:50 Controller sync period: 5s
2020/08/13 10:54:50 Kubernetes Shared Informer default resync period: 1m0s
2020/08/13 10:54:50 controller.go:265 Running with HA-Proxy version 2.1.7 2020/06/09 - https://haproxy.org/
2020/08/13 10:54:50 INFO controller.go:270 Starting HAProxy with /etc/haproxy/haproxy.cfg
2020/08/13 10:54:50 INFO controller.go:275 Running on ingress-private-kubernetes-ingress-cqdr2
2020/08/13 10:54:50 INFO controller.go:104 Running on Kubernetes version: v1.18.3 linux/amd64
2020/08/13 10:54:50 DEBUG monitor.go:38 Executing syncPeriod every 5s
[NOTICE] 225/105450 (19) : New worker #1 (20) forked
End of logs (no whoami-related logs).
If I switch back to the 1.4.6 image, the backends from --namespace-whitelist=whoami work and appear in the logs:
2020/08/13 11:15:05
_ _ _ ____
| | | | / \ | _ \ _ __ _____ ___ _
| |_| | / _ \ | |_) | '__/ _ \ \/ / | | |
| _ |/ ___ \| __/| | | (_) > <| |_| |
|_| |_/_/ \_\_| |_| \___/_/\_\\__, |
_ __ _ |___/ ___ ____
| |/ / _| |__ ___ _ __ _ __ ___| |_ ___ ___ |_ _/ ___|
| ' / | | | '_ \ / _ \ '__| '_ \ / _ \ __/ _ \/ __| | | |
| . \ |_| | |_) | __/ | | | | | __/ || __/\__ \ | | |___
|_|\_\__,_|_.__/ \___|_| |_| |_|\___|\__\___||___/ |___\____|
2020/08/13 11:15:05 HAProxy Ingress Controller v1.4.6 39b5038
2020/08/13 11:15:05 Build from: git@github.com:haproxytech/kubernetes-ingress.git
2020/08/13 11:15:05 Build date: 2020-07-23T18:17:23
2020/08/13 11:15:05 ConfigMap: kube-ingress/ingress-private-kubernetes-ingress
2020/08/13 11:15:05 Ingress class:
2020/08/13 11:15:05 Publish service:
2020/08/13 11:15:05 Default backend service: kube-ingress/ingress-private-kubernetes-ingress-default-backend
2020/08/13 11:15:05 Default ssl certificate: kube-ingress/ingress-wildcard-cert
2020/08/13 11:15:05 Controller sync period: 5s
2020/08/13 11:15:05 controller.go:254 Running with HA-Proxy version 2.1.7 2020/06/09 - https://haproxy.org/
2020/08/13 11:15:05 INFO controller.go:259 Starting HAProxy with /etc/haproxy/haproxy.cfg
2020/08/13 11:15:05 INFO controller.go:264 Running on ingress-private-kubernetes-ingress-twngb
2020/08/13 11:15:05 INFO controller.go:95 Running on Kubernetes version: v1.18.3 linux/amd64
2020/08/13 11:15:05 DEBUG monitor.go:35 Executing syncPeriod every 5s
[NOTICE] 225/111505 (18) : New worker #1 (19) forked
2020/08/13 11:15:06 DEBUG monitor.go:90 Configmap processed
2020/08/13 11:15:11 DEBUG service.go:151 Ingress 'kube-ingress/DefaultService': Creating new backend 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080'
2020/08/13 11:15:11 DEBUG backend-annotations.go:83 Backend 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080': Configuring 'load-balance' annotation
2020/08/13 11:15:11 DEBUG backend-annotations.go:83 Backend 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080': Configuring 'forwarded-for' annotation
2020/08/13 11:15:11 DEBUG service.go:163 Ingress 'kube-ingress/DefaultService': Applying annotations changes to backend 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080'
2020/08/13 11:15:11 DEBUG service.go:185 Using service 'kube-ingress/ingress-private-kubernetes-ingress-default-backend' as default backend
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_mNWhb'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_HhiJt'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_HUDKb'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_XITUa'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_6mytf'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_nAXsL'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_MUckg'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_kBJwd'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_mKxpZ'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_FjRR7'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_qFrxA'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_OPGVW'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_5hlyz'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_1Dy5I'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_0f1q1'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_kp3Mz'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_uqthb'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_p9o5z'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_NKKGt'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_w4KCO'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_2FZIq'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_OQLRD'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_SqAgV'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_9QuLK'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_ogPv9'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_h0t7r'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_21jOK'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_egVFw'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_MN09b'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_FcAqU'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_ZXStf'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_4WMOa'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_WBHcq'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_qcykG'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_Wu73x'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_mRiGd'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_QHGxW'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_40OK2'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_BmVoC'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_BRm8C'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_dJQ4X'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_FiRff'
2020/08/13 11:15:11 DEBUG service.go:151 Ingress 'whoami/whoami': Creating new backend 'whoami-whoami-80'
2020/08/13 11:15:11 DEBUG backend-annotations.go:83 Backend 'whoami-whoami-80': Configuring 'load-balance' annotation
2020/08/13 11:15:11 DEBUG backend-annotations.go:83 Backend 'whoami-whoami-80': Configuring 'forwarded-for' annotation
2020/08/13 11:15:11 DEBUG service.go:163 Ingress 'whoami/whoami': Applying annotations changes to backend 'whoami-whoami-80'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_Vw7TK'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_JcS7x'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_5uAOP'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_RrbqQ'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_uiL6W'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_YSoNT'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_otvme'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_jgCll'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_DuLWV'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_6sKdQ'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_mTrhb'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_cvWZu'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_ts3uW'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_6PgpO'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_m1Rbn'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_KMhmp'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_ByO7X'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_IbbHa'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_unyfn'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_nwpAJ'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_TQdiu'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_1SWl8'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_wnI2x'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_dZUGu'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_wpvV6'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_N41Ip'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_MNqPY'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_dA73t'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_ZYWuA'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_HRpLZ'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_FduRY'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_VEfcc'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_hNIck'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_FFRUJ'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_BmoJC'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_XtUK5'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_BwCds'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_zB9ss'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_2O6aE'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_4VyDU'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_4fdnA'
2020/08/13 11:15:11 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_14Byv'
2020/08/13 11:15:11 WARNING https.go:147 secret [whoami/] does not exist, ignoring.
2020/08/13 11:15:11 DEBUG https.go:94 Using certtificate from secret 'kube-ingress/ingress-wildcard-cert'
2020/08/13 11:15:11 DEBUG requests.go:85 Updating HTTP request rules for HTTP and HTTPS frontends
2020/08/13 11:15:11 DEBUG backend-switching.go:53 Updating Backend Switching rules
2020/08/13 11:15:11 DEBUG backend-switching.go:134 Deleting backend 'default_backend'
2020/08/13 11:15:11 INFO controller.go:213 HAProxy reloaded
Regards
2020/08/13 11:15:11 DEBUG service.go:151 Ingress 'whoami/whoami': Creating new backend 'whoami-whoami-80'
the controller seems to be doing the job.
The config you shared previously is # _version=1 (at the top of the config file); this version is created before any sync with k8s. So maybe you checked the configuration a bit too early? To make sure you have the updated config, you can look at it after the second reload.
If the problem persists, we first need to check whether this is related to ssl-passthrough: if it is (i.e. the problem happens only when ssl-passthrough is enabled), then we continue debugging it here; otherwise we would need a different GitHub issue.
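One way to apply that check programmatically: the controller writes a `# _version=N` marker as the first line of the rendered config, and (as noted above) `_version=1` predates any sync with k8s. The `config_version` helper below is a hypothetical sketch for reading that marker:

```python
def config_version(cfg_text: str) -> int:
    # First line of a controller-managed haproxy.cfg looks like '# _version=N'
    first_line = cfg_text.splitlines()[0]
    prefix = "# _version="
    if not first_line.startswith(prefix):
        raise ValueError("not a controller-managed haproxy.cfg")
    return int(first_line[len(prefix):])

# A config still at version 1 was written before any sync with the k8s API:
print(config_version("# _version=1\nglobal\n    daemon\n"))  # 1
```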
ssl-passthrough is not enabled anywhere.
I'm changing the image by editing the daemonset definition.
The
2020/08/13 11:15:11 DEBUG service.go:151 Ingress 'whoami/whoami': Creating new backend 'whoami-whoami-80'
log line is from when the pod is started with the haproxytech/kubernetes-ingress:1.4.6 image; with that image --namespace-whitelist works correctly and I'm able to reach the ingress https://whoami.mydomain.internal.
Simply switching the image in the daemonset to haproxytech/kubernetes-ingress:dev breaks the ingress controller when --namespace-whitelist=NAMESPACE is set as an argument.
What is more, the ingress controller doesn't break if there is no --namespace-whitelist=NAMESPACE option in the container args when I start the pod with haproxytech/kubernetes-ingress:dev.
Regarding config versioning: there is no config reload with the haproxytech/kubernetes-ingress:dev image and --namespace-whitelist=whoami set, even if I delete and recreate the whoami ingress.
Maybe the --namespace-whitelist issue is a different bug that appeared somewhere in the dev version of the image before your ssl-passthrough fixes were even introduced. Until today I had never tested the dev image on the cluster where I'm using --namespace-whitelist; I had only tested it on my LAB env.
I will try to test with previous versions of the dev image to see where this issue was introduced, but I need to check whether it is possible to pull the previous dev versions of the image.
@Mo3m3n was the change 8d1c3fc from 16 days ago tested?
I have built two images: one at commit 8d1c3fc, which is not working with --namespace-whitelist, and one at the commit just before it, 17b53bd, which is working with --namespace-whitelist.
Logs from the commit 8d1c3fc image:
2020/08/13 14:53:02
_ _ _ ____
| | | | / \ | _ \ _ __ _____ ___ _
| |_| | / _ \ | |_) | '__/ _ \ \/ / | | |
| _ |/ ___ \| __/| | | (_) > <| |_| |
|_| |_/_/ \_\_| |_| \___/_/\_\\__, |
_ __ _ |___/ ___ ____
| |/ / _| |__ ___ _ __ _ __ ___| |_ ___ ___ |_ _/ ___|
| ' / | | | '_ \ / _ \ '__| '_ \ / _ \ __/ _ \/ __| | | |
| . \ |_| | |_) | __/ | | | | | __/ || __/\__ \ | | |___
|_|\_\__,_|_.__/ \___|_| |_| |_|\___|\__\___||___/ |___\____|
2020/08/13 14:53:02 HAProxy Ingress Controller v1.4.5 8d1c3fc.dev
2020/08/13 14:53:02 Build from: https://github.com/haproxytech/kubernetes-ingress.git
2020/08/13 14:53:02 Build date: 2020-08-13T14:24:59
2020/08/13 14:53:02 ConfigMap: kube-ingress/ingress-private-kubernetes-ingress
2020/08/13 14:53:02 Ingress class:
2020/08/13 14:53:02 Publish service:
2020/08/13 14:53:02 Default backend service: kube-ingress/ingress-private-kubernetes-ingress-default-backend
2020/08/13 14:53:02 Default ssl certificate: kube-ingress/ingress-wildcard-cert
2020/08/13 14:53:02 Controller sync period: 5s
2020/08/13 14:53:02 Kubernetes Shared Informer default resync period: 1m0s
2020/08/13 14:53:02 controller.go:265 Running with HA-Proxy version 2.1.8 2020/07/31 - https://haproxy.org/
2020/08/13 14:53:02 INFO controller.go:270 Starting HAProxy with /etc/haproxy/haproxy.cfg
2020/08/13 14:53:02 INFO controller.go:275 Running on ingress-private-kubernetes-ingress-pswrp
2020/08/13 14:53:02 INFO controller.go:104 Running on Kubernetes version: v1.18.3 linux/amd64
2020/08/13 14:53:02 DEBUG monitor.go:38 Executing syncPeriod every 5s
[NOTICE] 225/145302 (18) : New worker #1 (19) forked
End of the logs above; nothing more is coming.
Pod logs from the image built at commit 17b53bd:
2020/08/13 15:02:35
_ _ _ ____
| | | | / \ | _ \ _ __ _____ ___ _
| |_| | / _ \ | |_) | '__/ _ \ \/ / | | |
| _ |/ ___ \| __/| | | (_) > <| |_| |
|_| |_/_/ \_\_| |_| \___/_/\_\\__, |
_ __ _ |___/ ___ ____
| |/ / _| |__ ___ _ __ _ __ ___| |_ ___ ___ |_ _/ ___|
| ' / | | | '_ \ / _ \ '__| '_ \ / _ \ __/ _ \/ __| | | |
| . \ |_| | |_) | __/ | | | | | __/ || __/\__ \ | | |___
|_|\_\__,_|_.__/ \___|_| |_| |_|\___|\__\___||___/ |___\____|
2020/08/13 15:02:35 HAProxy Ingress Controller v1.4.5 17b53bd.dev
2020/08/13 15:02:35 Build from: https://github.com/haproxytech/kubernetes-ingress.git
2020/08/13 15:02:35 Build date: 2020-08-13T14:39:23
2020/08/13 15:02:35 ConfigMap: kube-ingress/ingress-private-kubernetes-ingress
2020/08/13 15:02:35 Ingress class:
2020/08/13 15:02:35 Publish service:
2020/08/13 15:02:35 Default backend service: kube-ingress/ingress-private-kubernetes-ingress-default-backend
2020/08/13 15:02:35 Default ssl certificate: kube-ingress/ingress-wildcard-cert
2020/08/13 15:02:35 Controller sync period: 5s
2020/08/13 15:02:35 Kubernetes Shared Informer default resync period: 1m0s
2020/08/13 15:02:35 controller.go:265 Running with HA-Proxy version 2.1.8 2020/07/31 - https://haproxy.org/
2020/08/13 15:02:35 INFO controller.go:270 Starting HAProxy with /etc/haproxy/haproxy.cfg
2020/08/13 15:02:35 INFO controller.go:275 Running on ingress-private-kubernetes-ingress-5tspc
2020/08/13 15:02:35 INFO controller.go:104 Running on Kubernetes version: v1.18.3 linux/amd64
2020/08/13 15:02:35 DEBUG monitor.go:38 Executing syncPeriod every 5s
[NOTICE] 225/150235 (18) : New worker #1 (19) forked
2020/08/13 15:02:35 DEBUG monitor.go:104 Configmap processed
2020/08/13 15:02:45 DEBUG service.go:151 Ingress 'kube-ingress/DefaultService': Creating new backend 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080'
2020/08/13 15:02:45 DEBUG backend-annotations.go:84 Backend 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080': Configuring 'load-balance' annotation
2020/08/13 15:02:45 DEBUG backend-annotations.go:84 Backend 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080': Configuring 'forwarded-for' annotation
2020/08/13 15:02:45 DEBUG service.go:163 Ingress 'kube-ingress/DefaultService': Applying annotations changes to backend 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080'
2020/08/13 15:02:45 DEBUG service.go:185 Using service 'kube-ingress/ingress-private-kubernetes-ingress-default-backend' as default backend
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_2Tw2k'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_Y7BPU'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_wD78T'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_sspv2'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_cMTID'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_shBVv'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_0kdqJ'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_edgLK'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_PIWKZ'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_3ZyBS'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_e0VIU'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_FW8Vn'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_nxiop'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_4V6Nx'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_YED4t'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_1KvSb'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_KFHx9'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_DbKqZ'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_OIoDr'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_kFs3M'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_3HbvS'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_Iwwx6'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_UY1pb'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_wckLc'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_ubsdg'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_oJ4sj'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_MgYzV'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_cAmKC'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_ZtNqQ'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_xsQTa'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_fZG5k'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_MM86T'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_XShYD'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_iwmLg'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_16w1L'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_XH49i'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_LPbhZ'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_ItAQL'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_W5vCJ'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_uWeby'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_w7nXi'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'kube-ingress-ingress-private-kubernetes-ingress-default-backend-8080/SRV_pr1oE'
2020/08/13 15:02:45 DEBUG service.go:151 Ingress 'whoami/whoami': Creating new backend 'whoami-whoami-80'
2020/08/13 15:02:45 DEBUG backend-annotations.go:84 Backend 'whoami-whoami-80': Configuring 'load-balance' annotation
2020/08/13 15:02:45 DEBUG backend-annotations.go:84 Backend 'whoami-whoami-80': Configuring 'forwarded-for' annotation
2020/08/13 15:02:45 DEBUG service.go:163 Ingress 'whoami/whoami': Applying annotations changes to backend 'whoami-whoami-80'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_Sa0Vc'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_tcHy4'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_lhBr0'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_u3aAP'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_jkxpu'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_v8071'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_FTarp'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_4oU3Y'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_FEC8j'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_HBlxh'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_tS15W'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_UWQ6Q'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_xXEJz'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_9fDwK'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_3EPFy'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_g2qLk'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_IWySE'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_F4IeS'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_UAgDE'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_M8zYJ'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_QHU2I'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_8iEDa'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_RDRPw'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_k7wd9'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_l2yki'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_h4tNW'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_YXkFd'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_yO9QY'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_jooz4'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_17sr0'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_aSfDU'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_oIMNV'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_ynxJ3'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_Z7obY'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_naQAl'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_f3q0A'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_40Gz5'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_iGNkx'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_COE6v'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_0KDF1'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_tgQEJ'
2020/08/13 15:02:45 DEBUG service.go:82 Creating server 'whoami-whoami-80/SRV_VC0uB'
2020/08/13 15:02:45 WARNING https.go:66 secret [whoami/] does not exist, ignoring.
2020/08/13 15:02:45 DEBUG handler-secret.go:37 Using certificate from secret 'kube-ingress/ingress-wildcard-cert'
2020/08/13 15:02:45 DEBUG requests.go:85 Updating HTTP request rules for HTTP and HTTPS frontends
2020/08/13 15:02:45 DEBUG backend-switching.go:53 Updating Backend Switching rules
2020/08/13 15:02:45 DEBUG backend-switching.go:134 Deleting backend 'default_backend'
2020/08/13 15:02:45 INFO controller.go:224 HAProxy reloaded
So I think this is a separate bug for the --namespace-whitelist
option, introduced with commit 8d1c3fc.
was that change 8d1c3fc from 16 days ago tested?
Yes, it was tested. We can debug this in a different GitHub issue to avoid confusion with ssl-passthrough, so don't hesitate to put your last post in a new issue and we will answer you with more details.
I have opened new issue #227 for --namespace-whitelist not populating HAProxy backends from the corresponding namespace.
I think the current issue #226 can be closed, as @Mo3m3n's commits for ssl-passthrough
fixed it.
Cool, I have answered the namespace behavior in the corresponding GitHub issue. Now back to this:
The only difference now is in the response message for ACL-protected endpoints when ssl-passthrough is enabled:
you get a 403 after an HTTP request and an SSL error after an HTTPS one. This is because, for an HTTPS request, the reject is done at the SSL layer, which means we have the following workflow:
I think decryption just to be able to send back an HTTPS 403 answer is not necessary, at least for me (and can be resource-expensive?), so thank you for the explanation.
I'm happy with the current status of the fix you made for ssl-passthrough.
This issue can be closed.
Great job. :+1:
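For context on the behavior discussed above: when ssl-passthrough is enabled, the controller switches the 443 frontend to TCP mode and routes on the TLS SNI instead of terminating TLS. Below is a hand-written sketch of that kind of HAProxy frontend; the names and layout are illustrative, not the controller's actual generated configuration.

```
frontend https_passthrough
    mode tcp
    bind :443
    # Wait for the TLS ClientHello so the SNI is available for routing
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # Passthrough hosts are matched by SNI and proxied as raw TCP
    use_backend dummy-ssl-passthrough if { req_ssl_sni -i dummy-ssl-passthrough.mydomain.internal }
    # Everything else goes to the TLS-terminating (HTTP mode) path
    default_backend https_offloading
```

Because any reject for passthrough traffic happens before TLS is terminated, the client sees a TLS handshake error rather than an HTTP 403 response.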
Is there any way to disable certain types of HAProxy ingress annotations? Here is what I mean: I have a multi-user Kubernetes cluster, and the HAProxy ingress controller is used in HTTP mode. If any user in any namespace sets the
ssl-passthrough: true
annotation on their service or ingress object, then all HTTP ingresses in the cluster stop working! From my perspective this is a huge security issue, because any user who can create services or ingress objects in the cluster can break the whole HTTP-mode ingress-controller traffic. I can see that the documentation says the
ssl-passthrough
annotation makes a number of the controller annotations (those requiring HTTP mode) unavailable. But how can I protect HTTP ingresses from being broken by an
ssl-passthrough: true
annotation set anywhere in the cluster by mistake, by an unaware user, or by an attacker?