hjacobs / kube-ops-view

Kubernetes Operational View - read-only system dashboard for multiple K8s clusters
https://kubernetes-operational-view.readthedocs.io/
GNU General Public License v3.0
1.82k stars 259 forks

Working behind haproxy v.1.5.18 #255

Open maprager opened 4 years ago

maprager commented 4 years ago

When routing via HAProxy v1.5.18, the browser seems to get stuck without ever showing the cluster. My guess is that this is because the /events call is never-ending and doesn't seem to close. Does anyone have a good answer to solve this?
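For context, the "never-ending" behaviour is by design: /events is a server-sent-events stream, so the HTTP response stays open indefinitely and any proxy in the path must tolerate a long-lived response. A minimal self-contained sketch (a hypothetical local server, not kube-ops-view itself) shows what such a stream looks like to a client:

```python
import http.server
import threading
import time
import urllib.request


class SSEHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # An /events-style endpoint: headers are sent once, then the body
        # streams event by event. (Only three events here for the demo;
        # the real endpoint never finishes.)
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        try:
            for i in range(3):
                self.wfile.write(f"data: tick {i}\n\n".encode())
                self.wfile.flush()
                time.sleep(0.1)
        except BrokenPipeError:
            pass  # client went away

    def log_message(self, *args):
        pass  # keep the demo quiet


server = http.server.HTTPServer(("127.0.0.1", 0), SSEHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The client receives events incrementally; from a proxy's point of view the
# response never "completes", so idle/server timeouts must not cut it off.
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/events")
events = [resp.readline().decode().strip() for _ in range(5)]
print(events)
server.shutdown()
```

If a proxy buffers the response or enforces a short server timeout, the client sees exactly the symptom described: the page loads but no cluster data ever arrives.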

hjacobs commented 4 years ago

@maprager would this HAProxy configuration for server-sent events (SSE) help? (/events is nothing more than an SSE stream.)
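For reference, the usual ingredients of an SSE-friendly HAProxy setup look roughly like the sketch below. The values are illustrative, not taken from the linked configuration, and directive availability may differ on a version as old as 1.5:

```
# Illustrative SSE-friendly settings; values are examples, not a verified
# kube-ops-view config.
defaults
  mode http
  option http-server-close   # close each transaction cleanly, no pipelining
  timeout connect 5s
  timeout client  30s
  # "timeout server" resets on traffic; kube-ops-view sends periodic
  # keep-alive events on /events, so this must exceed that interval
  timeout server  60s
  # long timeout for upgraded / long-lived streaming connections
  timeout tunnel  1h
```

The key point is that whichever timeout governs the open /events response must be longer than the interval between events, or the proxy will cut the stream.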

maprager commented 4 years ago

Unfortunately, this did not help... I have this in the haproxy defaults:

defaults
  mode http
  log global
  option httplog
  option tcplog
  option dontlognull
  option http-server-close
  option forwardfor except 127.0.0.0/8
  option redispatch
  retries 3
  timeout http-request 20s
  timeout queue 1m
  timeout connect 30s
  timeout client 50s
  timeout server 50s
  timeout check 20s
  timeout client-fin 30s
  maxconn 3000

with the backends configured thus (all backends look like this):

backend eks_ingress_be_kubedb
  balance roundrobin
  option httplog
  timeout tunnel 10h
  http-request set-header host kube-ops-view.router.blah.blah.blah
  server kubedb kube-ops-view.router.blah.blah.blah:80 weight 10 check port 80

megabreit commented 4 years ago

@maprager Does this happen after quite some time, or directly after starting the pod? It sounds a bit like issue #251 or #240. I see this on OpenShift 3.11, which ships with haproxy by default, albeit a newer version (1.8.17).

maprager commented 4 years ago

Hi, this happens immediately when trying to access the pod via haproxy; the pod itself starts up fine. I am deliberately running without the redis container.

maprager commented 4 years ago

Running directly via a tunnel works fine; only via haproxy it doesn't...

megabreit commented 4 years ago

Hm... then it's probably not the same issue. All I can say is that it's working with haproxy 1.8 in OpenShift. But there are reasons for running such an old version... hopefully.

maprager commented 4 years ago

Can you post the relevant portion of the 1.8 config, as I might upgrade the version here.


megabreit commented 4 years ago

I'm no haproxy expert; mine is generated by OpenShift. Hopefully I found all the necessary parts:

<snip>
global
  maxconn 20000

  daemon
  ca-base /etc/ssl
  crt-base /etc/ssl
  # TODO: Check if we can get reload to be faster by saving server state.
  # server-state-file /var/lib/haproxy/run/haproxy.state
  stats socket /var/lib/haproxy/run/haproxy.sock mode 600 level admin expose-fd listeners
  stats timeout 2m

  # Increase the default request size to be comparable to modern cloud load balancers (ALB: 64kb), affects
  # total memory use when large numbers of connections are open.
  tune.maxrewrite 8192
  tune.bufsize 32768

  # Prevent vulnerability to POODLE attacks
  ssl-default-bind-options no-sslv3

# The default cipher suite can be selected from the three sets recommended by https://wiki.mozilla.org/Security/Server_Side_TLS,
# or the user can provide one using the ROUTER_CIPHERS environment variable.
# By default when a cipher set is not provided, intermediate is used.
  # Intermediate cipher suite (default) from https://wiki.mozilla.org/Security/Server_Side_TLS
  tune.ssl.default-dh-param 2048
  ssl-default-bind-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS

defaults
  maxconn 20000

  # Add x-forwarded-for header.

  # To configure custom default errors, you can either uncomment the
  # line below (server ... 127.0.0.1:8080) and point it to your custom
  # backend service or alternatively, you can send a custom 503 error.
  #
  # server openshift_backend 127.0.0.1:8080
  errorfile 503 /var/lib/haproxy/conf/error-page-503.http

  timeout connect 5s
  timeout client 30s
  timeout client-fin 1s
  timeout server 30s
  timeout server-fin 1s
  timeout http-request 10s
  timeout http-keep-alive 300s

  # Long timeout for WebSocket connections.
  timeout tunnel 1h

frontend public

  bind :80
  mode http
  tcp-request inspect-delay 5s
  tcp-request content accept if HTTP
  monitor-uri /_______internal_router_healthz

  # Strip off Proxy headers to prevent HTTpoxy (https://httpoxy.org/)
  http-request del-header Proxy

  # DNS labels are case insensitive (RFC 4343), we need to convert the hostname into lowercase
  # before matching, or any requests containing uppercase characters will never match.
  http-request set-header Host %[req.hdr(Host),lower]

  # check if we need to redirect/force using https.
  acl secure_redirect base,map_reg(/var/lib/haproxy/conf/os_route_http_redirect.map) -m found
  redirect scheme https if secure_redirect

  use_backend %[base,map_reg(/var/lib/haproxy/conf/os_http_be.map)]

  default_backend openshift_default

# public ssl accepts all connections and isn't checking certificates yet certificates to use will be
# determined by the next backend in the chain which may be an app backend (passthrough termination) or a backend
# that terminates encryption in this router (edge)
frontend public_ssl

  bind :443
  tcp-request  inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }

  # if the connection is SNI and the route is a passthrough don't use the termination backend, just use the tcp backend
  # for the SNI case, we also need to compare it in case-insensitive mode (by converting it to lowercase) as RFC 4343 says
  acl sni req.ssl_sni -m found
  acl sni_passthrough req.ssl_sni,lower,map_reg(/var/lib/haproxy/conf/os_sni_passthrough.map) -m found
  use_backend %[req.ssl_sni,lower,map_reg(/var/lib/haproxy/conf/os_tcp_be.map)] if sni sni_passthrough

  # if the route is SNI and NOT passthrough enter the termination flow
  use_backend be_sni if sni

  # non SNI requests should enter a default termination backend rather than the custom cert SNI backend since it
  # will not be able to match a cert to an SNI host
  default_backend be_no_sni

# Plain http backend or backend with TLS terminated at the edge or a
# secure backend with re-encryption.
backend be_edge_http:ocp-ops-view:kube-ops-view
  mode http
  option redispatch
  option forwardfor
  balance leastconn

  timeout check 5000ms
  http-request set-header X-Forwarded-Host %[req.hdr(host)]
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto-Version h2 if { ssl_fc_alpn -i h2 }
  http-request add-header Forwarded for=%[src];host=%[req.hdr(host)];proto=%[req.hdr(X-Forwarded-Proto)];proto-version=%[req.hdr(X-Forwarded-Proto-Version)]
  cookie e1a16e62e3813c8e0c40999b324731ce insert indirect nocache httponly secure
  server pod:kube-ops-view-6cf6d4d6fb-9rxxx:kube-ops-view:10.x.x.x:8080 10.x.x.x:8080 cookie a67b9db2c182c5109f2999b487f568cf weight 256
<snip>

There is a hardware load balancer in front of OpenShift; haproxy is used as the ingress router, with 3 instances running. Self-signed certificates are used on the frontend, and ocp-ops-view is edge-terminated. Your config probably differs in certain parts.