
Istio integration #561

Closed irizzant closed 3 years ago

irizzant commented 4 years ago

Hello, I am running version haproxy-ingress-0.0.23 of this chart with Istio 1.5.1. I'm looking for a way to set up a north-south proxy backed by your HAProxy and an east-west one backed by Istio. The Nginx Ingress controller is able to handle this (see https://github.com/istio/istio/issues/7776) by:

  1. adding traffic.sidecar.istio.io/includeInboundPorts: "" to the nginx pod
  2. for each Ingress resource add:
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/upstream-vhost: your-app.the-namespace.svc.cluster.local

    which works like a charm in Istio and allows using mTLS between Nginx and the target pods. I found your https://haproxy-ingress.github.io/docs/configuration/keys/#service-upstream configuration, which is great and addresses the first point.

The problem is that service-upstream alone doesn't make HAProxy forward traffic through the Istio service mesh. I say this because as soon as I enforce mTLS between pods, the traffic drops and can't reach the Ingress destination Service anymore.

Any idea about how to mirror the Nginx configuration?

Attached is the haproxy config (haproxy.cfg.txt); the workload I'm trying to test is Kuard.

Here is the haproxy ConfigMap used:

  service-upstream: "true"
  timeout-client: 30m
  timeout-server: 30m
  use-proxy-protocol: "false"
jcmoraisjr commented 4 years ago

Hi, if I understood you correctly, upstream-vhost overrides the Host header of the incoming request before it is forwarded to the backend? Please confirm this behaviour and I'll have a look at it.

jcmoraisjr commented 4 years ago

I'll have a look at it.

In the meantime, give this custom backend config a chance:

  config-backend: http-request set-header host your-app.the-namespace.svc.cluster.local

This can be configured as a configmap option or as an ingress/service annotation (adding the annotation prefix).
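For example, as an Ingress annotation (a sketch, assuming the default ingress.kubernetes.io annotation prefix and a placeholder Service FQDN):

  metadata:
    annotations:
      ingress.kubernetes.io/config-backend: |
        http-request set-header host your-app.the-namespace.svc.cluster.local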

irizzant commented 4 years ago

Hi @jcmoraisjr, thank you. Yes, as reported here:

This configuration setting allows you to control the value for host in the following statement: proxy_set_header Host $host, which forms part of the location block. This is useful if you need to call the upstream server by something other than $host

so it overrides the Host header with the Kubernetes Service name.

It looks like the suggested config-backend did the trick; with mTLS enabled, kuard now reports:

GET / HTTP/1.1
Host: kuard-service.kuard.svc.cluster.local
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.5
Cache-Control: no-cache
Content-Length: 0
Pragma: no-cache
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:75.0) Gecko/20100101 Firefox/75.0
X-B3-Parentspanid: 98adaff568ebbe0c
X-B3-Sampled: 1
X-B3-Spanid: 15153fdc32125747
X-B3-Traceid: 2e18e1b39af2f5c898adaff568ebbe0c
X-Envoy-Internal: true
X-Forwarded-Client-Cert: By=spiffe://cluster.local/ns/kuard/sa/default;Hash=...;Subject="";URI=spiffe://cluster.local/ns/ingress/sa/ingress-haproxy-ingress
X-Forwarded-For: 172.18.0.1
X-Forwarded-Proto: http
X-Request-Id: 34d4cbd9-0abf-9a57-85f9-aeb059e03f2d

You can see for yourself that Host reports the Service name and X-Forwarded-Client-Cert reports the Istio-issued certificate.

Is there any way to make the HAProxy Ingress controller check a configuration variable, like istio-enabled or something like that, and set all of this automatically?

Also, I noticed that Service ports are named incorrectly compared to Istio requirements: https://github.com/helm/charts/blob/fc74eac0cc62612ed3841d7b491f241a8b9860b5/incubator/haproxy-ingress/templates/controller-service.yaml#L43

They are named {{ .port }}-http while they should be the opposite, http-{{ .port }}. See https://github.com/helm/charts/pull/22004
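For example, an Istio-compatible port entry puts the protocol prefix first (a sketch with a hypothetical port number):

  ports:
  - name: http-80   # Istio detects the protocol from the "<protocol>-" name prefix
    port: 80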

jcmoraisjr commented 4 years ago

Hi, first of all thanks for evaluating haproxy ingress.

I need to know a bit more about ingress controllers and service mesh integrations in order to build a decent abstraction. In the meantime this can be properly documented and globally configured as configmap options. Note that: 1) service-upstream and any other backend-scoped key can be used as a configmap option, see configmap; 2) config-frontend can also be used and it's preferred as a global configuration, because a global config-backend would still configure the same snippet on every single haproxy backend instead of a single-line snippet in the frontend.

I don't maintain the helm chart myself, and thanks for the PR! As an Istio user, would you say our 5min deployment (daemonset) should also be updated?

irizzant commented 4 years ago

Hi @jcmoraisjr, option 1) works for service-upstream, but putting the config-backend or config-frontend in the global configmap would be wrong. Kubernetes Ingresses all work by selecting a backend Service through the serviceName directive. Say that I deployed a Kuard plus a WordPress workload (just as an example). The two workloads would end up configured with two Ingress resources backed by two Services (one for each workload). The two Ingress objects would each need their own config-backend snippet, as sketched below.
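For illustration, with hypothetical Service names and namespaces, the two annotations might look like:

  # kuard Ingress
  ingress.kubernetes.io/config-backend: |
    http-request set-header host kuard-service.kuard.svc.cluster.local

  # wordpress Ingress
  ingress.kubernetes.io/config-backend: |
    http-request set-header host wordpress.wordpress.svc.cluster.local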

So I can't just put one config-backend for the whole Kubernetes cluster; I have to manually configure every Ingress resource. Or can you suggest a different way to list all the needed configuration in a ConfigMap?

About the daemonset, I'll try it and let you know.

jcmoraisjr commented 4 years ago

putting the config-backend or config-frontend in the global configmap would be wrong.

yeah, sure, I was missing the fact that you'll have a couple of services haproxy ingress should know about and send requests to.

What about an upstream-vhost config key which accepts vars? You'd globally configure something like %[service].%[namespace].svc.cluster.local, which would render to wordpress.ns.svc.cluster.local or kuard.ns.svc.cluster.local depending on the backend.

irizzant commented 4 years ago

What about an upstream-vhost config key which accepts vars? You'd globally configure something like %[service].%[namespace].svc.cluster.local, which would render to wordpress.ns.svc.cluster.local or kuard.ns.svc.cluster.local depending on the backend.

Could you please provide an example ConfigMap for this? I can't find references to this configuration in the chart doc. Moreover, how would you pass in the variable values?

jcmoraisjr commented 4 years ago

I can't find references to this configuration in the chart doc.

This is a draft, a proposal. Neither upstream-vhost nor the %[var] rendering exists yet. Moreover, the chart is a community contribution; I cannot maintain it myself. Always follow the haproxy ingress docs, which are the source of truth.

Could you please provide an example ConfigMap for this?

Below is a possible configuration once implemented. Note that this is just an ordinary config key which behaves pretty much like the nginx ingress one; I'm just enabling var declaration on top of it.

    upstream-vhost: %[service].%[namespace].svc.cluster.local

Moreover, how would you pass in the variable values?

Internal state: the controller knows the values when rendering the final configuration.

irizzant commented 4 years ago

upstream-vhost: %[service].%[namespace].svc.cluster.local looks fine to me, but I still don't get how it should be used.

Say I have Istio in my cluster and I want to enable this directive cluster-wide in the ConfigMap. I would expect to put something like upstream-vhost: enabled in the ConfigMap, which translates to upstream-vhost: %[service].%[namespace].svc.cluster.local in the actual haproxy config file. Is that what you meant? If yes, it's fine for me.

Moreover, I tested the 5 minutes deployment, and the DaemonSet version doesn't work with Istio because the manifest uses host networking:

    spec:
      hostNetwork: true

To make it work, it should use Kubernetes networking and be exposed as a classic Service (whose port names have to be compatible with Istio requirements, of course).
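A minimal sketch of such a Service (name, namespace, and labels are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: haproxy-ingress
  namespace: ingress
spec:
  selector:
    app: haproxy-ingress
  ports:
  # protocol-prefixed names so Istio can detect the protocol
  - name: http-80
    port: 80
    targetPort: 80
  - name: https-443
    port: 443
    targetPort: 443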

irizzant commented 4 years ago

Hi @jcmoraisjr, anything on the above? Also, I tried to deploy Kuard using your suggestion:

2) config-frontend can also be used and it's preferred as a global configuration, because a global config-backend would still configure the same snippet on every single haproxy backend instead of a single-line snippet in the frontend.

but this way the Kuard pod becomes unreachable, because the host header is not rewritten:

GET / HTTP/1.1
Host: kuard.k8sibm.gq
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.5
Cache-Control: max-age=0
Content-Length: 0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:75.0) Gecko/20100101 Firefox/75.0
X-B3-Parentspanid: 61750ffdff708c38
X-B3-Sampled: 1
X-B3-Spanid: 0c71447cfa03e9c9
X-B3-Traceid: 43cb11809e5ac83761750ffdff708c38
X-Envoy-Internal: true
X-Forwarded-For: 172.18.0.1
X-Forwarded-Proto: http
X-Request-Id: 589fa07e-b79e-9b9e-9b60-bbc28526f645

instead with config-backend:

GET / HTTP/1.1
Host: kuard-service.kuard.svc.cluster.local
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.5
Cache-Control: max-age=0
Content-Length: 0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:75.0) Gecko/20100101 Firefox/75.0
X-B3-Parentspanid: a79caa691de9558b
X-B3-Sampled: 1
X-B3-Spanid: d51d9b9813055ede
X-B3-Traceid: 6c2ffa4b16aa4fdea79caa691de9558b
X-Envoy-Internal: true
X-Forwarded-Client-Cert: By=spiffe://cluster.local/ns/kuard/sa/default;Hash=71b52f6a32916f4f0b49f670cfce2704a3e884203dab519ea2c43c9a298666b2;Subject="";URI=spiffe://cluster.local/ns/ingress/sa/ingress-haproxy-ingress
X-Forwarded-For: 172.18.0.1
X-Forwarded-Proto: http
X-Request-Id: 66a5b537-8cc6-96ae-b281-8ca04de0c689

See attached the working cfg with config-backend (cfg-working.txt) and the one not working with config-frontend (cfg-not-working.txt):

diff /tmp/cfg-working.txt /tmp/cfg-not-working.txt
42d41
<     http-request set-header host kuard-service.kuard.svc.cluster.local
jcmoraisjr commented 4 years ago

anything on the above?

Hi, I started a draft of this configuration but couldn't finish and prepare the PR yet due to other v0.11 development. I'll update you soon.

My understanding is that you can work around this with a config-backend snippet, right? You still need to configure it per backend, but this is the same limitation you'd have with nginx.

I tried to deploy Kuard using your suggestion ... but this way the Kuard pod becomes unreachable, because the host header is not rewritten:

The configuration wasn't in fact applied. I'd ask how you configured it, but I think this isn't needed anymore since this isn't an option; you'll need to use the backend configuration. Is that right?

irizzant commented 4 years ago

Hi, thanks for the updates.

My understanding is that you can work around this with a config-backend snippet, right? You still need to configure it per backend, but this is the same limitation you'd have with nginx.

that's correct, config-backend works fine and it's the same limitation you have with Nginx.

The configuration wasn't in fact applied. I'd ask how you configured it, but I think this isn't needed anymore since this isn't an option; you'll need to use the backend configuration. Is that right?

Correct. Anyway, for the sake of curiosity, I deployed haproxy using the following ConfigMap in minikube:

  healthz-port: "10253"
  service-upstream: "true"
  stats-port: "1936"
  timeout-client: 30m
  timeout-server: 30m
  use-proxy-protocol: "false"

and the following Helm chart options:

defaultBackend:
  enabled: true
  replicaCount: 2
  podAnnotations:
    "traffic.sidecar.istio.io/includeInboundPorts": ""
controller:
  podAnnotations:
    "traffic.sidecar.istio.io/includeInboundPorts": ""
  config:
    use-proxy-protocol: "false"
    service-upstream: "true"
    timeout-server: "30m"
    timeout-client: "30m"
  metrics:
    enabled: true
  replicaCount: 2
  podAffinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: haproxy-ingress-controller
              component: controller
          topologyKey: failure-domain.beta.kubernetes.io/zone
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: haproxy-ingress-controller
              component: controller
          topologyKey: kubernetes.io/hostname

Then you can deploy Kuard with the following manifest:

cat << EOF | kubectl -n kuard apply -f-
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuard-deployment
  labels:
    app: kuard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kuard
  template:
    metadata:
      labels:
        app: kuard
    spec:
      containers:
        - image: gcr.io/kuar-demo/kuard-amd64:1
          name: kuard
          ports:
            - containerPort: 8080
              name: http
            - containerPort: 80
              name: http-2
---
apiVersion: v1
kind: Service
metadata:
  name: kuard-service
spec:
  selector:
    app: kuard
  ports:
  - name: http
    port: 8080
  - name: http-2
    port: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kuard
  namespace: kuard
  annotations:
    "ingress.kubernetes.io/config-frontend": "http-request set-header host kuard-service.kuard.svc.cluster.local"
spec:
  rules:
  - host: kuard.k8sibm.gq
    http:
      paths:
      - path: /
        backend:
          serviceName: kuard-service
          servicePort: http
EOF

This is enough to make the host header wrong: Host: kuard.k8sibm.gq. Revert to config-backend and you'll see the host is right.
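For reference, the working variant from earlier in this thread keeps the same manifest but swaps the annotation to the backend snippet:

  "ingress.kubernetes.io/config-backend": "http-request set-header host kuard-service.kuard.svc.cluster.local"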

jcmoraisjr commented 4 years ago

This is enough to make the host header wrong: Host: kuard.k8sibm.gq.

You're using config-frontend as an annotation, but it's in fact a configmap option. See the new doc and also the old single-page doc describing all configuration snippet options.
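A minimal sketch of the ConfigMap form (the header set here is just a hypothetical placeholder to show where the snippet lives):

  config-frontend: |
    http-request set-header X-Ingress haproxy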

irizzant commented 4 years ago

From the new doc I can see there is an example using an annotation:

Annotation:

annotations:
  ingress.kubernetes.io/config-backend: |
    acl bar-url path /bar
    http-request deny if bar-url

but the doc is not clear with respect to where each option is allowed to be put; since there is an annotation example, one could think that each option can be specified as such.

Anyway, thank you for clarifying. From what you say, we can exclude config-frontend as an option, since you'd still need to specify the FQDN of each Ingress resource.

jcmoraisjr commented 4 years ago

Hi, this issue was missing an update: v0.11 now has the headers config key, doc here. Does this feature help here?
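Assuming the %[service] and %[namespace] placeholders work the same way as in the earlier upstream-vhost proposal (check the linked doc), a global ConfigMap entry might look like:

  headers: |
    host: %[service].%[namespace].svc.cluster.local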

github-actions[bot] commented 4 years ago

This issue got stale and will be closed in 7 days.

irizzant commented 3 years ago

Hi @jcmoraisjr, I tried the following Helm values (the haproxy ConfigMap entries are under controller.config):

defaultBackend:
  enabled: true
  replicaCount: 2
  podAnnotations:
    "traffic.sidecar.istio.io/includeInboundPorts": ""
controller:
  podAnnotations:
    "traffic.sidecar.istio.io/includeInboundPorts": ""
  config:
    use-proxy-protocol: "false"
    service-upstream: "true"
    timeout-server: "30m"
    timeout-client: "30m"
  metrics:
    enabled: true
  replicaCount: 2
  podAffinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: haproxy-ingress-controller
              component: controller
          topologyKey: failure-domain.beta.kubernetes.io/zone
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: haproxy-ingress-controller
              component: controller
          topologyKey: kubernetes.io/hostname

and Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  namespace: kuard
  annotations:
    ingress.kubernetes.io/headers: |
      host: %[service].%[namespace].svc.cluster.local
spec:
  ingressClassName: haproxy
  rules:
  - host: kuard.172.17.0.2.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kuard-service
            port:
              number: 80

and I managed to connect to kuard, so the headers configuration helped indeed. Sorry for the late reply.