Kong / kubernetes-ingress-controller

:gorilla: Kong for Kubernetes: The official Ingress Controller for Kubernetes.
https://docs.konghq.com/kubernetes-ingress-controller/
Apache License 2.0

Kong Ingress Controller NLB does not work with Preserving Client IP Address #1135

Closed. vothanhbinhlt closed this issue 3 years ago.

vothanhbinhlt commented 3 years ago

Summary

With the AWS NLB annotations and PROXY protocol enabled to preserve the client IP address, Kong's proxy container continuously logs "broken header ... while reading PROXY protocol" errors and the client IP is not preserved.

Kong Helm chart version: 1.15.0 (Kong Ingress Controller image 1.1)

Kong or Kong Enterprise version: Kong 2.3 (open-source image kong:2.3)

Kubernetes version

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Environment

What happened

Hello everyone! I am using EKS with the Kong Ingress Controller and have enabled preserving the client IP address, following this article: https://docs.konghq.com/kubernetes-ingress-controller/1.1.x/guides/preserve-client-ip/.

This is my values.yaml file:

autoscaling:
  enabled: "true"
env:
  database: postgres
  pg_database: kong
  pg_host: kong-database.devpanel.svc.cluster.local
  pg_password: xxxx
  pg_user: kong
  prefix: /kong_prefix/
  proxy_listen: 0.0.0.0:8000 proxy_protocol, 0.0.0.0:8443 ssl proxy_protocol
  real_ip_header: proxy_protocol
  trusted_ips: 0.0.0.0/0,::/0
ingressController:
  enabled: "true"
  installCRDs: "false"
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
nodeSelector:
  groupType: on-demand
proxy:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
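
With proxy_protocol on both proxy listeners, Kong rejects any connection that does not start with a PROXY protocol header, so it is worth verifying that the NLB created from these annotations really has PROXY protocol v2 enabled on its target groups. A rough check, assuming the AWS CLI is configured for the cluster's account and region; the Service name and namespace below match this deployment, and the hostname and ARNs are placeholders:

# Hostname of the NLB that fronts the proxy Service
kubectl get svc kong-controlle-kong-proxy -n devpanel \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# Resolve the load balancer and its target groups from that hostname
aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[?DNSName==`<nlb-hostname>`].LoadBalancerArn'
aws elbv2 describe-target-groups --load-balancer-arn <load-balancer-arn>

# proxy_protocol_v2.enabled must be "true" on every target group,
# otherwise Kong sees a "broken header" on each connection
aws elbv2 describe-target-group-attributes \
  --target-group-arn <target-group-arn> \
  --query 'Attributes[?Key==`proxy_protocol_v2.enabled`]'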

This is the pod created in the EKS cluster:

Name:         kong-controlle-kong-7996ccf966-5knk5
Namespace:    devpanel
Priority:     0
Node:         ip-10-0-14-46.us-west-2.compute.internal/10.0.14.46
Start Time:   Fri, 26 Mar 2021 08:55:10 +0000
Labels:       app.kubernetes.io/component=app
              app.kubernetes.io/instance=kong-controlle
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=kong
              app.kubernetes.io/version=2.3
              helm.sh/chart=kong-1.15.0
              pod-template-hash=7996ccf966
Annotations:  kubernetes.io/psp: eks.privileged
Status:       Running
IP:           10.0.8.214
IPs:
  IP:           10.0.8.214
Controlled By:  ReplicaSet/kong-controlle-kong-7996ccf966
Init Containers:
  wait-for-db:
    Container ID:  docker://91f6006bbcd14e7f45220aef87d9e785dbeb81bae38f09bea0ef7b0dbcb6fee2
    Image:         kong:2.3
    Image ID:      docker-pullable://kong@sha256:b6df904a47c82dd0701dc13f65b6266908cbeb3bbeec8e0579cfbcc6fd4e791e
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      until kong start; do echo 'waiting for db'; sleep 1; done; kong stop; rm -fv '/kong_prefix//stream_rpc.sock'
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 26 Mar 2021 08:55:11 +0000
      Finished:     Fri, 26 Mar 2021 08:55:12 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      KONG_ADMIN_ACCESS_LOG:        /dev/stdout
      KONG_ADMIN_ERROR_LOG:         /dev/stderr
      KONG_ADMIN_GUI_ACCESS_LOG:    /dev/stdout
      KONG_ADMIN_GUI_ERROR_LOG:     /dev/stderr
      KONG_ADMIN_LISTEN:            127.0.0.1:8444 http2 ssl
      KONG_CLUSTER_LISTEN:          off
      KONG_DATABASE:                postgres
      KONG_KIC:                     on
      KONG_LUA_PACKAGE_PATH:        /opt/?.lua;/opt/?/init.lua;;
      KONG_NGINX_WORKER_PROCESSES:  2
      KONG_PG_DATABASE:             kong
      KONG_PG_HOST:                 kong-database.devpanel.svc.cluster.local
      KONG_PG_PASSWORD:             xxxx
      KONG_PG_USER:                 kong
      KONG_PLUGINS:                 bundled
      KONG_PORTAL_API_ACCESS_LOG:   /dev/stdout
      KONG_PORTAL_API_ERROR_LOG:    /dev/stderr
      KONG_PORT_MAPS:               80:8000, 443:8443
      KONG_PREFIX:                  /kong_prefix/
      KONG_PROXY_ACCESS_LOG:        /dev/stdout
      KONG_PROXY_ERROR_LOG:         /dev/stderr
      KONG_PROXY_LISTEN:            0.0.0.0:8000 proxy_protocol, 0.0.0.0:8443 ssl proxy_protocol
      KONG_REAL_IP_HEADER:          proxy_protocol
      KONG_STATUS_LISTEN:           0.0.0.0:8100
      KONG_STREAM_LISTEN:           off
      KONG_TRUSTED_IPS:             0.0.0.0/0,::/0
    Mounts:
      /kong_prefix/ from kong-controlle-kong-prefix-dir (rw)
      /tmp from kong-controlle-kong-tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kong-controlle-kong-token-b9p7g (ro)
Containers:
  ingress-controller:
    Container ID:  docker://444c550f798c33909a5ae8239e84acfe3493e167c47c93959118390b056df145
    Image:         kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:1.1
    Image ID:      docker-pullable://kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller@sha256:4a4a03c9628b9cf499b85cc34dc35ea832ba0f801b9462fe73b6a8d294a07cf0
    Port:          <none>
    Host Port:     <none>
    Args:
      /kong-ingress-controller
    State:          Running
      Started:      Fri, 26 Mar 2021 08:55:13 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   256Mi
    Liveness:   http-get http://:10254/healthz delay=5s timeout=5s period=10s #success=1 #failure=3
    Readiness:  http-get http://:10254/healthz delay=5s timeout=5s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:                               kong-controlle-kong-7996ccf966-5knk5 (v1:metadata.name)
      POD_NAMESPACE:                          devpanel (v1:metadata.namespace)
      CONTROLLER_ELECTION_ID:                 kong-ingress-controller-leader-kong
      CONTROLLER_INGRESS_CLASS:               kong
      CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY:  true
      CONTROLLER_KONG_ADMIN_URL:              https://localhost:8444
      CONTROLLER_PUBLISH_SERVICE:             devpanel/kong-controlle-kong-proxy
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kong-controlle-kong-token-b9p7g (ro)
  proxy:
    Container ID:   docker://909b26117247e4446bfac4fef14bb1a0bc6baaba7f09a785fd9e757c9aad398e
    Image:          kong:2.3
    Image ID:       docker-pullable://kong@sha256:b6df904a47c82dd0701dc13f65b6266908cbeb3bbeec8e0579cfbcc6fd4e791e
    Ports:          8000/TCP, 8443/TCP, 8100/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP
    State:          Running
      Started:      Fri, 26 Mar 2021 08:55:14 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:status/status delay=5s timeout=5s period=10s #success=1 #failure=3
    Readiness:      http-get http://:status/status delay=5s timeout=5s period=10s #success=1 #failure=3
    Environment:
      KONG_ADMIN_ACCESS_LOG:        /dev/stdout
      KONG_ADMIN_ERROR_LOG:         /dev/stderr
      KONG_ADMIN_GUI_ACCESS_LOG:    /dev/stdout
      KONG_ADMIN_GUI_ERROR_LOG:     /dev/stderr
      KONG_ADMIN_LISTEN:            127.0.0.1:8444 http2 ssl
      KONG_CLUSTER_LISTEN:          off
      KONG_DATABASE:                postgres
      KONG_KIC:                     on
      KONG_LUA_PACKAGE_PATH:        /opt/?.lua;/opt/?/init.lua;;
      KONG_NGINX_WORKER_PROCESSES:  2
      KONG_PG_DATABASE:             kong
      KONG_PG_HOST:                 kong-database.devpanel.svc.cluster.local
      KONG_PG_PASSWORD:             xxxx
      KONG_PG_USER:                 kong
      KONG_PLUGINS:                 bundled
      KONG_PORTAL_API_ACCESS_LOG:   /dev/stdout
      KONG_PORTAL_API_ERROR_LOG:    /dev/stderr
      KONG_PORT_MAPS:               80:8000, 443:8443
      KONG_PREFIX:                  /kong_prefix/
      KONG_PROXY_ACCESS_LOG:        /dev/stdout
      KONG_PROXY_ERROR_LOG:         /dev/stderr
      KONG_PROXY_LISTEN:            0.0.0.0:8000 proxy_protocol, 0.0.0.0:8443 ssl proxy_protocol
      KONG_REAL_IP_HEADER:          proxy_protocol
      KONG_STATUS_LISTEN:           0.0.0.0:8100
      KONG_STREAM_LISTEN:           off
      KONG_TRUSTED_IPS:             0.0.0.0/0,::/0
      KONG_NGINX_DAEMON:            off

But I see these errors when I watch the proxy container's logs:

��/5�" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:03 [error] 23#0: *12902 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:03 [error] 23#0: *12906 broken header: "" while reading PROXY protocol, client: 10.0.14.46, server: 0.0.0.0:8443
2021/03/26 09:12:05 [error] 23#0: *12921 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:07 [error] 23#0: *12941 broken header: "" while reading PROXY protocol, client: 10.0.3.78, server: 0.0.0.0:8443
2021/03/26 09:12:07 [error] 23#0: *12942 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:07 [error] 23#0: *12946 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:09 [error] 23#0: *12952 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:09 [error] 23#0: *12953 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:09 [error] 23#0: *12958 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:09 [error] 23#0: *12959 broken header: "" while reading PROXY protocol, client: 10.0.3.78, server: 0.0.0.0:8443
2021/03/26 09:12:10 [error] 23#0: *12967 broken header: "" while reading PROXY protocol, client: 10.0.3.78, server: 0.0.0.0:8443
2021/03/26 09:12:10 [error] 23#0: *12970 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:12 [error] 23#0: *12991 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:12 [error] 23#0: *12992 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:13 [error] 23#0: *12998 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:13 [error] 23#0: *12999 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:13 [error] 23#0: *13000 broken header: "" while reading PROXY protocol, client: 10.0.3.78, server: 0.0.0.0:8443
2021/03/26 09:12:14 [error] 23#0: *13005 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:17 [error] 23#0: *13040 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:18 [error] 23#0: *13055 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:18 [error] 23#0: *13057 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:18 [error] 23#0: *13060 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:18 [error] 23#0: *13061 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:19 [error] 23#0: *13066 broken header: "" while reading PROXY protocol, client: 10.0.14.46, server: 0.0.0.0:8443
2021/03/26 09:12:23 [error] 23#0: *13109 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:24 [error] 23#0: *13114 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:25 [error] 23#0: *13128 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:27 [error] 23#0: *13144 broken header: "" while reading PROXY protocol, client: 10.0.9.13, server: 0.0.0.0:8443
2021/03/26 09:12:27 [error] 23#0: *13147 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:28 [error] 23#0: *13153 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:28 [error] 23#0: *13154 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:29 [error] 23#0: *13161 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:30 [error] 23#0: *13174 broken header: "" while reading PROXY protocol, client: 10.0.14.46, server: 0.0.0.0:8443
2021/03/26 09:12:32 [error] 23#0: *13194 broken header: "" while reading PROXY protocol, client: 10.0.6.46, server: 0.0.0.0:8443
2021/03/26 09:12:32 [error] 23#0: *13198 broken header: "" while reading PROXY protocol, client: 10.0.14.46, server: 0.0.0.0:8443
2021/03/26 09:12:33 [error] 23#0: *13201 broken header: "" while reading PROXY protocol, client: 10.0.1.183, server: 0.0.0.0:8443
2021/03/26 09:12:33 [error] 23#0: *13203 broken header: "��Q@xޤ=�
A����V�MmQi/`�k��V��� (4'�,BL|�:]1��qۡ0��@ev�J)���y0 �/�0�+�,̨̩��  ��

Expected behavior

The proxy should accept the PROXY protocol header from the NLB and see the real client IP. Can you tell me where my configuration is wrong?

vothanhbinhlt commented 3 years ago

up

vothanhbinhlt commented 3 years ago

I fixed the problem. I changed:

...
env:
  trusted_ips: ${vpc_cidr}
vothanhbinhlt commented 3 years ago

Sorry, I need to correct myself: the problem is not solved. I created a new EKS cluster and used the same configuration as above, including trusted_ips: ${vpc_cidr}, but it does not work.

framled commented 3 years ago

@vothanhbinhlt Could you share your service manifest?

service.spec.externalTrafficPolicy must be set to Local

https://docs.konghq.com/kubernetes-ingress-controller/1.1.x/guides/preserve-client-ip/#externaltrafficpolicy-local
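
For reference, the values.yaml in this issue never sets this, so the proxy Service gets the Kubernetes default of externalTrafficPolicy: Cluster. A minimal sketch of the change, assuming the deployed kong chart version exposes a proxy.externalTrafficPolicy value (if it does not, set spec.externalTrafficPolicy on the generated kong-controlle-kong-proxy Service directly):

proxy:
  # Only meaningful for a Service of type LoadBalancer; with Local,
  # only nodes that run a Kong proxy pod pass the NLB health checks.
  externalTrafficPolicy: Local
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"

The annotations are unchanged from the issue; only the externalTrafficPolicy line is new.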

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.