Kong / kubernetes-ingress-controller

:gorilla: Kong for Kubernetes: The official Ingress Controller for Kubernetes.
https://docs.konghq.com/kubernetes-ingress-controller/
Apache License 2.0

Unable to change Port and expose it on hostPort #1047

Closed: Seljuke closed this issue 3 years ago

Seljuke commented 3 years ago

Summary

Trying to set up Kong as a DaemonSet and publish the changed proxy ports via hostPort on every worker node.

Kong Ingress controller version: 1.1

Kong or Kong Enterprise version: 2.2

Kubernetes version

Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:15:20Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

Environment

What happened

Tried to convert the Kong DB-less all-in-one manifest from a Deployment to a DaemonSet and publish the changed proxy ports through hostPort. The pods are stuck in the CrashLoopBackOff state.

Expected behavior

Running pods

Steps To Reproduce

Here are the changed parts of the manifest file:

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: kong-proxy
  namespace: kong
spec:
  ports:
  - name: proxy
    port: 80
    protocol: TCP
    targetPort: 80
  - name: proxy-ssl
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: ingress-kong
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: kong-validation-webhook
  namespace: kong
spec:
  ports:
  - name: webhook
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app: ingress-kong
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: ingress-kong
  name: ingress-kong
  namespace: kong
spec:
  selector:
    matchLabels:
      app: ingress-kong
  template:
    metadata:
      annotations:
        kuma.io/gateway: enabled
        prometheus.io/port: "8100"
        prometheus.io/scrape: "true"
        traffic.sidecar.istio.io/includeInboundPorts: ""
      labels:
        app: ingress-kong
    spec:
      containers:
      - env:
        - name: KONG_PROXY_LISTEN
          value: 0.0.0.0:80, 0.0.0.0:443 ssl http2
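          # changed from the manifest's default listen of 0.0.0.0:8000, 0.0.0.0:8443 ssl http2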
        - name: KONG_ADMIN_LISTEN
          value: 127.0.0.1:8444 ssl
        - name: KONG_STATUS_LISTEN
          value: 0.0.0.0:8100
        - name: KONG_DATABASE
          value: "off"
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "2"
        - name: KONG_ADMIN_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_ADMIN_ERROR_LOG
          value: /dev/stderr
        - name: KONG_PROXY_ERROR_LOG
          value: /dev/stderr
        image: kong:2.2
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - kong quit
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: 8100
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: proxy
        ports:
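        # hostPort entries added so each worker node exposes the proxy directly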
        - containerPort: 80
          hostPort: 80
          name: proxy
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          name: proxy-ssl
          protocol: TCP
        - containerPort: 8100
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: 8100
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
      - env:
        - name: CONTROLLER_KONG_ADMIN_URL
          value: https://127.0.0.1:8444
        - name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
          value: "true"
        - name: CONTROLLER_PUBLISH_SERVICE
          value: kong/kong-proxy
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:1.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: ingress-controller
        ports:
        - containerPort: 8080
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
      serviceAccountName: kong-serviceaccount

When I applied the manifest, the ingress-controller containers crashed with the error below:

time="2021-02-24T13:40:10Z" level=info msg="retry 1 to fetch metadata from kong: making HTTP request: Get \"https://127.0.0.1:8444/\": dial tcp 127.0.0.1:8444: connect: connection refused"

Just converting the manifest to a DaemonSet and publishing ports 8000 and 8443 with hostPort works fine; see the sketch below.
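
For comparison, a possible variant (sketch only, my assumption rather than anything from the stock manifest): a hostPort is not required to match its containerPort, so Kong could keep the default 8000/8443 listeners inside the container while each node still exposes 80 and 443:

        - name: KONG_PROXY_LISTEN
          value: 0.0.0.0:8000, 0.0.0.0:8443 ssl http2
        ...
        ports:
        # map the privileged host ports to Kong's unprivileged container ports
        - containerPort: 8000
          hostPort: 80
          name: proxy
          protocol: TCP
        - containerPort: 8443
          hostPort: 443
          name: proxy-ssl
          protocol: TCP

With that layout, the kong-proxy Service's targetPort values would also need to go back to 8000 and 8443.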

rainest commented 3 years ago

What do your Kong proxy container logs show?

My best guess is that it's unable to bind to the privileged ports 80 and 443, which prevents it from starting at all, so it never binds the admin port either.

You likely need to add a PodSecurityPolicy with a HostPortRange that allows them, and may need to configure other settings at the cluster level to allow this.
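
A minimal sketch of such a policy, assuming PodSecurityPolicy admission is actually enabled in the cluster (the policy name is illustrative, and kong-serviceaccount would additionally need RBAC permission to use it):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: kong-hostports          # illustrative name
spec:
  privileged: false
  # allow the pod to bind host ports 80 and 443
  hostPorts:
  - min: 80
    max: 80
  - min: 443
    max: 443
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'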

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.