nokia / kong-oidc

OIDC plugin for Kong
Apache License 2.0

add OIDC to Kong in K8s #183

Open · yasir2000 opened this issue 3 years ago

yasir2000 commented 3 years ago

How do I add OIDC from source to a running Kong pod in K8s?

luarocks install kong-oidc

ghunteranderson commented 3 years ago

Hey @yasir2000,

Just thought I'd share a couple of resources I found helpful for this.

  1. The Kong documentation for "Installing the plugin" has an option for manually installing the plugin from source. It involves copying the source into the image and then adding the plugin location to the LUA_PATH environment variable. What it doesn't mention is installing the dependencies required by the plugin.
  2. In this repo, the Dockerfile used for tests copies the repo into the image, sets the LUA_PATH variable, and installs the dependencies. A few of those dependencies are just for tests; you shouldn't need them.

When testing, I used a Dockerfile like this, but you may want to tweak the COPY command so only the required files are included.

ARG KONG_BASE_TAG
FROM kong${KONG_BASE_TAG}

# LuaRocks needs root; we switch back to kong below
USER root

# Make the plugin source visible to Kong's Lua runtime
ENV LUA_PATH ${LUA_PATH};/usr/local/kong-oidc/?.lua;;
RUN luarocks install lua-resty-openidc 1.6.0
COPY . /usr/local/kong-oidc

USER kong

I've only tested this in vanilla Docker, but I don't expect you'll have problems in K8s.

yasir2000 commented 3 years ago

My question is how to follow up with this deployment inside a running pod container. Do I have to rerun the Dockerfile with new commands, or can I do this while the container is running? My plugins path is:

/opt/bitnami/kong/openresty/luajit/share/lua/5.1/kong/plugins

ghunteranderson commented 3 years ago

Hmm... that's a good question. If you're looking for a way to add the plugin without restarting Kong, I'm not entirely sure it's possible. Kong loads the plugins listed in KONG_PLUGINS on startup.

However, if you're OK with restarting the instance, you might be able to get this to work with just the regular Kong image. Here are four things you'd need to do (a rough sketch of the first three follows the list).

  1. Add the source code to the container's file system. I'd recommend using some kind of volume or mount so the plugin's source code is persistent.
  2. Set the environment variable LUA_PATH to include the plugin. Hopefully you can set this in your deployment configuration.
  3. Set the environment variable KONG_PLUGINS to include oidc, for example KONG_PLUGINS=bundled,oidc. Hopefully you can do this in your deployment configuration as well.
  4. This one seems a little trickier to me. You'd need to run luarocks install lua-resty-openidc 1.6.0 before the Kong process starts. Without adding a new startup script, I'm guessing your best bet will be to modify the startup command. However, the newer Kong images run as user kong instead of root, and the LuaRocks installation requires root access; that's why you see me swapping users in the Dockerfile. There are probably ways around this by setting the user to root in your deployment config and then using sudo -u kong ... to run docker-entrypoint.sh as kong. I use Alpine, which doesn't include sudo, so that would be another installation that has to happen. Alternatively, there might be an option for mounting the lua-resty-openidc dependency as well, but that's uncharted territory for me.
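
For points 1 to 3, a rough sketch of what that could look like in a plain deployment spec (volume and claim names like kong-oidc-src are placeholders, not something from an actual chart):

containers:
  - name: kong
    env:
      # point 2: make the plugin source visible to the Lua runtime
      - name: LUA_PATH
        value: "/usr/local/kong-oidc/?.lua;;"
      # point 3: tell Kong to load the plugin at startup
      - name: KONG_PLUGINS
        value: "bundled,oidc"
    volumeMounts:
      # point 1: plugin source delivered through a persistent volume
      - name: kong-oidc-src
        mountPath: /usr/local/kong-oidc
volumes:
  - name: kong-oidc-src
    persistentVolumeClaim:
      claimName: kong-oidc-src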

Hopefully there's something helpful here for you. Cheers 🙂

yasir2000 commented 3 years ago

The problem, @ghunteranderson, is that I'm using the Bitnami image and Helm charts (https://bitnami.com/stack/kong/helm), which I had to redeploy to set the user to 0 (root). Now that I have a new deployment with root access, I tried to install on the pod directly, but that didn't pull in the new plugins even after restarting Kong. So I managed to create a PV; now I have to create a mount point, bind it to the pod via a configMap, put the source inside it, and apply the deployment with kubectl -f. My image already has the dependencies (lua-resty, etc.).

ghunteranderson commented 3 years ago

Thanks for sending the helm chart over, @yasir2000. If you can give me a little time to test out a few approaches with that chart, I'll try to get something working.

yasir2000 commented 3 years ago

I've been on this task for 14 days now, in fact :) so the sooner this comes to a happy ending, the louder I'll cheer. It's nice, tricky stuff. I've tried so many ways to inject this config:

So the next step is to follow your steps above to get an image up with the source in a separate path (right now it's inside /opt/bitnami/kong/openresty/luajit/share/lua/5.1/kong/plugins, and I've edited the plugins section of /opt/bitnami/kong/conf/kong.conf).
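
For reference, the kong.conf edit being described here is presumably the plugins directive, something like:

# /opt/bitnami/kong/conf/kong.conf (assumed edit)
plugins = bundled,oidc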

I'll say HOORAY once it shows up in the listed plugins.

yasir2000 commented 3 years ago

Now I'm halfway there, @ghunteranderson. I created, mounted, and bound a PV with the source inside it (/mnt/data/kong-config/kong-oidc-master/). Next is to run luarocks install lua-resty-openidc 1.6.0 inside that pod, but without rebuilding the image with a new Dockerfile.
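
A hedged sketch of that in-place install (pod and container names are placeholders; this needs root inside the container and is lost whenever the container is recreated, unless LuaRocks ends up writing to a mounted path):

# run the install inside the running Kong container
kubectl exec -n kong <kong-pod-name> -c kong -- luarocks install lua-resty-openidc 1.6.0

# Kong only reads KONG_PLUGINS at startup, so the process still has
# to be reloaded (or the pod restarted) afterwards
kubectl exec -n kong <kong-pod-name> -c kong -- kong reload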

yasir2000 commented 3 years ago

This is my existing deployment yaml:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kong
  namespace: kong
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: kong
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kong
    helm.sh/chart: kong-3.0.1
  annotations:
    deployment.kubernetes.io/revision: '1'
    meta.helm.sh/release-name: kong
    meta.helm.sh/release-namespace: kong
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/component: server
      app.kubernetes.io/instance: kong
      app.kubernetes.io/name: kong
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: server
        app.kubernetes.io/instance: kong
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: kong
        helm.sh/chart: kong-3.0.1
      annotations:
        checksum/configmap-kong: 0e4f2471170bd86586fd5ec5ad2593bff656df939f49ec74239e032758fcea07
    spec:
      volumes:
        - name: health
          configMap:
            name: kong-scripts
            defaultMode: 493
      containers:
        - name: kong
          image: 'docker.io/bitnami/kong:2.2.0-debian-10-r33'
          ports:
            - name: http-proxy
              containerPort: 8000
              protocol: TCP
            - name: https-proxy
              containerPort: 8443
              protocol: TCP
            - name: http-admin
              containerPort: 8001
              protocol: TCP
            - name: https-admin
              containerPort: 8444
              protocol: TCP
          env:
            - name: KONG_ADMIN_LISTEN_ADDRESS
              value: 0.0.0.0
            - name: KONG_DATABASE
              value: postgres
            - name: KONG_PG_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: kong-postgresql
                  key: postgresql-password
            - name: KONG_PG_HOST
              value: kong-postgresql
            - name: KONG_PG_USER
              value: kong
          resources: {}
          volumeMounts:
            - name: health
              mountPath: /health
          livenessProbe:
            exec:
              command:
                - /bin/bash
                - '-ec'
                - /health/kong-container-health.sh
            initialDelaySeconds: 120
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - /bin/bash
                - '-ec'
                - /health/kong-container-health.sh
            initialDelaySeconds: 30
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 6
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - '-c'
                  - kong quit
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 0
            runAsNonRoot: false
        - name: kong-ingress-controller
          image: 'docker.io/bitnami/kong-ingress-controller:0.10.0-debian-10-r68'
          command:
            - bash
            - '-ec'
            - /health/ingress-container-start.sh
          ports:
            - name: http-health
              containerPort: 10254
              protocol: TCP
          env:
            - name: CONTROLLER_KONG_ADMIN_URL
              value: 'http://127.0.0.1:8001'
            - name: CONTROLLER_PUBLISH_SERVICE
              value: kong/kong
            - name: CONTROLLER_INGRESS_CLASS
              value: kong
            - name: CONTROLLER_ELECTION_ID
              value: kong-ingress-controller-leader-kong
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          resources: {}
          volumeMounts:
            - name: health
              mountPath: /health
          livenessProbe:
            httpGet:
              path: /healthz
              port: http-health
              scheme: HTTP
            initialDelaySeconds: 120
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /healthz
              port: http-health
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 6
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 0
            runAsNonRoot: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: kong
      serviceAccount: kong
      securityContext: {}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/component: server
                    app.kubernetes.io/instance: kong
                    app.kubernetes.io/name: kong
                namespaces:
                  - kong
                topologyKey: kubernetes.io/hostname
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
yasir2000 commented 3 years ago

I have a useful configMap [screenshot].

Could it be useful to add a new script from the repo's ci directory? https://github.com/nokia/kong-oidc/tree/master/ci
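
If it helps, a script could be delivered the same way the chart already delivers its health scripts, i.e. through a ConfigMap mounted into the pod. A minimal sketch (the name and script body are illustrative only, not taken from the chart):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kong-oidc-setup
  namespace: kong
data:
  install-oidc.sh: |
    #!/bin/bash
    set -e
    # illustrative: install the plugin dependency before Kong starts
    luarocks install lua-resty-openidc 1.6.0

Mounted with defaultMode: 493 like the existing kong-scripts volume, it would be executable and could be invoked from a modified startup command.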

chance2021 commented 1 year ago

I just customized the image based on the bitnami/kong image and use it for my k8s Kong. It works fine, and I can add the oidc plugin via Konga.

Customized Dockerfile

FROM docker.io/bitnami/minideb:bullseye
ENV HOME="/" \
    OS_ARCH="amd64" \
    OS_FLAVOUR="debian-11" \
    OS_NAME="linux"

COPY prebuildfs /
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# Install required system packages and dependencies
RUN install_packages acl ca-certificates curl gzip libc6 libcrypt1 libgcc-s1 libpcre3 libprotobuf-dev libssl1.1 libyaml-0-2 perl procps tar zlib1g zlib1g-dev
RUN . /opt/bitnami/scripts/libcomponent.sh && component_unpack "gosu" "1.14.0-151" --checksum 089bb11a3bc6031c5a91ab5f9534e9e7e41b928d10d72a3986f16bb61d3a9900
RUN . /opt/bitnami/scripts/libcomponent.sh && component_unpack "kong" "2.8.1-157" --checksum 7126cf210476261b1bb34568006637e9fea9106c8bdafd634e11de46e74563d2
RUN apt-get update && apt-get upgrade -y
RUN apt-get install unzip -y
RUN chmod g+rwX /opt/bitnami

COPY rootfs /
RUN /opt/bitnami/scripts/kong/postunpack.sh
ENV APP_VERSION="2.8.1" \
    BITNAMI_APP_NAME="kong" \
    PATH="/opt/bitnami/common/bin:/opt/bitnami/kong/bin:/opt/bitnami/kong/openresty/bin:/opt/bitnami/kong/openresty/luajit/bin:/opt/bitnami/kong/openresty/nginx/sbin:$PATH"

# install the OIDC plugin and its lua-resty-openidc dependency
RUN luarocks install lua-resty-openidc
RUN luarocks install kong-oidc

RUN rm -r /var/lib/apt/lists /var/cache/apt/archives

EXPOSE 8000 8001 8443 8444

USER 1001
ENTRYPOINT [ "/opt/bitnami/scripts/kong/entrypoint.sh" ]
CMD [ "/opt/bitnami/scripts/kong/run.sh" ]

Helm values (based on here)

# helm repo add bitnami https://charts.bitnami.com/bitnami
# helm -n utility install kong bitnami/kong --set postgresql.auth.password=<KongPassword> --set postgresql.auth.postgresPassword=<DBPassword> -f values.yaml
# Note: The password and postgresPassword can be found under gopass (pilot/azure/kong/postgres)
image:
  registry: ghcr.io
  repository: <YourPrivateGithubRepo>/kong-with-oidc 
  tag: latest
  pullPolicy: IfNotPresent
  debug: false
database: postgresql
replicaCount: 1
ingress:
  enabled: true
  hostname: api.dev.pilot.indocresearch.org
  path: /
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: <VPN IP Address>
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/proxy-body-size: 20m
    nginx.ingress.kubernetes.io/proxy-buffer-size: 512k
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
    nginx.ingress.kubernetes.io/proxy-buffers-number: "8"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: 180s
    nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: 180s
    nginx.ingress.kubernetes.io/proxy-send-timeout: 180s
  tls: true
ingressController:
  enabled: false
postgresql:
  enabled: true
  image:
    registry: docker.io
    repository: bitnami/postgresql
    tag: 11.16.0-debian-11-r5
  auth:
    username: kong
    password: "<KongPassword>"
    database: kong
    postgresPassword: ""
    existingSecret: ""
    usePasswordFiles: false
  architecture: standalone
kong:
  extraEnvVars:
  - name: KONG_LOG_LEVEL
    value: "debug"
  - name: KONG_PLUGINS
    value: "bundled,oidc"
MalikEljaouadi commented 1 year ago

@chance2021 That worked for me! Thanks

luizcolacio commented 1 year ago

@MalikEljaouadi how did you execute the docker build? Mine is not finding the prebuildfs and rootfs folders :/

MalikEljaouadi commented 1 year ago

Yes, I executed the docker build! But first you need to pull the Bitnami Kong repo (https://github.com/bitnami/bitnami-docker-kong) and override its Dockerfile with the one above. If you are stuck with it, you can use the image I have provided on Docker Hub (https://hub.docker.com/repository/docker/malekeljaouadi/bitnami-kong-with-oidc); it is working fine!
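
For anyone hitting the same prebuildfs/rootfs error, a rough reconstruction of those steps (the directory layout is assumed from the bitnami-docker-kong repo and may differ between branches):

# clone the Bitnami repo so prebuildfs/ and rootfs/ are present in the build context
git clone https://github.com/bitnami/bitnami-docker-kong.git
cd bitnami-docker-kong/2/debian-11
# replace the stock Dockerfile with the customized one from this thread, then build
docker build -t kong-with-oidc:latest .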

Zaryab123 commented 1 year ago

Hi @MalikEljaouadi, I tried to create a helm build based on your provided image (malekeljaouadi/bitnami-kong-with-oidc) and also added the kong-oidc plugin via extraEnvVars according to your Values.yaml above. I am still not able to see the OIDC plugin in Konga.

Here are my custom Values.yaml configs:

image:
  repository: malekeljaouadi/bitnami-kong-with-oidc
  tag: "latest"
  pullPolicy: IfNotPresent

admin:
  enabled: true
  type: LoadBalancer
  annotations: {}
  labels: {}

  http:
    enabled: true
    servicePort: 8001
    containerPort: 8001
    parameters: []

  tls:
    enabled: false
    servicePort: 8444
    containerPort: 8444
    parameters:
      - http2

  ingress:
    enabled: true
    ingressClassName: kong
    hostname: admin.kongproxy.me
    annotations:
       external-dns.alpha.kubernetes.io/hostname: admin.kongproxy.me
    path: /

image:
 repository: kong/kong-gateway
 tag: "2.7"
 pullPolicy: IfNotPresent

env:
  database: "postgres"
  pg_user: "kong"
  pg_password: "kong"
  pg_database: "kong"
  ph_host: "kong-postgresql"
  admin_api_uri: "http://admin.kongproxy.me"
  admin_gui_url: "http://manager.kongproxy.me"

manager:
  enabled: true
  type: LoadBalancer
  annotations: {}
  labels: {}

  http:
    enabled: true
    servicePort: 8002
    containerPort: 8002
    parameters: []

  tls:
    enabled: false
    servicePort: 8445
    containerPort: 8445
    parameters:
      - http2

  ingress:
    enabled: true
    ingressClassName: kong
    hostname: manager.kongproxy.me
    annotations:
       external-dns.alpha.kubernetes.io/hostname: manager.kongproxy.me
    path: /

enterprise:
  enabled: true
  vitals:
    enabled: true

postgresql:
  enabled: true
  image:
    registry: docker.io
    repository: bitnami/postgresql
    tag: 11.16.0-debian-11-r5
  auth:
    username: kong
    password: kong
    database: kong
    postgresPassword: test123
    existingSecret: ""
    usePasswordFiles: false
  architecture: standalone
kong:
  extraEnvVars:
  - name: KONG_LOG_LEVEL
    value: "debug"
  - name: KONG_PLUGINS
    value: "bundled,oidc"

Can you please look into the config and point out anything I am doing wrong? I want to integrate Kong with Keycloak as the identity provider for a custom app. Hoping for a reply soon.

Zaryab123 commented 1 year ago

@chance2021, can you please look into it as well? By the way, I am not using bitnami/kong but the kong/kong helm chart image; would that make any difference?
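
One way to narrow this down is to ask the admin API what the gateway actually loaded (the hostname is taken from the values above; the jq paths assume Kong 2.x's root-endpoint layout):

# non-null output means the oidc plugin is installed in the image
curl -s http://admin.kongproxy.me/ | jq '.plugins.available_on_server.oidc'
# shows the effective plugins list, which should include "oidc"
curl -s http://admin.kongproxy.me/ | jq '.configuration.plugins'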