espresso-lab / oidc-forward-auth-middleware

MIT License

Change helm values oidcProviders from list to dict #6

Open Joker9944 opened 2 weeks ago

Joker9944 commented 2 weeks ago

Please consider switching the list usage from:

oidcProviders:
  - ingressHostname: example.com
    ...
  - ingressHostname: example2.com
    ...

to dict usage like this:

oidcProviders:
  example:
    ingressHostname: example.com
    ...
  example2:
    ingressHostname: example2.com
    ...

Now why would you implement a change like this? Wouldn't the key be redundant?

Helm does not support merging lists from multiple values files, but it can merge dicts from different values files.

Why is this desirable?

Two reasons: keeping per-app provider config in separate values files next to the apps, and keeping sensitive values in a separate (encrypted) values file. The second one could look like this:

values-plain

oidcProviders:
  example:
    ingressHostname: example.com
    issuerUrl: https://id.example.com/oauth/app1
    clientId: app1
    scopes: ["email", "profile"]
    audience: ["app1"]

values-encrypted

oidcProviders:
  example:
    clientSecret: mysecretpassword
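
Because Helm deep-merges maps from multiple values files (with later files taking precedence), the two files above would combine into a single provider entry, roughly:

oidcProviders:
  example:
    ingressHostname: example.com
    issuerUrl: https://id.example.com/oauth/app1
    clientId: app1
    scopes: ["email", "profile"]
    audience: ["app1"]
    clientSecret: mysecretpassword

With the current list format, a second values file that sets oidcProviders would instead replace the whole list.
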
henobi commented 2 weeks ago

Hello @Joker9944, thank you for your enhancement request. We are very interested in improving the code and I would like to understand your use case better beforehand.

Do I understand correctly that you are using two values-files: one for the general configuration and a separate one for sensitive information?

If the use case is mainly to deal with sensitive information in values files, I would recommend storing the sensitive oidc data in a k8s secret and connecting it to the middleware with the existingSecret field.

Would that solve your problem already?

If not, could you please share more details about the use case?

Best, hendrik

Joker9944 commented 2 weeks ago

Hi @henobi

You are completely right about the existingSecret field. This makes the second use case invalid.

So let me elaborate on the first one.

I have multiple apps that need ForwardAuth with OIDC. Each app has its own OIDC provider. I'm doing GitOps and have a repo layout like this:

apps
  app-1
    helm-release.yaml
  app-2
    helm-release.yaml
  app-3
    helm-release.yaml

infrastructure
  ingress-system
    traefik
      helm-release.yaml
    oidc-forward-auth-middleware
      helm-release.yaml

Currently I would need to define each OIDC provider in the helm-release.yaml of oidc-forward-auth-middleware. This means that the app config would be split into multiple locations which I would like to avoid.

If the oidcProviders value used a dict, I could split the values file, move the OIDC provider config into the app dir of the corresponding app, and reference those values in the helm-release.yaml of oidc-forward-auth-middleware (see the sketch of a per-app oidc-values.yaml after the layout below).

This could look like this:

apps
  app-1
    helm-release.yaml
    oidc-values.yaml
  app-2
    helm-release.yaml
    oidc-values.yaml
  app-3
    helm-release.yaml
    oidc-values.yaml

infrastructure
  ingress-system
    traefik
      helm-release.yaml
    oidc-forward-auth-middleware
      helm-release.yaml
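
A hypothetical oidc-values.yaml for app-1 could then contain just that app's provider entry (the hostname here is illustrative; the field names follow the example above):

oidcProviders:
  app-1:
    ingressHostname: app1.example.com
    issuerUrl: https://id.example.com/oauth/app1
    clientId: app1
    scopes: ["email", "profile"]
    audience: ["app1"]
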
henobi commented 2 weeks ago

Hi @Joker9944

OK, understood. I agree that it would make more sense to have the oidc config sitting next to the app.

Maybe in that case it would be even more convenient to also set the oidc config as an ingress annotation. What do you think about a solution like that?

ingress:
  enabled: true
  hosts:
    - app.example.com
  ingressClassName: traefik
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: kube-system-oidc-forward-auth-middleware@kubernetescrd
    oidc.ingress.kubernetes.io/issuer-url: https://id.example.com/oauth/app1
    oidc.ingress.kubernetes.io/client-id: example
    oidc.ingress.kubernetes.io/client-secret: mysecretpassword
    oidc.ingress.kubernetes.io/scopes: email,profile
    oidc.ingress.kubernetes.io/audience: app1
    oidc.ingress.kubernetes.io/existing-secret: app1

But I have no clue how much effort it would take to get that implemented.

Joker9944 commented 2 weeks ago

I wanted to propose this idea but thought that it might be a big ask. A feature like this would make it a strong alternative to oauth2-proxy in the Kubernetes space.

I have a fairly decent idea of how getting metadata from Kubernetes resources works, so I can draw up the process of how this can be done.

Sadly I have no experience in Rust.

If you would like an example of how this could work, here is a comparable implementation in Go: https://github.com/ori-edge/k8s_gateway/blob/master/kubernetes.go

henobi commented 2 weeks ago

Hi @Joker9944,

sounds great! Then let's further explore how to implement that.

If you could draw up the process that would be great!

I found a Rust library to fetch the Kubernetes metadata (https://kube.rs/), but I am not sure yet how exactly the process works.

Joker9944 commented 2 weeks ago

Getting cluster resource metadata involves handling access, authorization, and the API calls to the kube-api server.

Access

This one is quite easy. Access is handled through cluster-internal DNS resolution. You can take a peek at the resolv.conf of any container running in the cluster. Here is an example:

search media-apps.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5

The kube-api server is always reachable at kubernetes.default.svc, which you can check yourself: there is a kubernetes Service in the default namespace.

apiVersion: v1
kind: Service
metadata:
  labels:
    component: apiserver
    provider: kubernetes
  name: kubernetes
  namespace: default
spec:
  clusterIP: 10.96.0.1
  clusterIPs:
    - 10.96.0.1
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: 6443
  sessionAffinity: None
  type: ClusterIP
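
For reference, this Service can be inspected with kubectl get service kubernetes -n default -o yaml, which should return something along the lines of the manifest above.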

But I guess a good library would already handle this.

Authentication

Here it gets more tricky: there are multiple possible ways to authenticate with the kube-api server, but the one used for this kind of interaction is a ServiceAccount combined with role-based access control (RBAC).

For this you need a Role or ClusterRole, a RoleBinding or ClusterRoleBinding, and a ServiceAccount assigned to the pod from which you want to access the kube-api server.

Role or ClusterRole

The Role is the definition of what kind of resources you'd like access to. The difference between Role and ClusterRole is namespacing: a Role is namespaced and grants access to resources in its own namespace, while a ClusterRole is not namespaced and can grant access to resources across the whole cluster. So in this case a ClusterRole should be used.

In the Role one must define a list of apiGroups, the resources (kinds) within those apiGroups, and the verbs which govern what actions you'd like to take.

Here is an example which allows list and watch access to Ingress resource metadata and read access to Secrets for OIDC client secret extraction:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["list", "watch"]

Possible apiGroup, resource and verb combinations can be checked with kubectl api-resources -o wide.

Traefik Ingress CRDs

Traefik has its own set of ingress CRDs; since this is a Traefik-focused project, they may need to be included too. This could look like this:

- apiGroups: ["traefik.io"]
  resources: ["ingressroutes"]
  verbs: ["list", "watch"]
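
Depending on the Traefik version, the older traefik.containo.us API group may also still be in use; if so (this is an assumption, I have not verified it against this project), an additional rule along these lines would be needed:

- apiGroups: ["traefik.containo.us"]
  resources: ["ingressroutes"]
  verbs: ["list", "watch"]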

Concerns

Read access to all secrets in all namespaces can be a bit spicy, so I'm not quite sure myself if that is a good idea. I will read up a bit more on this.

ServiceAccount

To authenticate the workload which needs access to the kube-api server, a ServiceAccount is needed. The ServiceAccount is its own resource and is assigned to a workload. The ServiceAccount governs the credentials used, but this is all abstracted away by Kubernetes, so no real magic here. Here is an example:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: example
  namespace: example

Assigning

As easy as dropping the name of the ServiceAccount into the serviceAccountName field of the workload's pod spec, as in the sketch below.
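
A minimal sketch of a pod spec with the ServiceAccount assigned (the image name is just a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: example
spec:
  serviceAccountName: example
  containers:
    - name: middleware
      image: example/oidc-forward-auth-middleware # placeholder image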

Concerns

Since the ClusterRole has access to secrets, this is a place where some restrictions can be put in place. I will follow up on this.

RoleBinding or ClusterRoleBinding

Now that both the authorization and the authentication are defined, all that is left is to glue them together. Similar to the Role, the RoleBinding is namespaced and the ClusterRoleBinding is not. So again a ClusterRoleBinding should be used.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example
subjects: # This references the ServiceAccount defined earlier.
- kind: ServiceAccount
  name: example
  namespace: example
roleRef: # This references the ClusterRole defined earlier.
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: example

kube-api

Access is now handled, so all that is left is how to talk to the kube-api from the workload.

Authentication

The ServiceAccount credentials are automatically mounted in all containers belonging to the workload. They are stored at /var/run/secrets/kubernetes.io/serviceaccount/ and consist of the CA used to sign the API server's TLS cert (ca.crt) and a bearer token (token).

Api

For the actual API calls it's probably best to use a library, but here are some references to the calls that will have to be made.
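
For the built-in resources, the standard Kubernetes REST endpoints would be:

GET /apis/networking.k8s.io/v1/ingresses
GET /apis/networking.k8s.io/v1/ingresses?watch=true
GET /api/v1/namespaces/{namespace}/secrets/{name}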

For listing or watching the Traefik CRDs I actually have no idea, but it could look like this:

GET /apis/traefik.io/v1alpha1/ingressroutes
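
As a rough sketch of how this could look with the kube.rs library mentioned above (not the project's actual code; it assumes the kube, k8s-openapi and tokio crates and reuses the annotation key from the ingress example earlier in this thread):

// List all Ingress resources in the cluster and pick out the OIDC annotations.
// Minimal sketch; error handling and the annotation key are illustrative only.
use k8s_openapi::api::networking::v1::Ingress;
use kube::{api::ListParams, Api, Client};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Picks up the in-cluster config (ServiceAccount token + CA) automatically
    // when running inside a pod, or falls back to the local kubeconfig.
    let client = Client::try_default().await?;

    // Cluster-wide Ingress API; this is what the ClusterRole above permits.
    let ingresses: Api<Ingress> = Api::all(client);
    for ing in ingresses.list(&ListParams::default()).await?.items {
        let name = ing.metadata.name.clone().unwrap_or_default();
        let namespace = ing.metadata.namespace.clone().unwrap_or_default();
        let annotations = ing.metadata.annotations.clone().unwrap_or_default();

        // Only Ingresses carrying the OIDC annotations are interesting here.
        if let Some(issuer) = annotations.get("oidc.ingress.kubernetes.io/issuer-url") {
            println!("Ingress {namespace}/{name} uses issuer {issuer}");
        }
    }
    Ok(())
}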

Joker9944 commented 2 weeks ago

Hi @henobi

I took some time and wrote down everything I know. I hope this helps evaluate the effort needed. If there are any open questions feel free to ask.

henobi commented 2 weeks ago

Thank you very much @Joker9944 !!

henobi commented 2 weeks ago

I just pushed tag v3.0.0-alpha.2.

As soon as that works, adding the ingress data to the OIDC provider list is really straightforward. The watcher might still be a bit tricky. Let's see.

henobi commented 2 weeks ago

Hi @Joker9944,

I created a first working draft. The code still needs some improvements, but so far it works in my environment 👍

Example app ingress annotations:

ingress:
  annotations:
    oidc.ingress.kubernetes.io/forward-auth-enabled: "'true'"
    oidc.ingress.kubernetes.io/issuer-url: https://example.com/oauth2/openid/whoami
    oidc.ingress.kubernetes.io/audience: whoami
    oidc.ingress.kubernetes.io/scopes: profile,openid,email
    oidc.ingress.kubernetes.io/existing-secret: whoami-oidc-secret # Fields: client-id, client-secret

    # OR:
    # oidc.ingress.kubernetes.io/client-id: ...
    # oidc.ingress.kubernetes.io/client-secret: ...

To test it you also need the alpha version of the helm chart with the main version of the container:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: oidc-forward-auth-middleware
  namespace: argocd
spec:
  project: default
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
  source:
    chart: oidc-forward-auth-middleware
    repoURL: ghcr.io/espresso-lab/helm-charts
    targetRevision: "*-0"
    helm:
      values: |
        image:
          tag: main
          pullPolicy: Always

Does it also work for you?

Joker9944 commented 1 week ago

I will give it a try.

Joker9944 commented 1 week ago

Hi @henobi

I got it to show up!

2024-09-01T12:31:44.099954Z  INFO oidc_forward_auth_middleware::oidc_providers: Starting to initialize OIDC providers.
2024-09-01T12:31:44.100033Z  WARN oidc_forward_auth_middleware::oidc_providers: No OIDC providers initialized. Please check environment variables.
2024-09-01T12:31:44.100050Z  INFO oidc_forward_auth_middleware::oidc_providers: Running k8s ingres discovery
2024-09-01T12:31:44.131712Z  INFO oidc_forward_auth_middleware::oidc_providers: ---
2024-09-01T12:31:44.131775Z  INFO oidc_forward_auth_middleware::oidc_providers: K8s Ingress: prowlarr in namespace media-apps
2024-09-01T12:31:44.131791Z  INFO oidc_forward_auth_middleware::oidc_providers: ---

Just as an FYI, in 3.0.0-alpha.2 the lookup is made on the oidc.ingress.kubernetes.io/oidc-forward-auth-enable annotation instead of oidc.ingress.kubernetes.io/forward-auth-enabled, but that seems to be already adjusted in main.

henobi commented 1 week ago

Thank you for testing. Great to hear the POC is working.

I will optimize and stabilize the code base over the next few days and then create a new release version.

In case you have any further ideas, feel free to share them with us :)