cetic / helm-nifi

Helm Chart for Apache Nifi
Apache License 2.0

Unable to host with OAuth and if https enabled crashing loop back occurs #18

Closed mvesi-effilab closed 3 years ago

mvesi-effilab commented 4 years ago

Is your feature request related to a problem? Please describe. A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like A clear and concise description of what you want to happen.

Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered.

Additional context Add any other context or screenshots about the feature request here.

alexnuttinck commented 4 years ago

@mvesi-effilab please, describe your bug.

alexnuttinck commented 4 years ago

Your values.yaml would be useful too.

mvesi-effilab commented 4 years ago

Hi Alex,

My expectation is to enable HTTPS with OAuth authentication and run NiFi in Kubernetes cluster mode. Your Helm chart works fine in HTTP mode without OAuth, but what changes are needed to enable HTTPS and OAuth? As I am new to this, I couldn't fix it myself. Your help is greatly appreciated.

enyachoke commented 4 years ago

Hi, I think I am also facing a similar issue. I need to set up NiFi with OpenID Connect, since the cluster already uses dex for authentication. The problem is that NiFi does not allow any form of authentication without HTTPS, which means setting up a keystore and activating SSL. My thought was that setting this

properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  externalSecure: true

will allow me to get around the issue, since I am terminating my SSL on the ELB.

What I would like is a way of generating the keystore and truststore when deploying the chart. This could be done via an init container or something similar, storing them on a volume that is mounted by the pod and accessible for use in nifi.properties. These are just some thoughts; I am not sure how feasible it is, but I would appreciate any help or guidance. Currently I can't even get LDAP authentication working.

jody-devops commented 4 years ago

I am also having issues setting up LDAP authentication. It appears to be failing because the keystore/truststore configurations aren't set. I had assumed that the nifi-toolkit was being used to generate certificates and that the nifi.security.* keys in nifi.properties would be set appropriately. Here are the values I have configured. @alexnuttinck Any assistance would be greatly appreciated!

Running on Kubernetes v1.15.5-rancher1-2

---
  persistence: 
    enabled: "true"
  replicaCount: "2"
  zookeeper: 
    image: 
      repository: "<local artifactory>"
  properties:
    httpPort:
    httpsPort: 9443
    clusterSecure: true
    needClientAuth: true
  auth:
    ldap:
      enabled: true
      host: <fqdn>
      searchBase: <OU=Users,DC=search,DC=location>
  service:
    loadBalancer:
      enabled: true
      httpsPort: 443
      annotations:
        metallb.universe.tf/allow-shared-ip: shared
      loadBalancerIP: <IP>
enyachoke commented 4 years ago

@jody-devops @alexnuttinck I have just worked on the sample YAML below to better explain my point. It runs an initContainer that mounts a PVC at /opt/certs, downloads the nifi-toolkit, generates the keystore/truststore, and moves them to /opt/certs. The nifi container then mounts the same volume, so you can expect the files to be available in the pod. @jody-devops It's 12 here, so I will try to make these changes to the chart tomorrow and see if it works.

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nifi
  namespace: nifi
  labels:
    name: nifi
    app: nifi
spec:
  serviceName: nifi
  volumeClaimTemplates:
      - metadata:
          name: certs
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Mi
  replicas: 3
  selector:
    matchLabels:
      app: nifi
  template:
    metadata:
      labels:
        app: nifi
    spec:
      initContainers:
      - name: install
        image: ubuntu
        command: ["/bin/sh","-c"]
        args: ["apt update; apt install -y wget tar default-jre-headless; wget https://www-eu.apache.org/dist/nifi/1.10.0/nifi-toolkit-1.10.0-bin.tar.gz; tar xvf nifi-toolkit-1.10.0-bin.tar.gz; cd nifi-toolkit-1.10.0; NIFI_FQDN_HOSTNAME=$(hostname); ./bin/tls-toolkit.sh standalone --hostnames $NIFI_FQDN_HOSTNAME --isOverwrite --trustStorePassword truststore --keyStorePassword nifi --keyStoreType jks; mv $NIFI_FQDN_HOSTNAME/* /opt/certs"]

        volumeMounts:
        - name: certs
          mountPath: /opt/certs
          subPath: certs
      # affinity:
      #   podAntiAffinity:
      #     requiredDuringSchedulingIgnoredDuringExecution:
      #       - topologyKey: "kubernetes.io/hostname"
      #         labelSelector:
      #           matchLabels:
      #             app: nifi
      containers:
      - name: nifi
        image: apache/nifi:latest
        volumeMounts:
        - name: certs
          mountPath: /opt/certs
          subPath: certs
        ports:
        - containerPort: 8080
          name: nifi
        - containerPort: 8082
          name: cluster
enyachoke commented 4 years ago

My new statefulset.yaml

---
apiVersion: {{ template "apache-nifi.statefulset.apiVersion" . }}
kind: StatefulSet
metadata:
  name: {{ template "apache-nifi.fullname" . }}
  labels:
    app: {{ include "apache-nifi.name" . | quote }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: {{ .Release.Name | quote }}
    heritage: {{ .Release.Service | quote }}
spec:
  podManagementPolicy: {{ .Values.sts.podManagementPolicy }}
  serviceName: {{ template "apache-nifi.fullname" . }}-headless
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "apache-nifi.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      annotations:
        security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
      labels:
        app: {{ include "apache-nifi.name" . | quote }}
        chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
        release: {{ .Release.Name | quote }}
        heritage: {{ .Release.Service | quote }}
    spec:
      {{- if eq .Values.sts.AntiAffinity "hard"}}
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - {{ include "apache-nifi.name" . | quote }}
              topologyKey: "kubernetes.io/hostname"
      {{- else if eq .Values.sts.AntiAffinity "soft"}}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
             - weight: 1
               podAffinityTerm:
                 labelSelector:
                    matchExpressions:
                      - key: "component"
                        operator: In
                        values:
                         - {{ include "apache-nifi.name" . | quote }}
                 topologyKey: "kubernetes.io/hostname"
      {{- end}}
{{- if .Values.tolerations }}
      tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.securityContext }}
      securityContext:
{{ toYaml .Values.securityContext | indent 8 }}
{{- end }}
      initContainers:
      - name: install
        image: ubuntu
        command: ["/bin/sh","-c"]
        args: ["apt update; apt install -y wget tar default-jre-headless; wget https://www-eu.apache.org/dist/nifi/1.10.0/nifi-toolkit-1.10.0-bin.tar.gz; tar xvf nifi-toolkit-1.10.0-bin.tar.gz; cd nifi-toolkit-1.10.0; NIFI_FQDN_HOSTNAME=$(hostname); ./bin/tls-toolkit.sh standalone --hostnames $NIFI_FQDN_HOSTNAME --isOverwrite --trustStorePassword truststore --keyStorePassword nifi --keyStoreType jks; mv $NIFI_FQDN_HOSTNAME/* /opt/certs"]
        volumeMounts:
        - name: certs
          mountPath: /opt/certs
          subPath: certs
      - name: zookeeper
        image: busybox
        command:
        - sh
        - -c
        - |
          echo trying to contact {{ template "zookeeper.server" . }} {{ .Values.zookeeper.port }}
          until nc -vzw 1 {{ template "zookeeper.server" . }} {{ .Values.zookeeper.port }}; do
            echo "waiting for zookeeper..."
            sleep 2
          done
      {{- if .Values.image.pullSecret }}
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
      {{- end }}
      containers:
      - name: server
        imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        command:
        - bash
        - -ce
        - |
          prop_replace () {
            target_file=${NIFI_HOME}/conf/nifi.properties
            echo 'replacing target file ' ${target_file}
            sed -i -e "s|^$1=.*$|$1=$2|"  ${target_file}
          }

          FQDN=$(hostname -f)

          cat "${NIFI_HOME}/conf/nifi.temp" > "${NIFI_HOME}/conf/nifi.properties"

          if [[ $(grep $(hostname) conf/authorizers.temp) ]]; then
            cat "${NIFI_HOME}/conf/authorizers.temp" > "${NIFI_HOME}/conf/authorizers.xml"
          else
            cat "${NIFI_HOME}/conf/authorizers.empty" > "${NIFI_HOME}/conf/authorizers.xml"
          fi

          prop_replace nifi.remote.input.host ${FQDN}
          prop_replace nifi.cluster.node.address ${FQDN}
          prop_replace nifi.web.http.host ${FQDN}
          prop_replace nifi.zookeeper.connect.string ${NIFI_ZOOKEEPER_CONNECT_STRING}

          exec bin/nifi.sh run
        resources:
{{ toYaml .Values.resources | indent 10 }}
        ports:
{{- if .Values.properties.httpsPort }}
        - containerPort: {{ .Values.properties.httpsPort }}
{{- if .Values.sts.hostPort }}
          hostPort: {{ .Values.sts.hostPort }}
{{- end }}
          name: https
          protocol: TCP
{{- end }}
{{- if .Values.properties.httpPort }}
        - containerPort: {{ .Values.properties.httpPort }}
          name: http
          protocol: TCP
{{- end }}
        - containerPort: {{ .Values.properties.clusterPort }}
          name: cluster
          protocol: TCP
        env:
        - name: NIFI_ZOOKEEPER_CONNECT_STRING
          value: {{ template "zookeeper.url" . }}
        lifecycle:
          preStop:
            exec:
              command:
              - bash
              - -c
              - |
                $NIFI_HOME/bin/nifi.sh stop
{{- if .Values.postStart }}
          postStart:
            exec:
              command: ["/bin/sh", "-c", {{ .Values.postStart | quote }}]
{{- end }}
        readinessProbe:
          initialDelaySeconds: 60
          periodSeconds: 20
          exec:
            command:
            - bash
            - -c
            - |
{{- if .Values.properties.httpsPort }}
              curl -kv \
                --cert ${NIFI_BASE_DIR}/data/cert/admin/crt.pem --cert-type PEM \
                --key ${NIFI_BASE_DIR}/data/cert/admin/key.pem --key-type PEM \
                https://$(hostname -f):8443/nifi-api/controller/cluster > $NIFI_BASE_DIR/data/cluster.state
{{- else }}
              curl -kv \
                http://$(hostname -f):{{ .Values.properties.httpPort }}/nifi-api/controller/cluster > $NIFI_BASE_DIR/data/cluster.state
{{- end }}
              STATUS=$(jq -r ".cluster.nodes[] | select((.address==\"$(hostname -f)\") or .address==\"localhost\") | .status" $NIFI_BASE_DIR/data/cluster.state)

              if [[ ! $STATUS = "CONNECTED" ]]; then
                echo "Node not found with CONNECTED state. Full cluster state:"
                jq . $NIFI_BASE_DIR/data/cluster.state
                exit 1
              fi
        livenessProbe:
          initialDelaySeconds: 90
          periodSeconds: 60
          tcpSocket:
{{- if .Values.properties.httpsPort }}
            port: {{ .Values.properties.httpsPort }}
{{- else }}
            port: {{ .Values.properties.httpPort }}
{{- end }}
        volumeMounts:
          - name: certs
            mountPath: /opt/certs
            subPath: certs
          - name: "data"
            mountPath: /opt/nifi/data
          - name: "flowfile-repository"
            mountPath: /opt/nifi/flowfile_repository
          - name: "content-repository"
            mountPath: /opt/nifi/content_repository
          - name: "provenance-repository"
            mountPath: /opt/nifi/provenance_repository
          - name: "logs"
            mountPath: /opt/nifi/nifi-current/logs
          - name: "bootstrap-conf"
            mountPath: /opt/nifi/nifi-current/conf/bootstrap.conf
            subPath: "bootstrap.conf"
          - name: "nifi-properties"
            mountPath: /opt/nifi/nifi-current/conf/nifi.temp
            subPath: "nifi.temp"
          - name: "authorizers-temp"
            mountPath: /opt/nifi/nifi-current/conf/authorizers.temp
            subPath: "authorizers.temp"
          - name: "authorizers-empty"
            mountPath: /opt/nifi/nifi-current/conf/authorizers.empty
            subPath: "authorizers.empty"
          - name: "bootstrap-notification-services-xml"
            mountPath: /opt/nifi/nifi-current/conf/bootstrap-notification-services.xml
            subPath: "bootstrap-notification-services.xml"
          - name: "logback-xml"
            mountPath: /opt/nifi/nifi-current/conf/logback.xml
            subPath: "logback.xml"
          - name: "login-identity-providers-xml"
            mountPath: /opt/nifi/nifi-current/conf/login-identity-providers.xml
            subPath: "login-identity-providers.xml"
          - name: "state-management-xml"
            mountPath: /opt/nifi/nifi-current/conf/state-management.xml
            subPath: "state-management.xml"
          - name: "zookeeper-properties"
            mountPath: /opt/nifi/nifi-current/conf/zookeeper.properties
            subPath: "zookeeper.properties"
          {{- range $secret := .Values.secrets }}
            {{- if $secret.mountPath }}
              {{- if $secret.keys }}
                {{- range $key := $secret.keys }}
          - name: {{ include "apache-nifi.fullname" $ }}-{{ $secret.name }}
            mountPath: {{ $secret.mountPath }}/{{ $key }}
            subPath: {{ $key }}
            readOnly: true
                {{- end }}
              {{- else }}
          - name: {{ include "apache-nifi.fullname" $ }}-{{ $secret.name }}
            mountPath: {{ $secret.mountPath }}
            readOnly: true
              {{- end }}
            {{- end }}
          {{- end }}
      - name: app-log
        image: {{ .Values.sidecar.image }}
        args: [tail, -n+1, -F, /var/log/nifi-app.log]
        resources:
{{ toYaml .Values.logresources | indent 10 }}
        volumeMounts:
        - name: logs
          mountPath: /var/log
      - name: bootstrap-log
        image: {{ .Values.sidecar.image }}
        args: [tail, -n+1, -F, /var/log/nifi-bootstrap.log]
        resources:
{{ toYaml .Values.logresources | indent 10 }}
        volumeMounts:
        - name: logs
          mountPath: /var/log
      - name: user-log
        image: {{ .Values.sidecar.image }}
        args: [tail, -n+1, -F, /var/log/nifi-user.log]
        resources:
{{ toYaml .Values.logresources | indent 10 }}
        volumeMounts:
        - name: logs
          mountPath: /var/log
      volumes:
      - name: "bootstrap-conf"
        configMap:
          name: {{ template "apache-nifi.fullname" . }}-config
          items:
            - key: "bootstrap.conf"
              path: "bootstrap.conf"
      - name: "nifi-properties"
        configMap:
          name: {{ template "apache-nifi.fullname" . }}-config
          items:
            - key: "nifi.properties"
              path: "nifi.temp"
      - name: "authorizers-temp"
        configMap:
          name: {{ template "apache-nifi.fullname" . }}-config
          items:
            - key: "authorizers.xml"
              path: "authorizers.temp"
      - name: "authorizers-empty"
        configMap:
          name: {{ template "apache-nifi.fullname" . }}-config
          items:
            - key: "authorizers-empty.xml"
              path: "authorizers.empty"
      - name: "bootstrap-notification-services-xml"
        configMap:
          name: {{ template "apache-nifi.fullname" . }}-config
          items:
            - key: "bootstrap-notification-services.xml"
              path: "bootstrap-notification-services.xml"
      - name: "logback-xml"
        configMap:
          name: {{ template "apache-nifi.fullname" . }}-config
          items:
            - key: "logback.xml"
              path: "logback.xml"
      - name: "login-identity-providers-xml"
        configMap:
          name: {{ template "apache-nifi.fullname" . }}-config
          items:
            - key: "login-identity-providers.xml"
              path: "login-identity-providers.xml"
      - name: "state-management-xml"
        configMap:
          name: {{ template "apache-nifi.fullname" . }}-config
          items:
            - key: "state-management.xml"
              path: "state-management.xml"
      - name: "zookeeper-properties"
        configMap:
          name: {{ template "apache-nifi.fullname" . }}-config
          items:
            - key: "zookeeper.properties"
              path: "zookeeper.properties"
      {{- range .Values.secrets }}
      - name: {{ include "apache-nifi.fullname" $ }}-{{ .name }}
        secret:
          secretName: {{ .name }}
      {{- end }}
{{- if not .Values.persistence.enabled }}
      - name: data
        emptyDir: {}
      - name: flowfile-repository
        emptyDir: {}
      - name: content-repository
        emptyDir: {}
      - name: provenance-repository
        emptyDir: {}
      - name: logs
        emptyDir: {}
{{- end }}
{{- if .Values.persistence.enabled }}
  volumeClaimTemplates:
    - metadata:
        name: certs
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
    - metadata:
        name: data
      spec:
        accessModes:
        {{- range .Values.persistence.accessModes }}
          - {{ . | quote }}
        {{- end }}
        storageClassName: {{ .Values.persistence.storageClass | quote }}
        resources:
          requests:
            storage: {{ .Values.persistence.dataStorage.size }}
    - metadata:
        name: flowfile-repository
      spec:
        accessModes:
        {{- range .Values.persistence.accessModes }}
          - {{ . | quote }}
        {{- end }}
        storageClassName: {{ .Values.persistence.storageClass | quote }}
        resources:
          requests:
            storage: {{ .Values.persistence.flowfileRepoStorage.size }}
    - metadata:
        name: content-repository
      spec:
        accessModes:
        {{- range .Values.persistence.accessModes }}
          - {{ . | quote }}
        {{- end }}
        storageClassName: {{ .Values.persistence.storageClass | quote }}
        resources:
          requests:
            storage: {{ .Values.persistence.contentRepoStorage.size }}
    - metadata:
        name: provenance-repository
      spec:
        accessModes:
        {{- range .Values.persistence.accessModes }}
          - {{ . | quote }}
        {{- end }}
        storageClassName: {{ .Values.persistence.storageClass | quote }}
        resources:
          requests:
            storage: {{ .Values.persistence.provenanceRepoStorage.size }}
    - metadata:
        name: logs
      spec:
        accessModes:
        {{- range .Values.persistence.accessModes }}
          - {{ . | quote }}
        {{- end }}
        storageClassName: {{ .Values.persistence.storageClass | quote }}
        resources:
          requests:
            storage: {{ .Values.persistence.logStorage.size }}
{{- end }}
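The startup command in the template above rewrites nifi.properties keys with a small sed-based prop_replace helper. A minimal standalone sketch of that pattern (the file contents and hostname below are illustrative, not the chart's defaults):

```shell
# Sketch of the prop_replace pattern from the startup command above.
# It rewrites an entire key=value line in a properties file in place.
target_file=$(mktemp)
printf 'nifi.web.http.host=localhost\nnifi.web.http.port=8080\n' > "$target_file"

prop_replace () {
  # $1 = property key, $2 = new value
  sed -i -e "s|^$1=.*$|$1=$2|" "$target_file"
}

# Illustrative pod FQDN; in the chart this comes from $(hostname -f)
prop_replace nifi.web.http.host nifi-0.nifi-headless.default.svc.cluster.local
grep '^nifi.web.http.host=' "$target_file"
```

Because the whole line is matched (`^$1=.*$`), the replacement is idempotent across pod restarts: rerunning it simply rewrites the same line.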
jody-devops commented 4 years ago

@enyachoke Thanks for posting your solution. I'm still new to Kubernetes/Docker but will try to grok what you've posted. One thought: it may be easier for the install image to also be nifi, since that would ensure the correct version of nifi-toolkit is available without relying on apt installs and wget, which assume the internet is accessible.

enyachoke commented 4 years ago

@jody-devops I'm not sure it contains the toolkit, but I will check. Also, this is not tested, just an example. I'm trying to work on a PR.

octopyth commented 4 years ago

@enyachoke thanks for working on this; I have been struggling with the implementation of NiFi security. I tried to limit the IPs that have access using loadBalancerSourceRanges in values.yaml, but for some reason it didn't work (about to open a separate bug for that). I am fully committed to helping and updating the readme.md once I get the security sorted, but so far it looks like a dead end. You are the ray of light right now. 👍

jdesroch commented 4 years ago

@jody-devops not sure it contains the toolkit but will check. Also this is not tested just an example. Am trying to work on a PR

@enyachoke I think you're on the right track. nifi-toolkit is in the apache/nifi image at /opt/nifi/nifi-toolkit-current.
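Putting those two observations together, a hedged sketch of the suggestion: an init container that reuses the apache/nifi image, so the bundled toolkit at /opt/nifi/nifi-toolkit-current always matches the server version and no apt/wget is needed. The passwords, container name, and /opt/certs mount below are illustrative, not the chart's defaults:

```yaml
# Sketch only: generate keystore/truststore with the toolkit bundled in the
# apache/nifi image itself, avoiding any network access at init time.
initContainers:
- name: gen-certs            # illustrative name
  image: apache/nifi:1.10.0
  command: ["/bin/sh", "-c"]
  args:
    - |
      cd /opt/nifi/nifi-toolkit-current
      ./bin/tls-toolkit.sh standalone \
        --hostnames "$(hostname)" --isOverwrite \
        --keyStorePassword nifi --trustStorePassword truststore \
        --keyStoreType jks
      mv "$(hostname)"/* /opt/certs
  volumeMounts:
  - name: certs              # illustrative; must match a pod volume/PVC
    mountPath: /opt/certs
```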

enyachoke commented 4 years ago

@jdesroch I think I figured it out and switched to the nifi-toolkit server/client setup, where I launch a CA pod from which the cluster pods request certs, enabling SSL and LDAP authentication.

jdesroch commented 4 years ago

@enyachoke thank you for working on this. How's it going? Do you need any assistance?

sc7565 commented 4 years ago

Hi @enyachoke, can you share the PR? It would be great, as we are also facing an issue with the SSL setup with LDAP.

enyachoke commented 4 years ago

Sorry guys, I have not really had time to prepare a proper PR. If you need to see the changes, I made a PR: https://github.com/cetic/helm-nifi/pull/36. I am swamped; I have more or less proved to myself that this can actually work, but I am not actively looking to make it mergeable. Take a look at it and I hope it helps. The reason I can't consider this work a merge candidate is that I have not figured out a way to generate certs for new nodes when someone scales the deployment.

enyachoke commented 4 years ago

@jdesroch You can comment on the PR about what I should change, and I will try to make the changes as soon as I am able to. Thanks.

sc7565 commented 4 years ago

Thank you very much @enyachoke. With the above PR, I got this error.

Can you give pointers or a readme on how to turn on HTTPS? So far, I have tried these changes to my values.yaml along with the above PR:

replicaCount: 1
....
securityContext:
  runAsUser: 0
....
properties:
  externalSecure: false
  isNode: true
  httpPort: null
  metricsPort: 9192
  httpsPort: 8443
  clusterPort: 6007
  clusterSecure: true
  needClientAuth: false
  provenanceStorage: "8 GB"
  siteToSite:
    secure: false
    port: 10000
  authorizer: managed-authorizer

Updated nifi.properties:

nifi.remote.input.http.enabled=false
nifi.security.keystore=/opt/certs/keystore.jks
nifi.security.keystoreType=jks
nifi.security.keystorePasswd=nifi
nifi.security.keyPasswd=nifi
nifi.security.truststore=/opt/certs/truststore.jks
nifi.security.truststoreType=jks
nifi.security.truststorePasswd=truststore

3/4 pods are online.

Logs from app-log:

2019-12-18 15:35:40,602 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.h.AbstractHeartbeatMonitor Finished processing 1 heartbeats in 17978 nanos
2019-12-18 15:35:41,789 INFO [Process Cluster Protocol Request-7] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 237be758-0db0-4c22-850a-b54c5a3e653b (type=HEARTBEAT, length=2645 bytes) from nifi-qad-0.nifi-qad-headless.logging.svc.cluster.local:8443 in 148 millis
2019-12-18 15:35:41,790 INFO [Clustering Tasks Thread-2] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2019-12-18 15:35:41,638 and sent to nifi-qad-0.nifi-qad-headless.logging.svc.cluster.local:6007 at 2019-12-18 15:35:41,790; send took 152 millis

Something seems to be preventing the pods from reaching 4/4.
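For context, a pod shows 4/4 only once its readiness probe passes, and the probe in the statefulset above queries /nifi-api/controller/cluster and extracts this node's status with jq, failing unless it is CONNECTED. A standalone sketch of that check against a made-up sample payload (hostnames and JSON are illustrative):

```shell
# Sketch of the readiness check: jq pulls this node's status out of the
# /nifi-api/controller/cluster response; anything but CONNECTED fails the probe.
# The payload and hostnames below are made-up sample data.
STATE='{"cluster":{"nodes":[
  {"address":"nifi-0.nifi-headless.local","status":"CONNECTED"},
  {"address":"nifi-1.nifi-headless.local","status":"CONNECTING"}]}}'
HOST=nifi-1.nifi-headless.local
STATUS=$(echo "$STATE" | jq -r ".cluster.nodes[] | select(.address==\"$HOST\") | .status")
echo "status=$STATUS"
if [ "$STATUS" != "CONNECTED" ]; then
  echo "node not CONNECTED yet"   # a pod in this state stays at 3/4
fi
```

So a pod stuck at 3/4 usually means its node is visible to the cluster but never transitions to CONNECTED; the cluster.state file the probe writes inside the pod shows the full picture.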

enyachoke commented 4 years ago

@sc7565 If it helps, here is my values override. I already have LDAP set up. I believe securityContext: runAsUser: 0 is bad, but that's just me being lazy, honestly; hence why I noted I need to put a little more time into this. I have also only tested this with a replica count of 3; I am not sure it will work with 4 or 2. This is incomplete work, and I think I will close the PR and reopen it later.

replicaCount: 3
image:
  repository: apache/nifi
  tag: "1.9.2"
service:
  headless:
    type: ClusterIP
  loadBalancer:
    enabled: true
    type: ClusterIP

ingress:
  enabled: true
  annotations:
     kubernetes.io/ingress.class: nginx
     nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - nifi.example.com
auth:
  ldap:
    enabled: true
    host: ldap://auth.lg-apps.com
    searchBase: cn=users,ou=Technology,dc=auth,dc=lg-apps,dc=com
    searchFilter: uid={0}
persistence:
  enabled: true
  dataStorage: 
    size: 5Gi
properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  externalSecure: true
Subv commented 3 years ago

As of #76 and #93, OIDC and TLS work out of the box.

alexnuttinck commented 3 years ago

We can close this issue.