gissilabs / charts

Apache License 2.0

feat: Add options to enable TLS using sidecars #39

Closed JakWai01 closed 5 months ago

JakWai01 commented 1 year ago

This PR allows customization of the rocketPort. This was necessary since we wanted to use a reverse proxy for TLS in our Kubernetes cluster instead of handling TLS in Rocket.

Additionally, this PR alters the liveness probe to use a container-internal connection, and introduces a new option for specifying additional volumes. This is necessary in order for sidecars to work properly.

Co-authored-by: @p-fruck

sgissi commented 1 year ago

Hi @JakWai01, thank you for the PR, and sorry for the delay in reviewing it. There are 3 changes proposed: 1) Choice of port for Rocket, 2) Probes using a fixed "/" path directly against Rocket, 3) Allowing extra volumes to be used by sidecars.

  1. I understand the change is to free up 8080 so it can be taken by a sidecar that will handle TLS. Keep in mind that Rocket, by default, does not handle TLS, and this chart doesn't have any configuration options to enable it. The recommendation is to run a reverse proxy for TLS (https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS), typically using an Ingress Controller. Clients should not connect directly to the Pod, as its IP varies; instead they should use the Vaultwarden Service. For your use case to work (a sidecar handling TLS instead of an ingress controller), I recommend proposing a change to customize the Vaultwarden Service to use a custom target port (https://github.com/gissilabs/charts/blob/master/vaultwarden/templates/service.yaml#L18). That way, the sidecar can add a new name under its "ports" to be referenced by the custom target port. Clients will then reach the sidecar's custom port via the Vaultwarden Service instead of Rocket's 8080.

  2. The chart used to have "/" until I learned that Vaultwarden can run under a sub-path (https://github.com/gissilabs/charts/issues/27). Reverting that change will break those installations. If the vaultwarden.domain option doesn't contain a sub-path (i.e. it is not of the form https://site.abc/subpath/), domainSubpath will already be "/". Can you clarify the use case that requires a hard-coded "/"?

  3. That is a great idea, I will likely expand that to other charts too.

Let me know if I misunderstood your use case!
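To make point 1 concrete, the Service-side change could look roughly like this. This is only a sketch: `service.targetPort` is a hypothetical values key, not an existing option of this chart, and the actual template may differ.

```yaml
# templates/service.yaml (sketch, not the chart's actual template)
ports:
  - port: {{ .Values.service.httpPort }}
    # Fall back to the current hardcoded port name "http" when unset,
    # so existing installations keep routing to Rocket.
    targetPort: {{ .Values.service.targetPort | default "http" }}
    protocol: TCP
    name: http
```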

sgissi commented 1 year ago

@JakWai01 Are you still interested in updating this patch?

JakWai01 commented 1 year ago

I do. However, I am unsure how to proceed. I feel like not all of these changes are desired (which I can understand). Let's discuss which of the introduced changes we could actually merge and which should be left out.

sgissi commented 1 year ago

I want to support your use case for sure :)

For the Rocket port change, instead of changing the actual container port, my recommendation is to make the service target port name customizable via the chart. The default must be the current value ("http"). With that, in your sidecar configuration, you can add a new port to the Pod and point the service to it. Would that work for your use case? If not, can you detail why?
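On the values side, the proposed option might be used like this (again a sketch: `service.targetPort` is a hypothetical key, and "https" would be a port name declared by the sidecar):

```yaml
service:
  # Defaults to "http" (Rocket's port). Setting it to a named sidecar
  # port routes Service traffic to the TLS sidecar instead of Rocket.
  targetPort: https
```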

For the change of the probe path, I don't understand the use case. It will be "/" unless you set vaultwarden.domain to a URL with a subdir (e.g. http://mydomain.com/vaultwarden). If you probe "/" when the URL is set with a subdir, the probe won't work because Vaultwarden needs the subdir in the request. What was the issue you had? Can you share the value you used for vaultwarden.domain?
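The probe-path derivation described above can be illustrated with a short sketch. The helper name is hypothetical; this is not the chart's actual template logic, just the behavior it describes: the path component of vaultwarden.domain, defaulting to "/".

```python
from urllib.parse import urlparse

def probe_path(domain: str) -> str:
    """Derive the probe path from vaultwarden.domain: the URL's
    path component, or "/" when no sub-path is configured."""
    path = urlparse(domain).path
    return path if path else "/"

print(probe_path("https://bitwarden.example.com"))   # "/"
print(probe_path("http://mydomain.com/vaultwarden")) # "/vaultwarden"
```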

The extra volumes part, I have no issues at all.

p-fruck commented 1 year ago

The issue with the HTTP port is that it is also referenced in the ingress, so the ingress always points to the port Rocket uses. This is not what we want. We are using the sidecar because we want cluster-internal TLS up to our custom ingress resource, which then terminates and re-encrypts TLS. The rocketPort can still be used to change the port that this chart's ingress uses, but the internal container port is now hardcoded to 8080, so that we can point this chart's ingress to 8443, where our sidecar listens for TLS.

The subdomain was removed because, in our testing, the liveness probe seemed to use the external domain to check whether the service was still running, which failed since our external domain wasn't resolvable from inside the cluster. This should not be the case: a container liveness probe should run only locally inside the container, without requiring any outside connection. Please let us know if this was due to a misconfiguration on our side.
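For context, the TLS-terminating sidecar configuration for the setup described above might look roughly like the following. This is a sketch under assumptions: the certificate file names and the Rocket listen port (8081, matching the rocketPort value used in this thread) are illustrative, not confirmed by the chart.

```nginx
# Sketch: Nginx sidecar terminating TLS in front of Rocket
server {
    listen 8443 ssl;
    ssl_certificate     /etc/nginx/certificates/tls.crt;
    ssl_certificate_key /etc/nginx/certificates/tls.key;

    location / {
        # Rocket runs in the same Pod, reachable via localhost
        proxy_pass http://127.0.0.1:8081;
    }
}
```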

sgissi commented 1 year ago

The issue with the HTTP port is that it is also referenced in the ingress, so that the ingress always points to the port rocket uses.

The flow is Users -> Ingress or IngressRoute -> Service -> Pod. On each part, the TCP port is configurable. Usually Ingress listens on port 443, Service on port 80 (https://github.com/gissilabs/charts/blob/master/vaultwarden/values.yaml#L159) and Pod is 8080 as you noticed. By changing the Service as I explained in the previous comment, the data from Ingress will reach the pod on whatever port you define via the Service.
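The hop-by-hop wiring above, expressed as an annotated Service fragment matching the chart defaults linked in the comment (illustration only, not a complete manifest):

```yaml
# Users -> Ingress (443) -> Service -> Pod
ports:
  - port: 80            # what the Ingress backend targets on the Service
    targetPort: http    # named Pod port; resolves to Rocket's 8080
    name: http
# Pointing targetPort at a sidecar's named port redirects the whole
# chain to the sidecar without changing Rocket's container port.
```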

The subdomain was removed because in our testing it seemed like the liveness probe used the external domain to check if the service is still running

Interesting, it shouldn't be the case; the chart should strip the domain and keep the path only (for example, https://my.site.local/sub/path should end up with "/sub/path"). Can you share your chart values (obfuscated is fine)?

JakWai01 commented 1 year ago

This is our current values.yaml (inside a ConfigMap):

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: bitwarden-configmap
  namespace: bitwarden
data:
  values.yaml: |
    database:
      type: postgresql
      existingSecret: bitwarden-secret

    vaultwarden:
      domain: https://CENSORED
      allowSignups: true
      signupDomains:
        - CENSORED
      verifySignup: true
      allowInvitation: true
      defaultInviteName: "CENSORED"
      showPasswordHint: true
      enableWebsockets: true
      enableWebVault: true
      orgCreationUsers: all
      rocketPort: 8081
      extraEnv:
        ROCKET_ADDRESS: "::"

      admin:
        enabled: true
        existingSecret: bitwarden-secret

      smtp:
        enabled: true
        host: "CENSORED"
        from: "CENSORED"
        fromName: "CENSORED"
        ssl: true

      yubico:
        enabled: false

      log:
        file: ""
        level: error

    service:
      type: ClusterIP
      httpPort: 80
      websocketPort: 3012
      externalTrafficPolicy: Cluster

    ingress:
      enabled: true
      host: CENSORED
      annotations:
        cert-manager.io/cluster-issuer: "letsencrypt"
        nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
        kubernetes.io/ingress.class: "nginx"
        ingress.kubernetes.io/ssl-redirect: "False"
        nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
        nginx.ingress.kubernetes.io/proxy-ssl-name: "bitwarden"
        nginx.ingress.kubernetes.io/proxy-ssl-protocols: "TLSv1.3"
        nginx.ingress.kubernetes.io/proxy-ssl-secret: "bitwarden/bitwarden-ssl-cert"
      tls:
       - secretName: bitwardenrs-tls
         hosts:
           - CENSORED

    imagePullSecrets: []
    nameOverride: ""
    fullnameOverride: bitwarden

    replicaCount: 1
    image:
      tag: "1.25.2"
    serviceAccount:
      create: false
      annotations: {}
      name: ""

    podSecurityContext:
      fsGroup: 65534

    securityContext:
      runAsUser: 65534
      runAsGroup: 65534

    resources:
      limits:
        cpu: 1
        memory: 256Mi
      requests:
        cpu: 100m
        memory: 128Mi

    extraVolumes:
      - name: nginx-config-volume
        configMap:
          name: bitwarden-nginx-configmap
      - name: ssl-cert-volume
        secret:
          secretName: bitwarden-ssl-cert

    sidecars:
      - name: bitwarden-nginx
        image: nginxinc/nginx-unprivileged:1.23-alpine
        imagePullPolicy: Always
        volumeMounts:
          - name: nginx-config-volume
            mountPath: "/etc/nginx/conf.d/"
            readOnly: true
          - name: ssl-cert-volume
            mountPath: "/etc/nginx/certificates/"
            readOnly: true
sgissi commented 1 year ago

Thanks for sharing your configuration. About livenessProbe/readinessProbe: they are executed by kubelet connecting directly to the local container port. When vaultwarden.domain is set to a URL without sub-paths (https://bitwarden.example.com and not https://example.com/bitwarden), the probe path will be set to "/" and talk directly to Rocket on port 8080. If you are getting errors, feel free to open an issue with as many details/logs/configs as possible. My test setup doesn't have a sub-path and I can see the right path on the probes.

If you want to run a quick test, change your sidecar to include a custom port (I'm assuming your configuration makes Nginx run on port 8443):

    sidecars:
      - name: bitwarden-nginx
        image: nginxinc/nginx-unprivileged:1.23-alpine
        imagePullPolicy: Always
        volumeMounts:
          - name: nginx-config-volume
            mountPath: "/etc/nginx/conf.d/"
            readOnly: true
          - name: ssl-cert-volume
            mountPath: "/etc/nginx/certificates/"
            readOnly: true
        ports:
          - name: https
            containerPort: 8443

This will create a new port named "https" on the Pod. Once that is running, edit the Vaultwarden Service so that targetPort is set to "https" (matching the new port just created) instead of "http". As your ingress already has annotations to use HTTPS as the backend protocol, it should now connect to the Vaultwarden Service on port 80 but using HTTPS, and hit the Nginx sidecar on port 8443. Let me know if it doesn't work.
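After that manual edit, the Service's ports section would look something like this (only targetPort changes from the chart default):

```yaml
ports:
  - name: http
    port: 80
    targetPort: https   # was "http" (Rocket on 8080); now the sidecar's 8443
```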

sgissi commented 5 months ago

@JakWai01 Just cleaning up the PRs. Did you get this to work? additionalVolumes is available already.

sgissi commented 5 months ago

It seems like this is no longer needed. Please send a new PR if there are any gaps.