passbolt / charts-passbolt

Helm charts to run Passbolt on Kubernetes. No strings attached charts to run the open source password manager for teams!
https://passbolt.com
GNU Affero General Public License v3.0

Passbolt blank page after authentication #66

Closed: SmartGuyy closed this issue 8 months ago

SmartGuyy commented 8 months ago

Hello,

I'm trying to log in with the first user I created with:

kubectl exec -it passbolt-depl-srv-xxx -n infrastructure -- su -c "bin/cake passbolt register_user -u xxx@xxx.com -f admin -l admin -r admin" -s /bin/bash www-data

Unfortunately, after a few trials and after looking at the logs (not much information there), I still get a blank page. I already searched the community forums and found that most people resolved this by simply setting the "APP_FULL_BASE_URL" variable, but I have had it set from the beginning.

Note: I'm exposing the Kubernetes service with a Traefik IngressRoute.

My Passbolt configuration is the following (any Passbolt parameter not listed here keeps its default value):

passboltEnv:
  plain:
    APP_FULL_BASE_URL: https://${HELM_CHART_NAME}.${PUBLIC_URL_SUFFIX}
    # -- Kubectl download command
    KUBECTL_DOWNLOAD_CMD: ${KUBECTL_DOWNLOAD_CMD}
    # -- Configure passbolt default email from
    EMAIL_DEFAULT_FROM: ${EMAIL_DEFAULT_FROM}
    # -- Configure passbolt default email host
    EMAIL_TRANSPORT_DEFAULT_HOST: ${EMAIL_TRANSPORT_DEFAULT_HOST}
    EMAIL_TRANSPORT_DEFAULT_PORT: ${EMAIL_TRANSPORT_DEFAULT_PORT}
    # -- Disable the SMTP settings endpoints in the UI
    PASSBOLT_SECURITY_SMTP_SETTINGS_ENDPOINTS_DISABLED: "true"
    DATASOURCES_QUOTE_IDENTIFIER: "true"
    PASSBOLT_REGISTRATION_PUBLIC: "false"
    PASSBOLT_SSL_FORCE: "false"
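A quick way to confirm that the variable actually reaches the running container is to inspect its environment; a sketch, where the deployment name and namespace are placeholders taken from this thread:

```shell
# Sketch: verify APP_FULL_BASE_URL is set inside the running pod.
# "passbolt-depl-srv" and "infrastructure" are placeholder names from this thread.
kubectl exec -n infrastructure deploy/passbolt-depl-srv -- \
  env | grep APP_FULL_BASE_URL
```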

--> It looks like, since the container is executed as root, I can't bypass this and perform a manual health check.

When trying to log in, I only get HTTP 200 responses. I tried resetting my cookies and restarting, but that didn't help.
![Screenshot from 2023-11-23 12-29-19](https://github.com/passbolt/charts-passbolt/assets/16286196/331d4b6c-6732-43a2-834a-6a0b94d6e8a8)
Tecnobutrul commented 8 months ago

Hi @SmartGuyy,

This issue can happen when the ingress controller doesn't use the right HTTP protocol (HTTPS) when forwarding traffic to the Passbolt containers. In the case of Traefik, you can fix that by adding an annotation on the ingress rule:

ingress:
  # -- Enable passbolt ingress
  enabled: true
  # -- Configure passbolt ingress annotations
  annotations: 
    traefik.ingress.kubernetes.io/router.entrypoints: websecure

Regarding the healthcheck command, you can run it as www-data like this:

su -c "source /etc/environment; bin/cake passbolt healthcheck" -s /bin/bash www-data

Hope these tips help you to get it running.

SmartGuyy commented 8 months ago

Thank you again for your fast reply, I appreciate it! The health check works correctly with that command. As for the IngressRoute, I already use the websecure entrypoint (we already use Traefik IngressRoutes over HTTPS for all our services/apps):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: passbolt-public
  namespace: infrastructure
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`passbolt.xxx.com`)
      middlewares:
        - name: headers
          namespace: infrastructure
        - name: whitelist-xxx
          namespace: infrastructure
      services:
        - kind: Service
          name: passbolt
          namespace: infrastructure
          port: 80
  tls:
    options:
      name: tlsoptions
      namespace: infrastructure
    secretName: ssl-public-certificate
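As an aside, when more than one replica is kept without shared session storage, Traefik can also pin each client to a single pod via sticky sessions. This is only a sketch of the IngressRoute `sticky` option, not something used in this thread, and the cookie name is arbitrary:

```yaml
# Sketch (not from this thread): sticky sessions on the IngressRoute service,
# so all requests from one browser land on the same Passbolt pod.
services:
  - kind: Service
    name: passbolt
    namespace: infrastructure
    port: 80
    sticky:
      cookie:
        name: passbolt_affinity
        secure: true
        httpOnly: true
```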

SmartGuyy commented 8 months ago

Ok i just found the issue : https://github.com/passbolt/charts-passbolt/blob/dcc4cf0fb8ba8ebc80b23b9258a58abea171df96/templates/deployment.yaml#L16

In my case this should be set to 1: it was set to 2 (the default) on my side, which led to part of my requests going to one pod and the rest to the other.

I noticed it while trying to authenticate and watching the logs: I couldn't find all the requests in one place; some landed on one pod and the rest on the other...

Now I'm able to see the UI correctly!
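The fix can also be expressed as a values override instead of editing the template; a sketch, assuming the chart exposes the deployment's replica count as a `replicaCount` value:

```yaml
# Sketch: run a single Passbolt pod so all requests (and the session)
# stay on one container. `replicaCount` is an assumed values key.
replicaCount: 1
```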

Thanks @Tecnobutrul

Tecnobutrul commented 8 months ago

It is set to 2 by default because, by default, we also deploy a Redis cluster to handle shared sessions. Since you disabled Redis, you must deploy just one container. I'm glad you managed to fix it.
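Conversely, to keep two or more replicas, the bundled Redis must stay enabled so sessions are shared between pods; a sketch, assuming `redis.enabled` is the subchart's toggle in the chart values:

```yaml
# Sketch: keep the Redis subchart enabled so sessions are shared
# and the deployment can safely run more than one replica.
# `redis.enabled` and `replicaCount` are assumed values keys.
redis:
  enabled: true
replicaCount: 2
```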