nextcloud / helm

A community-maintained Helm chart for deploying Nextcloud on Kubernetes.

Changing nextcloud.host vars results in crashloopbackoff #617

Open Syntax3rror404 opened 2 months ago

Syntax3rror404 commented 2 months ago

Describe your Issue

Changing the Helm value responsible for the ingress hostname and the Nextcloud-internal host results in a CrashLoopBackOff.

After changing nextcloud.host back to the old value, the deployment comes back online. But that doesn't help here, because I (and I'm sure many others) sometimes need to change the hostname, for example when migrating to another network.

Changing

nextcloud:
  host: mycoolserver.example.com

to

nextcloud:
  host: mycoolserver.newnetwork.com

I need to change this because I want to use my other ingress controller, which can be reached from the WAN.
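
For context, these values are templated through the Argo CD Application shown under "Describe your Environment" below, so in practice the change happens in the wrapper chart's values, roughly like this (just a sketch; the field names follow the .Values.spec.nextcloud.* references in that manifest):

spec:
  nextcloud:
    host: mycoolserver.newnetwork.com  # rendered into nextcloud.host via {{ .Values.spec.nextcloud.host }}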

Logs and Errors

CrashLoopBackOff in the nextcloud container inside the nextcloud pod. Nothing helpful in the logs; they give no indication of the cause. It simply looks like an exit code 1.

Describe your Environment

---
{{- if .Values.spec.nextcloud.enabled }}
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nextcloud
  namespace: {{ .Values.spec.argocdNamespace }}
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: nextcloud
    server: 'https://kubernetes.default.svc'
  project: default
  source:
    chart: nextcloud
    path: '.'
    repoURL: {{ .Values.spec.nextcloud.repoURL }}
    targetRevision: {{ .Values.spec.nextcloud.targetRevision }}
    helm:
      values: |
        nextcloud:
          host: {{ .Values.spec.nextcloud.host }}
          username: admin
          password: {{ .Values.spec.nextcloud.nextcloudAdminPW }}
          containerPort: 80
          datadir: /var/www/html/data
          configs:
            custom-overwrite.config.php: |-
              <?php
              $CONFIG = array (
                'overwrite.cli.url' => 'https://nextcloud.nextcloud.svc.cluster.local',
                'overwriteprotocol' => 'https',
              );
            proxy.config.php: |-
              <?php
              $CONFIG = array (
                'trusted_proxies' => array(
                  0 => '127.0.0.1',
                  1 => '10.0.0.0/8',
                ),
                'forwarded_for_headers' => array('HTTP_X_FORWARDED_FOR'),
              );

        cronjob:
          enabled: true

        persistence:
          enabled: true
          size: 150Gi
          storageClass: "{{ .Values.spec.nextcloud.storageClass }}"

        image:
          flavor: fpm

        nginx:
          enabled: true

        externalDatabase:
          enabled: true
          type: mysql
          host: nextcloud-mariadb.svc
          user: nextcloud
          password: "{{ .Values.spec.nextcloud.mariadbPW }}"
          database: nextcloud

        internalDatabase:
          enabled: false

        mariadb:
          enabled: true
          primary:
            persistence:
              enabled: true
              storageClass: "{{ .Values.spec.nextcloud.storageClass }}"
          auth:
            database: nextcloud
            username: nextcloud
            password: "{{ .Values.spec.nextcloud.mariadbPW }}"
            existingSecret: ""

        ingress:
          enabled: true
          labels: {}
          path: /
          pathType: Prefix
          className: nginx
          annotations:
            # cert-manager.io/cluster-issuer: letsencrypt-prod
            cert-manager.io/cluster-issuer: selfsigned-issuer
            nginx.ingress.kubernetes.io/enable-cors: "true"
            nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For"
            nginx.ingress.kubernetes.io/server-snippet: |-
              server_tokens off;
              proxy_hide_header X-Powered-By;
              rewrite ^/.well-known/webfinger /index.php/.well-known/webfinger last;
              rewrite ^/.well-known/nodeinfo /index.php/.well-known/nodeinfo last;
              rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
              rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
              location = /.well-known/carddav {
                return 301 $scheme://$host/remote.php/dav;
              }
              location = /.well-known/caldav {
                return 301 $scheme://$host/remote.php/dav;
              }
              location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
              }
              location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
                deny all;
              }
              location ~ ^/(?:autotest|occ|issue|indie|db_|console) {
                deny all;
              }

          tls:
            - secretName: nextcloud-tls
              hosts:
                - {{ .Values.spec.nextcloud.host }}

  syncPolicy:
    automated:
      selfHeal: true
      allowEmpty: true
    syncOptions:
    - CreateNamespace=true
{{- end }}
provokateurin commented 2 months ago

I'm not entirely sure what is going on, but after looking at the usage of nextcloud.host it seems we only really use it for NEXTCLOUD_TRUSTED_DOMAINS. The probes use the host too, though, so if the server is not picking up the new host, the probes will fail. Can you check your config.php and see whether trusted_domains is set there? In that case it probably takes precedence over the environment variable, so the new value is never used. If you remove it, it should work again :tm:
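
As a possible workaround while a stale trusted_domains entry is still persisted in config.php, the chart's nextcloud.configs mechanism (already used above for custom-overwrite.config.php and proxy.config.php) could drop in an extra config file that lists both the old and the new hostname explicitly. This is only a sketch of the idea, not something verified here: the file name trusted-domains.config.php is arbitrary, and it assumes Nextcloud's usual merging of config/*.config.php files over config.php, so that the later definition of trusted_domains wins. Checking config.php directly, as suggested above, is still the first step.

nextcloud:
  configs:
    trusted-domains.config.php: |-
      <?php
      $CONFIG = array (
        'trusted_domains' => array (
          0 => 'mycoolserver.example.com',    // old host, kept during the migration
          1 => 'mycoolserver.newnetwork.com', // new host matching nextcloud.host
        ),
      );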

PainOchoco commented 2 days ago

Hi, I have the exact same issue. Were you able to apply the new host setting without the app crash-looping?