nextcloud / helm

A community-maintained Helm chart for deploying Nextcloud on Kubernetes.
GNU Affero General Public License v3.0

AH00558: apache2: Could not reliably determine the server's fully qualified domain name #113

Open · steled opened this issue 3 years ago

steled commented 3 years ago

Describe the bug: After applying the Helm chart, I see the following error in the logs:

$ kubectl logs -n nextcloud nextcloud-7bd4647bbf-kc4wp -f
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.0.125. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.0.125. Set the 'ServerName' directive globally to suppress this message

Version of Helm and Kubernetes:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/arm64"}

$ helm version
version.BuildInfo{Version:"v3.5.3", GitCommit:"041ce5a2c17a58be0fcd5f5e16fb3e7e95fea622", GitTreeState:"dirty", GoVersion:"go1.15.8"}

Which chart: nextcloud:2.6.1

What happened: The error AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.0.125. occurs.

What you expected to happen: Normal startup of Nextcloud without an error message.

How to reproduce it (as minimally and precisely as possible): apply the Helm chart.
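For completeness, a minimal install along these lines should reproduce it (the repo URL is the chart's published one; release name and namespace are arbitrary):

$ helm repo add nextcloud https://nextcloud.github.io/helm/
$ helm install nextcloud nextcloud/nextcloud --namespace nextcloud --create-namespace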

Anything else we need to know: As a workaround, I can add the following lines to my values.yaml file:

lifecycle:
  postStartCommand: ["/bin/sh", "-c", "echo \"ServerName 172.16.4.35\" | tee -a /etc/apache2/apache2.conf"]

But this is not how I'd expect it to work.
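For reference, the same workaround can point ServerName at the Nextcloud hostname instead of a raw IP. A minimal sketch, where nextcloud.example.com is a placeholder for whatever you set under nextcloud.host:

lifecycle:
  postStartCommand:
    - "/bin/sh"
    - "-c"
    # append a global ServerName so Apache stops logging AH00558;
    # nextcloud.example.com is a placeholder, not a chart default
    - 'echo "ServerName nextcloud.example.com" >> /etc/apache2/apache2.conf'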

magsol commented 2 years ago

I have also encountered this error. I am using nextcloud:21.0.4-apache arm64 image on a k3s cluster running on 5x Raspberry Pis. kubectl version 1.21.5, helm version 3.6.3.

magsol commented 2 years ago

@steled What IP address are you putting in for ServerName (172.16.4.35)? Is that the IP of your load balancer? Or the external IP?

steled commented 2 years ago

This is the IP for Nextcloud that I configured at the load balancer.

magsol commented 2 years ago

@steled Sorry, I'm new to this: where did you configure this IP for the Nextcloud load balancer?

hannesknutsson commented 2 years ago

Did you solve this? My Nextcloud on k8s, deployed using the official Helm chart version 2.11.3, has failed and is displaying the above error in the logs:

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.1.227. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.1.227. Set the 'ServerName' directive globally to suppress this message
[Mon Feb 21 20:51:55.205558 2022] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.51 (Debian) PHP/8.0.14 configured -- resuming normal operations
[Mon Feb 21 20:51:55.205593 2022] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'

I haven't changed anything in my values.yaml for weeks, so I have no idea why it is suddenly failing.

magsol commented 2 years ago

@hannesknutsson I have not, though in all fairness I haven't been able to come back to this since my last update; work has been overwhelming.

steled commented 2 years ago

@magsol sorry for my very late response...

Below is a snippet of the service configuration from my values.yaml file:

service:
  type: LoadBalancer
  port: 8080
  loadBalancerIP: 172.16.4.35
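To double-check which address the Service actually received, something like this shows it in the EXTERNAL-IP column (namespace and Service name are assumptions; adjust to your release):

$ kubectl -n nextcloud get svc nextcloud -o wide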

J0han3s commented 2 years ago

I have the same issue.

DreamingRaven commented 1 year ago

I have the same issue too:

with values:

startupProbe:
  enabled: true

hpa:
  enable: true
  minPods: 3
  maxPods: 5

image:
  repository: nextcloud
  tag: 22-apache
  pullPolicy: Always

nextcloud:
  host: nextcloud.<**MYDOMAIN**>
  existingSecret:
    enabled: true
    secretName: nextcloud-user
    usernameKey: username
    passwordKey: password

internalDatabase:
  enabled: false

externalDatabase:
  enabled: true
  host: "nextcloud-mariadb-helm:3306"
  existingSecret:
    secretName: mariadb-user
    usernameKey: mariadb-username
    passwordKey: mariadb-password

persistence:
  enabled: true
  size: 3T
  storageClass: nextcloud-main
and the pod logs:

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.85.191.90. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.85.191.90. Set the 'ServerName' directive globally to suppress this message
[Thu Oct 06 11:40:31.646512 2022] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.54 (Debian) PHP/8.0.22 configured -- resuming normal operations
[Thu Oct 06 11:40:31.646532 2022] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
192.168.1.134 - - [06/Oct/2022:11:41:08 +0000] "GET /status.php HTTP/1.1" 500 425 "-" "kube-probe/1.25"
192.168.1.134 - - [06/Oct/2022:11:41:18 +0000] "GET /status.php HTTP/1.1" 500 425 "-" "kube-probe/1.25"
...

Is there a nicer way to pass in the server name than a postStartCommand hook? Also, why did this only start happening recently? Has something changed in the source?

EDIT: I realize now that this issue was reported ages ago, so maybe it's not so recent after all.

DreamingRaven commented 1 year ago

Whoops, never mind, ignore me. It was actually MariaDB; the ServerName warning was a red herring. My MariaDB was pinned to a version Bitnami had dropped; now that it's upgraded, everything works fine.
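In case it helps anyone else hitting the same thing: the failure mode here was an image tag the registry no longer served. A minimal sketch of pinning the bundled MariaDB subchart to a tag that still exists (the key layout assumes the chart's Bitnami MariaDB dependency; the tag itself is a placeholder to verify against the published tags):

mariadb:
  enabled: true
  image:
    # placeholder tag; pick one that is still published for bitnami/mariadb
    tag: "10.6"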

jessebot commented 1 year ago

This issue can be caused by a lot of things, and there are a few different people here needing help. If anyone is still having trouble, please post as much of your values.yaml as possible, as well as the Helm chart version you're using; it would help a lot in troubleshooting this.
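In the meantime, a quick triage sketch that usually separates the harmless AH00558 warning from a real startup failure (namespace and release name are assumptions; adjust to your install):

$ kubectl -n nextcloud logs deploy/nextcloud --tail=100
$ kubectl -n nextcloud exec deploy/nextcloud -- su -s /bin/sh www-data -c "php occ status"

If occ reports the instance as installed, the AH00558 lines on their own are cosmetic.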

Regardless, I'll leave this open for a good while to allow @steled, @magsol, @hannesknutsson, and @gkchim to follow up.

steled commented 1 year ago

@jessebot I'm using the helm chart version 3.1.2, see below my values.yaml file:

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: duckdns-webhook-cert-manager-webhook-duckdns-${environment}
    nginx.ingress.kubernetes.io/proxy-body-size: 32M
    nginx.ingress.kubernetes.io/server-snippet: |-
      location = /.well-known/carddav {
        return 301 $scheme://$host/remote.php/dav;
      }
      location = /.well-known/caldav {
        return 301 $scheme://$host/remote.php/dav;
      }      
  tls:
    - secretName: nextcloud-tls
      hosts:
        - ${nextcloud_domain}

#securityContext:
#  runAsGroup: 1000
#  runAsNonRoot: true
#  runAsUser: 1000

lifecycle:
  postStartCommand: ["/bin/sh", "-c", "echo \"ServerName ${ip_address}\" | tee -a /etc/apache2/apache2.conf"]

nextcloud:
  host: ${nextcloud_domain}

  existingSecret:
    enabled: true
    secretName: nextcloud-secret

  configs:
    custom.config.php: |-
      <?php
      $CONFIG = array (
        'encryption.legacy_format_support' => false,
        'default_phone_region' => 'DE',
      );      

  extraEnv:
    - name: OVERWRITEHOST
      value: ${nextcloud_domain}
    - name: OVERWRITEPROTOCOL
      value: https
    - name: TRUSTED_PROXIES
      value: ${nextcloud_proxies}

  mail:
    enabled: true
    fromAddress: ${mail_fromaddress}
    domain: ${mail_domain}
    smtp:
      host: ${smtp_host}
      secure: ${smtp_secure}
      port: ${smtp_port}
      authtype: ${smtp_authtype}

cronjob:
  enabled: true

internalDatabase:
  enabled: false

externalDatabase:
  enabled: true
  type: postgresql
  host: nextcloud-postgresql.nextcloud.svc.cluster.local:5432
  user: ${externaldatabase_user}
  password: ${externaldatabase_password}
  database: ${externaldatabase_database}

postgresql:
  auth:
    postgresPassword: ${postgresql_postgresqladminpassword}
    username: ${postgresql_postgresqlusername}
    password: ${postgresql_postgresqlpassword}
    database: ${postgresql_postgresqldatabase}
  image:
    repository: postgres
    # check version here: 
    # - https://github.com/nextcloud/helm/blob/master/charts/nextcloud/Chart.yaml
    # - https://github.com/bitnami/charts/blob/master/bitnami/postgresql/Chart.yaml
    # - https://hub.docker.com/_/postgres?tab=tags
    tag: 11.6
    pullPolicy: IfNotPresent
  enabled: true
  primary:
    persistence:
      enabled: true
      existingClaim: nextcloud-postgresql-pvc
    extraVolumes:
      - name: backup
        persistentVolumeClaim:
          claimName: nextcloud-backup-pvc
    extraVolumeMounts:
      - name: backup
        mountPath: /backup

service:
  type: LoadBalancer
  port: 8080
  loadBalancerIP: ${ip_address}

persistence:
  enabled: true
  storageClass: "-"
  existingClaim: nextcloud-server-pvc
  accessMode: ReadWriteOnce
  size: 8Gi

livenessProbe:
  initialDelaySeconds: 240

readinessProbe:
  initialDelaySeconds: 240

jessebot commented 1 year ago

@steled, at first glance, everything seems okay. Could you clarify how you're getting IPs for your LoadBalancer service?

For my own service config I'm using the following, with the nginx-ingress-controller in front and metallb actually handing out IPs from a configured pool:

  service:
    type: ClusterIP
    port: 8080

There is a bit of info in the readme about preserving source IP, but I'm unsure if this is related. Off the top of my head, it looks like it wants to use the k8s internal IPs rather than the node IPs, but I'd need to look more into this and test it to be sure.
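On the source IP point: the standard Kubernetes field for preserving the client address on a LoadBalancer Service is externalTrafficPolicy. A sketch of the rendered Service rather than chart values (name, labels, and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: nextcloud
spec:
  type: LoadBalancer
  # Local preserves the original client source IP, but only nodes
  # that run a nextcloud pod will answer for the LB address
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: nextcloud
  ports:
    - port: 8080
      targetPort: 80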

@provokateurin or @tvories, have either of you seen this with Apache? I'm using nginx for my setup, so my experience is geared more towards that web server.

steled commented 1 year ago

@jessebot:

I'm also using metallb.
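For anyone comparing setups: with metallb, a loadBalancerIP like 172.16.4.35 is only honored if it falls inside a configured pool. A minimal sketch, assuming metallb's newer CRD-based configuration (pool name and address range are placeholders):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: nextcloud-pool
  namespace: metallb-system
spec:
  addresses:
    - 172.16.4.30-172.16.4.40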

MohammedNoureldin commented 7 months ago

Just for documentation: I had the exact same issue unexpectedly. Manipulating the Nextcloud configs did not help. I removed the whole Helm release along with its PVCs and redeployed, and it works again.

Of course, deleting everything is not an option in production; maybe something just went wrong with the data in the database during my development, which is why redeploying helped.

kahirokunn commented 6 months ago

I got the same error.