Closed — kindrowboat closed this issue 2 years ago
What are the logs from the pod telling you?
kubectl logs -n default deployment/nextcloud
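If the probes are restarting the container before it finishes initializing, the current logs may be empty; it can also help to pull the previous container's logs and the pod events (the pod name below is a placeholder):

kubectl -n default get pods
kubectl -n default logs <nextcloud-pod-name> --previous
kubectl -n default describe pod <nextcloud-pod-name>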
You might try setting startupProbe to enabled while bumping up its timeouts, or disable all three probes, in case the pod simply isn't getting enough time to initialize on the first run before the probes bounce it and everything starts over. This appears to be a common issue folks are running into based on the issues logged here. This is a good comment on how to work around a slow init: https://github.com/nextcloud/helm/issues/259#issuecomment-1203159235
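As a rough sketch, the values.yml overrides for that approach might look like the following; the field names mirror the chart's liveness/readiness probe settings, so double-check them against your chart version before relying on them:

startupProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 60
  successThreshold: 1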
And on the note of ingress, it is not involved with the probes as configured. The probes are pointing to the internal names. Once you have the pod up and stable, you can think about troubleshooting ingress if you still can't connect.
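Once the pod reports Ready, a few commands can help confirm the Service and Ingress are wired up (the default namespace and a release/resource name of nextcloud are assumptions here):

kubectl -n default get pods -w
kubectl -n default get svc nextcloud
kubectl -n default describe ingress nextcloud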
Hey, thank you so much @StephenLasseter for responding. Setting startupProbe.enabled to true and startupProbe.failureThreshold to 60 did the trick. I wonder if something like that should be in the provided values.yml.
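For reference, these overrides can also be passed on the command line at install/upgrade time rather than edited into values.yml (this assumes a release named nextcloud installed from the nextcloud/nextcloud chart):

helm upgrade --install nextcloud nextcloud/nextcloud \
  --namespace default \
  --set startupProbe.enabled=true \
  --set startupProbe.failureThreshold=60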
I'm attempting to run Nextcloud using this Helm chart on a self-hosted MicroK8s cluster. I have an external load balancer (running Caddy) pointing the desired Nextcloud URL at the cluster. After I run helm install, when I try to access Nextcloud through the load balancer, I get a 503. When I look at the status of the nextcloud pod, I see that it never became healthy and the health check is getting a "connect: connection refused". I imagine I somehow have my ingress or service options set up incorrectly, but I'm not sure. Any help is appreciated.
helm install output
kubectl describe output
values.yml
Most of these values are the defaults, with the exception of enabling ingress, setting the public URL, and enabling persistent storage.
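For context, a minimal sketch of those non-default settings; the exact keys (ingress.enabled, nextcloud.host, persistence.enabled) are assumptions based on common chart conventions and may differ from the actual values.yml:

ingress:
  enabled: true
nextcloud:
  host: cloud.example.com
persistence:
  enabled: true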