Values for my external PGSQL (this is then passed to a "Cluster" CRD provided by the https://cloudnative-pg.io/ operator):

```yaml
instances: 2
nodeSelector:
  worker: core
initdb:
  enabled: true
  appUserPassword: {{ .StateValues.Global.password.mailu }}
  database: mailu
  owner: app
  postInitSQL:
    - CREATE DATABASE roundcube
    - GRANT ALL PRIVILEGES ON DATABASE "roundcube" TO "app"
    - ALTER DATABASE "roundcube" OWNER TO "app"
    - ALTER SCHEMA public OWNER TO "app"
```
This does create a PGSQL cluster with two databases, roundcube and mailu, both owned by "app", whose credentials are stored in a secret that I then use in the mailu values below (ignore the {{ }} template syntax, we render those at deploy time):
I found the culprit: if I exec into the admin Pod before it restarts, launch `python3 /start.py`, and then interrupt it once (Ctrl+C), I see an error about the PGSQL connection and a malformed PGSQL connection string.

Basically, I have a special character in my password (an `@`), and I guess it's messing with the construction of the connection string, which may be another issue in itself? When I remove this character I can get further with my deployment.

I still find it weird that I couldn't see those error logs with `kubectl logs` on the Pod; they should definitely be there.
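For anyone hitting the same thing, here is a minimal Python sketch (not Mailu's actual startup code; all names and values below are hypothetical) of why an unescaped `@` in the password breaks a libpq/SQLAlchemy-style connection URI, and how percent-encoding the password component avoids it:

```python
from urllib.parse import quote

# All values below are made up for illustration; they are not from the issue.
user = "app"
password = "s3cret@pass"          # raw password containing an '@'
host = "mailu-postgresql-rw"      # hypothetical service name
port = 5432
database = "mailu"

# Naive string interpolation: the extra '@' makes the URI ambiguous, and the
# parser splits the userinfo from the host at the wrong place.
broken = f"postgresql://{user}:{password}@{host}:{port}/{database}"

# Percent-encoding the password ('@' -> '%40') keeps the URI well-formed.
safe_password = quote(password, safe="")
fixed = f"postgresql://{user}:{safe_password}@{host}:{port}/{database}"

print(broken)  # postgresql://app:s3cret@pass@mailu-postgresql-rw:5432/mailu
print(fixed)   # postgresql://app:s3cret%40pass@mailu-postgresql-rw:5432/mailu
```

Until the connection string is built with the password escaped like this, the simpler workaround is the one described above: avoid URI-reserved characters such as `@` in the database password.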
**Describe the bug**
The mailu-admin Pod keeps restarting on deployment because its liveness and readiness probes fail the health check.
**Environment**
**Additional context**
The Pod doesn't log anything: no error code or any other information. I tried increasing the probe timeouts from 10s to 60s, to no effect.
I am using the latest chart version (1.5.0), and did not change the appVersion, which would be 2.0.30 by default IIRC.
I am letting the mailu chart deploy its own Redis cluster. I am providing the PGSQL cluster myself, though, and am therefore using the externalDatabase config section. All other Pods seem fine: they are running and logging what you would expect at this point (various daemons such as postfix and dovecot are started). The only other Pod in the same state is rspamd (from memory, not 100% sure), which does log a nominal message indicating that it's waiting for the admin Pod to start.
I will post my complete values file once I have more time. In the meantime, is there anything I can do to make the Pod log something and find out why the probes are failing? I tried setting the logLevel to DEBUG in the admin section of the values, but I got nothing more.