Hi @jr-dimedis,
I can partly reproduce your issue locally. I created a PR that will address the error message regarding the timeout for KongV1beta1KongUpstreamPolicy resources. Once it is merged, you should no longer experience it.
On the other hand, the Kong pods should pass the "Waiting for database connection to succeed" state and create the necessary database and schemas. Maybe the postgres pods do not reach a ready state in your cluster for some reason. Could you please list the running pods and their logs, as shown below?
$ kubectl get pods -n kong
NAME                    READY   STATUS    RESTARTS   AGE
kong-6496c57cc6-mszwt   2/2     Running   0          52m
kong-6496c57cc6-qxvpn   2/2     Running   0          52m
kong-postgresql-0       1/1     Running   0          52m
$ kubectl logs kong-postgresql-0 -n kong
postgresql 16:51:16.22 INFO ==>
postgresql 16:51:16.22 INFO ==> Welcome to the Bitnami postgresql container
postgresql 16:51:16.23 INFO ==> Subscribe to project updates by watching https://github.com/bitnami/containers
postgresql 16:51:16.23 INFO ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
postgresql 16:51:16.23 INFO ==>
postgresql 16:51:16.25 INFO ==> ** Starting PostgreSQL setup **
postgresql 16:51:16.32 INFO ==> Validating settings in POSTGRESQL_* env vars..
postgresql 16:51:16.32 INFO ==> Loading custom pre-init scripts...
postgresql 16:51:16.33 INFO ==> Initializing PostgreSQL database...
postgresql 16:51:16.35 INFO ==> pg_hba.conf file not detected. Generating it...
postgresql 16:51:16.35 INFO ==> Generating local authentication configuration
postgresql 16:51:17.45 INFO ==> Starting PostgreSQL in background...
postgresql 16:51:17.75 INFO ==> Changing password of postgres
postgresql 16:51:17.76 INFO ==> Creating user kong
postgresql 16:51:17.82 INFO ==> Granting access to "kong" to the database "kong"
postgresql 16:51:17.85 INFO ==> Setting ownership for the 'public' schema database "kong" to "kong"
postgresql 16:51:17.92 INFO ==> Configuring replication parameters
postgresql 16:51:17.95 INFO ==> Configuring synchronous_replication
postgresql 16:51:17.95 INFO ==> Configuring fsync
postgresql 16:51:18.05 INFO ==> Stopping PostgreSQL...
waiting for server to shut down.... done
server stopped
postgresql 16:51:18.16 INFO ==> Loading custom scripts...
postgresql 16:51:18.16 INFO ==> Enabling remote connections
postgresql 16:51:18.22 INFO ==> ** PostgreSQL setup finished! **
postgresql 16:51:18.24 INFO ==> ** Starting PostgreSQL **
2023-12-21 16:51:18.253 GMT [1] LOG: pgaudit extension initialized
2023-12-21 16:51:18.257 GMT [1] LOG: starting PostgreSQL 14.10 on aarch64-unknown-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2023-12-21 16:51:18.258 GMT [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2023-12-21 16:51:18.258 GMT [1] LOG: listening on IPv6 address "::", port 5432
2023-12-21 16:51:18.259 GMT [1] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2023-12-21 16:51:18.262 GMT [150] LOG: database system was shut down at 2023-12-21 16:51:18 GMT
2023-12-21 16:51:18.320 GMT [1] LOG: database system is ready to accept connections
Regarding this comment:

"I have to specify the password twice? (looks like a bug to me)"

It is not a bug 😄. password holds the password for the non-privileged database user (in this case, a user named kong is created), while postgres-password holds the credentials for the privileged postgres user. It is recommended that you set different values for them.
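For illustration, pre-creating such a secret could look roughly like this. This is a sketch only: the secret name kong-postgresql, the namespace, and both values are placeholder assumptions; the key names password and postgres-password are the ones discussed above.

# Sketch, with placeholder name and values:
# "password"          -> the non-privileged "kong" database user
# "postgres-password" -> the privileged "postgres" superuser
$ kubectl create secret generic kong-postgresql -n kong \
    --from-literal=password='changeme-kong-user' \
    --from-literal=postgres-password='changeme-superuser'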
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Hi @joancafom,
sorry for the delay, I was off for two weeks.
Thanks for the clarification regarding the two different passwords.
I'll try to reproduce the issue and provide the requested log output, although I think the postgresql pod was ready, because the manually started migration job completed successfully.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Hi, I'm encountering the same issue, with the Kong migrate job not being dispatched.
A quick fix is to comment out these Helm chart values (see below). My theory is that since PostgreSQL is ready, the two Kong replicas keep waiting for the migrate job, while the migrate job is waiting for its Helm hook:
# annotations:
# helm.sh/hook: post-install, pre-upgrade
# helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
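With those annotations commented out, the job is rendered as a regular resource and created together with the rest of the release instead of waiting for the post-install hook. To check whether the migration job was actually created and ran, something along these lines could help (the job name below is a placeholder; use whatever name the chart generated in your release):

$ kubectl get jobs -n kong
$ kubectl logs -n kong job/<release-name>-kong-migrate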
Name and Version
bitnami/kong 10.1.3
What architecture are you using?
amd64
What steps will reproduce the bug?
Deploy the Helm chart with the values and secrets provided below.
I am doing this with a K8s cluster on the Hetzner (www.hetzner.de) platform.
Are you using any custom parameters or values?
values.yaml:
Secret created with Terraform:
What is the expected behavior?
Automatic deployment with a bootstrapped PostgreSQL database.
What do you see instead?
Kong pods do not start and are stuck with this message:
Waiting for database connection to succeed
Additional information
I could bootstrap the database on my own by starting a shell in a kong pod and executing this:
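(The exact snippet is not preserved in this thread. Kong's standard CLI command for initializing a fresh database is kong migrations bootstrap, so it was presumably something along these lines; the pod name is a placeholder.)

$ kubectl exec -it <kong-pod> -n kong -- bash
# inside the pod, which already has the database connection settings configured:
$ kong migrations bootstrap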
Then the pods do start. I could also see that a migration job was started, which probably did nothing on this fresh database. Unfortunately I do not have any logs from it, but it succeeded.
Kong Ingress Controller pods do start as well, and it's working with a simple Hello World app + corresponding Ingress configuration.
But the controller pods go into a crash-restart loop with this error message:
I don't know whether this is related to the bootstrapping problem or is a separate issue.