Closed deepaksood619 closed 5 years ago
I have the same problem. The upgrade does not work, no matter if I use exactly the same values.yaml: STATUS: FAILED even with --dry-run --debug.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
@deepaksood619 Yes we are experiencing the same problem on updating/upgrading the helm app.
Actually I tried to switch from NodePort to ClusterIP and wanted to upgrade, but the upgrade hangs with this error:
```
Run with --v (verbose) or --vv (debug) for more details
```
We are using Helm Chart Kong version 0.10.2.
Kubectl version:
```
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.7", GitCommit:"6f482974b76db3f1e0f5d24605a9d1d38fad9a2b", GitTreeState:"clean", BuildDate:"2019-03-25T02:52:13Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
```
Update: we still experience the same issue with Helm Chart Kong version 0.10.3.
The root of this problem is here: https://github.com/helm/charts/issues/5167
You can only avoid this problem by specifying a postgres password yourself as part of your installation or upgrade; otherwise a new password is generated on every upgrade and Kong can no longer talk to psql.
This is really a limitation in Helm and not Kong's Helm chart.
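As a concrete sketch of the workaround described above (the password value below is a placeholder you would replace with your own; `postgresql.postgresqlPassword` is the key the bundled postgresql dependency reads, per the comments later in this thread):

```yaml
# Sketch: pin the password so every upgrade reuses it instead of
# letting the chart generate a new random one.
# "changeme-fixed-password" is a placeholder, not a recommended value.
postgresql:
  postgresqlPassword: changeme-fixed-password
env:
  pg_password: changeme-fixed-password
```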
Are there any news on this? How do you work with this limitation? Can I set my password somewhere in plain text to get upgrades working somehow? Is there another way to work around this issue?
We cannot use the helm charts like this.
@rolandg You can set the password (and other values for the postgresql dependency) in your values file:
```yaml
postgresql:
  postgresqlPassword: K6W9qBFzTQPj8xQiSouA
```
Thanks @wrdls. I tried this before, but I used env.pg_password. I'm not sure how this is different.
Anyway, I found out that if you set up your own postgres and disable the one from the Kong charts, then upgrades are possible.
Hi.
I'm having this issue even when running for example:
```
helm install stable/kong --name kong --namespace api-gateway --set env.pg_database=kong --set env.pg_user=kong --set env.pg_password=0000000 --set postgresql.postgresqlPassword=0000000
```
Still, in the end, I keep getting the following:
```
2019-06-18 09:22:41.205 GMT [1635] FATAL: password authentication failed for user "kong"
2019-06-18 09:22:41.205 GMT [1635] DETAIL: Password does not match for user "kong".
Connection matched pg_hba.conf line 95: "host all all 0.0.0.0/0 md5"
Run with --v (verbose) or --vv (debug) for more details
waiting for db
Error: [PostgreSQL error] failed to retrieve server_version_num: FATAL: password authentication failed for user "kong
```
Any ideas?
@ammggm I needed to disable postgres with --set postgresql.enabled=false
here are my answers:
```yaml
---
image:
  tag: "latest"
proxy:
  http:
    hostPort: "80"
  tls:
    hostPort: "443"
  type: "ClusterIP"
admin:
  type: "ClusterIP"
  useTLS: true
postgresql:
  enabled: false
env:
  pg_password: "00000000"
  pg_host: "kong-psql.kong.svc.cluster.local"
  pg_database: "kong"
  pg_user: "kong"
  pg_port: 5432
nodeSelector:
  deploy/kong-proxy: "true"
replicaCount: "2"
```
But in that case it will not create the pod for psql. What I don't see working is that the user and password I'm setting are simply ignored.
So, in the end, I managed to get it running if I don't give a --name during the helm install command.
It seems strange to me that the whole helm install fails unless you keep the randomly generated name.
So what is the definitive fix when "--name" is specified during the helm install?
The error shouldn't really happen if --name is specified.
A potential solution here is to create a Kubernetes secret for the postgres password and then pass it to both Kong and the postgresql Helm chart (via the postgresql.postgresqlPassword parameter).
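A sketch of that idea, assuming the secret name `kong-postgresql` and key `postgresql-password` that the chart's templates reference elsewhere in this thread; the password value is a placeholder:

```yaml
# Hypothetical pre-created secret. The name and key must match what the
# chart's secretKeyRef expects; the value here is only a placeholder.
apiVersion: v1
kind: Secret
metadata:
  name: kong-postgresql
type: Opaque
stringData:
  postgresql-password: changeme-fixed-password
```

Whether the chart reuses an existing secret or tries to generate its own depends on the chart version, so verify this against your release before relying on it.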
I attempted to set both env.pg_password and postgresql.postgresqlPassword in values.yaml (and via --set), and I still get the same failure when specifying --name for the Helm release.
```yaml
# Specify Kong configurations
# Kong configurations guide https://getkong.org/docs/latest/configuration/
env:
  database: postgres
  proxy_access_log: /dev/stdout
  admin_access_log: /dev/stdout
  admin_gui_access_log: /dev/stdout
  portal_api_access_log: /dev/stdout
  proxy_error_log: /dev/stderr
  admin_error_log: /dev/stderr
  admin_gui_error_log: /dev/stderr
  portal_api_error_log: /dev/stderr
  pg_database: kong
  pg_user: kong
  pg_password: kong

# PostgreSQL chart configs
postgresql:
  enabled: true
  postgresqlUsername: kong
  postgresqlDatabase: kong
  postgresqlPassword: kong
  service:
    port: 5432
```
Error:
```
k logs pod/kong-kong-init-migrations-db4zn
Error: [PostgreSQL error] failed to retrieve server_version_num: FATAL: password authentication failed for user "kong"
Run with --v (verbose) or --vv (debug) for more details
```
When I look at the deployment, I see this:
```yaml
env:
- name: KONG_PG_HOST
  value: kong-postgresql
- name: KONG_PG_PORT
  value: "5432"
- name: KONG_PG_PASSWORD
  valueFrom:
    secretKeyRef:
      key: postgresql-password
      name: kong-postgresql
- name: KONG_ADMIN_ACCESS_LOG
  value: /dev/stdout
- name: KONG_ADMIN_ERROR_LOG
  value: /dev/stderr
- name: KONG_ADMIN_GUI_ACCESS_LOG
  value: /dev/stdout
- name: KONG_ADMIN_GUI_ERROR_LOG
  value: /dev/stderr
- name: KONG_DATABASE
  value: postgres
- name: KONG_PG_DATABASE
  value: kong
- name: KONG_PG_HOST
  value: kong-postgresql.default.svc.cluster.local
- name: KONG_PG_PASSWORD
  value: kong
- name: KONG_PG_USER
  value: kong
- name: KONG_PORTAL_API_ACCESS_LOG
  value: /dev/stdout
- name: KONG_PORTAL_API_ERROR_LOG
  value: /dev/stderr
- name: KONG_PROXY_ACCESS_LOG
  value: /dev/stdout
- name: KONG_PROXY_ERROR_LOG
  value: /dev/stderr
```
^^ KONG_PG_PASSWORD shows up twice
When I look at the Secret, I see this:
```
k get secret/kong-postgresql -o yaml
```
```yaml
apiVersion: v1
data:
  postgresql-password: ??????????
kind: Secret
metadata:
  creationTimestamp: 2019-07-01T17:36:34Z
  labels:
    app: postgresql
    chart: postgresql-3.9.5
    heritage: Tiller
    release: kong
  name: kong-postgresql
  namespace: default
  resourceVersion: "44221402"
  selfLink: /api/v1/namespaces/default/secrets/kong-postgresql
  uid: c526cec3-9c26-11e9-88ba-02756a0476ab
type: Opaque
```
^^ Password does not match
Looks like it is pulling this from various YAML files in the ~/templates directory:
```yaml
- name: KONG_PG_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ template "kong.postgresql.fullname" . }}
      key: postgresql-password
```
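If you want to check whether the stored secret actually matches the password Kong is using, remember that Secret data values are base64-encoded and need decoding first. A minimal sketch (the encoded string below is just the example value "password", not a real credential; the `kubectl` line in the comment is the usual way to fetch the real encoded value and assumes the secret name from this thread):

```shell
# Kubernetes Secret data values are base64-encoded; decode before comparing.
encoded="cGFzc3dvcmQ="                        # example: base64 of "password"
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"                               # prints: password

# Against a real cluster you would fetch the encoded value first, e.g.:
#   kubectl get secret kong-postgresql -o jsonpath='{.data.postgresql-password}'
```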
Upgrading the Helm release causes a reroll of the postgres password in the generated secret.
This issue is being automatically closed due to inactivity.
I am facing the same issue.
Same issue after I installed with kong-operator.
/cc @gyliu513 @zxdiscovery
Worked around the issue with the following parameters:
```yaml
# workaround for https://github.com/helm/charts/issues/12575
env:
  pg_password: password
  database: postgres
postgresql:
  enabled: true
  postgresqlUsername: kong
  postgresqlPassword: password
  postgresqlDatabase: kong
  service:
    port: 5432
```
Same issue here. I resolved it by rolling back to the previous release using helm and getting the old password out. Then I upgraded my Kong release again with that password, and it worked. I used the Kong Helm chart from Bitnami, and the solution is mentioned in their article: https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/
Once I had my old password, I ran the following command with that same password:
```
helm upgrade kong --set service.exposeAdmin=true --set service.type=LoadBalancer,postgresql.postgresqlPassword=dNZWTMNPvz bitnami/kong
```
Is this a request for help?: Yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes:
Which chart: stable/kong
What happened: After helm upgrade, the init container has errors.
What you expected to happen: The init container should have finished and Kong should have started.
How to reproduce it (as minimally and precisely as possible): Changed the Kong image tag to 1.0.3 from 1.0.2 in values.yaml, then ran helm upgrade.
Anything else we need to know: After a purge delete and redeploy, it works seamlessly. The same problem occurs with any other update to values.yaml; helm upgrade never seems to work.