Closed jc-lab closed 4 years ago
Hi @jc-lab ,
I recommend quoting the passwords in the values.yaml
file; otherwise, since you are using a number, it will be interpreted as an integer instead of a string, and rendering the secret template could fail.
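For example, the passwords in values.yaml should be wrapped in quotes so YAML parses them as strings (the key names below are illustrative; use the keys from your own values.yaml):

```yaml
# Quoted values are parsed as strings; unquoted digits become integers
# and can break the secret template rendering.
# (Key names here are examples only -- adjust to your actual values file.)
rootUser:
  password: "12345678"
db:
  password: "12345678"
```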
Regarding the issue, I copied your values, deleted the storage class, and used the same passwords (converted into strings), and the chart is running as expected for me. Be sure to delete the PVCs once you uninstall the release: they are not deleted automatically and they can contain an old password. Then try to install it again. You commented that the PVCs are new, but could you confirm that you deleted all the resources related to the previous release before re-deploying?
Regarding the second error, it seems to be a timeout, so probably 30 seconds is too much in your case. Could you share which k8s cluster you are using?
@miguelaeh The passwords are not really numbers; they contain different letters. I just wrote random characters to hide the real values.
I am using rook-ceph.
Can a long timeout be a problem, rather than a short one? Is there any way to succeed while keeping the 30-second timeout?
Can you describe why the PVC can cause problems? I'm just curious. As far as I know, the pod won't run until the PVC is initialized (formatted).
Hi @jc-lab , When you deploy the Chart the first time, the container is configured and it persists its configuration into the PV (the PVC provides that PV). When you redeploy the chart, or install a new one without deleting the PVCs, the container will detect data mounted inside the PVC and it will skip the configuration process, so it will be configured with the persisted data. In this case, the password will remain the same as in the previous deployment even if you specify a different password, because the application will not be configured with the new password, but with the persisted one.
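A minimal cleanup sketch of the steps described above, assuming Helm 3 and that the chart labels its PVCs with the standard `app.kubernetes.io/instance` label (confirm with `kubectl get pvc --show-labels`); the release name `test-db-1` and namespace `test` are taken from the reproduction command in this thread:

```shell
# Uninstall the release; the PVCs are intentionally left behind.
helm uninstall test-db-1 --namespace test

# List the leftover PVCs to confirm which ones belong to the release.
kubectl get pvc --namespace test

# Delete them so the next install starts from a clean state
# (this destroys the persisted data, including the old password).
kubectl delete pvc --namespace test -l app.kubernetes.io/instance=test-db-1

# Re-install with the desired passwords.
helm install test-db-1 bitnami/mariadb-galera --namespace test --values values.yaml
```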
Regarding the timeout, I meant that it is not enough; sorry for the confusion. You can see the explanation about it here: https://github.com/bitnami/bitnami-docker-mariadb-galera/#slow-filesystems.
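Assuming the chart version in use supports the `extraEnvVars` parameter (check with `helm show values bitnami/mariadb-galera` to confirm), a larger sleep time could be set in values.yaml, for example:

```yaml
# Increase the init sleep time for slow filesystems.
# (extraEnvVars support and the value of 60 are assumptions;
# verify against your chart version's values reference.)
extraEnvVars:
  - name: MARIADB_INIT_SLEEP_TIME
    value: "60"
```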
@miguelaeh Thanks for reply.
In my case, it was the same either way.
But after changing the timeout and trying a few times, it worked!
You were very helpful. Thank you.
@jc-lab,
I am facing the same issue with mariadb-galera.
Two of my nodes went down, and I have brought them back up. However, I am still facing an issue with mariadb-galera:
[Note] WSREP: gcomm: connecting to group 'galera', peer 'mariadb-galera-headless:'
2020-12-22 8:19:39 0 [ERROR] WSREP: failed to open gcomm backend connection: 131: No address to connect (FATAL)
at gcomm/src/gmcast.cpp:connect_precheck():311
2020-12-22 8:19:39 0 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():220: Failed to open backend connection: -131 (State not recoverable)
2020-12-22 8:19:39 0 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1632: Failed to open channel 'galera' at 'gcomm://mariadb-galera-headless': -131 (State not recoverable)
2020-12-22 8:19:39 0 [ERROR] WSREP: gcs connect failed: State not recoverable
2020-12-22 8:19:39 0 [ERROR] WSREP: wsrep::connect(gcomm://mariadb-galera-headless) failed: 7
2020-12-22 8:19:39 0 [ERROR] Aborting
Warning: Memory not freed: 48
How did you resolve that? What steps did you follow?
Which chart: mariadb-galera-2.1.4
Describe the bug: Access denied error from the liveness probe right after installation with Helm.
To Reproduce
helm install --namespace test --name test-db-1 bitnami/mariadb-galera --values values.yaml
Expected behavior:
Version of Helm and Kubernetes:
Log:
Other Issues
#1788
Maybe the same issue. The PVC is new, so it has nothing to do with an upgrade.
#2013
If MARIADB_INIT_SLEEP_TIME is set to 30 seconds, the following error occurs.