Closed: pyromaniac3010 closed this issue 1 year ago
As stated in https://docs.sonarqube.org/latest/setup-and-upgrade/deploy-on-kubernetes/deploy-sonarqube-on-kubernetes/#helm-chart-specifics, persistence is only used by Elasticsearch to speed up index regeneration and is not really needed. I would recommend setting persistence.enabled: false in values.yaml and fixing the documentation. With the defaults as they are, a container restart or scaling is not possible at all.
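For illustration, a minimal values.yaml along those lines (a sketch; the release name in the upgrade command is an assumption, only the persistence.enabled key itself comes from this thread):

# Disable the SonarQube pod's own PVC. Elasticsearch simply rebuilds its
# index on startup, and all durable data stays in PostgreSQL, so nothing is lost.
persistence:
  enabled: false

applied with something like: helm upgrade sonarqube bitnami/sonarqube -f values.yaml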
So I understand that you propose changing the default value of persistence.enabled to false. What exactly do you mean by modifying the documentation?
@corico44 https://github.com/bitnami/charts/blob/main/bitnami/sonarqube/README.md currently has this:
Prerequisites
- Kubernetes 1.19+
- Helm 3.2.0+
- PV provisioner support in the underlying infrastructure
- ReadWriteMany volumes for deployment scaling
The last two prerequisites are not valid: SonarQube does not require them, and the deployment does not work even if you do fulfill them.
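For reference, the section could be trimmed to something like this (a sketch of the requested change, not the merged wording):

Prerequisites
- Kubernetes 1.19+
- Helm 3.2.0+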
@corico44 @jotamartos This PR disables persistence for the postgresql sub-chart (postgresql.persistence.enabled). What was required was to disable persistence for the sonarqube deployment itself (persistence.enabled). This will break a lot of existing installations if they do not use an external PostgreSQL server. Please fix asap!
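To make the two keys concrete, this is how they sit in values.yaml (a sketch based on the key paths named above; the comments are my reading of this thread):

postgresql:
  persistence:
    enabled: true    # PVC of the postgresql sub-chart; must stay enabled unless
                     # an external PostgreSQL is used, or all SonarQube data is lost
persistence:
  enabled: false     # PVC of the SonarQube pod itself; this is the key the
                     # issue asked to default to false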
Thanks for letting us know, we will look into the issue
The error has been fixed, but there seem to be some bugs around changing the correct persistence.enabled value. We will review it and inform you of any new developments.
The changes have been made successfully. Thank you very much for opening this issue @pyromaniac3010!
Name and Version
bitnami/sonarqube 2.0.3
What steps will reproduce the bug?
I set up a SonarQube environment. According to https://github.com/bitnami/charts/tree/main/bitnami/sonarqube#prerequisites, a ReadWriteMany volume should be used for persistence. After SonarQube was up and running, I triggered a
kubectl rollout restart deployment sonarqube
This led to a new pod being created (as it is a Deployment, not a StatefulSet). The new pod failed to come up with the following logs:

Because the new pod never turns "green", it will continue in a CrashLoopBackOff forever and the old pod will never get killed. The same thing happens if you just scale it:
kubectl scale deployment sonarqube --replicas=2
The only way to recover from that state is to scale down to zero:
kubectl scale deployment sonarqube --replicas=0
then wait until all pods have shut down, and fire up one replica again:
kubectl scale deployment sonarqube --replicas=1
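A scripted version of that recovery (a sketch; the label selector app.kubernetes.io/name=sonarqube is an assumption about the chart's default labels, not taken from this thread):

# Scale to zero, block until every pod is actually gone, then scale back up.
kubectl scale deployment sonarqube --replicas=0
kubectl wait --for=delete pod -l app.kubernetes.io/name=sonarqube --timeout=300s
kubectl scale deployment sonarqube --replicas=1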
So the current Deployment with a ReadWriteMany filesystem, as described in https://github.com/bitnami/charts/tree/main/bitnami/sonarqube#prerequisites, leads to a setup that does not work.
Are you using any custom parameters or values?
What is the expected behavior?
SonarQube should be able to scale to more than one running pod.
What do you see instead?
Any second pod that starts results in an endless CrashLoopBackOff.
Additional information
No response