
[bitnami/sonarqube] deployment restart not possible #13865

Closed pyromaniac3010 closed 1 year ago

pyromaniac3010 commented 1 year ago

Name and Version

bitnami/sonarqube 2.0.3

What steps will reproduce the bug?

I set up a SonarQube environment. According to https://github.com/bitnami/charts/tree/main/bitnami/sonarqube#prerequisites, a ReadWriteMany volume should be used for persistence. After SonarQube was up and running, I triggered kubectl rollout restart deployment sonarqube. This led to a new pod being created (as it is a Deployment, not a StatefulSet). The new pod failed to come up.
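A minimal sketch of the reproduction, assuming the chart was installed as release sonarqube in the current namespace with the values listed further down:

# install the chart with a ReadWriteMany persistence volume
helm install sonarqube bitnami/sonarqube -f values.yaml
# wait until the pod is Ready, then trigger a rolling restart
kubectl rollout restart deployment sonarqube
# watch the replacement pod crash while the old one keeps running
kubectl get pods -w

Logs of the failing pod: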

sonarqube 14:32:32.98 
sonarqube 14:32:32.98 Welcome to the Bitnami sonarqube container
sonarqube 14:32:32.98 Subscribe to project updates by watching https://github.com/bitnami/containers
sonarqube 14:32:32.99 Submit issues and feature requests at https://github.com/bitnami/containers/issues
sonarqube 14:32:32.99 
sonarqube 14:32:32.99 INFO  ==> Validating settings in POSTGRESQL_CLIENT_* env vars
sonarqube 14:32:33.02 INFO  ==> Creating SonarQube configuration
sonarqube 14:32:33.06 INFO  ==> Trying to connect to the database server
sonarqube 14:32:33.16 INFO  ==> Restoring persisted SonarQube installation
sonarqube 14:32:33.18 INFO  ==> Setting heap size to -Xmx2048m -Xms1024m
sonarqube 14:32:33.20 INFO  ==> ** SonarQube setup finished! **

sonarqube 14:32:33.22 INFO  ==> ** Starting SonarQube **
/opt/bitnami/java/bin/java
Running SonarQube...
2022.12.07 14:32:34 INFO  app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/bitnami/sonarqube/temp
2022.12.07 14:32:34 INFO  app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:38839]
2022.12.07 14:32:34 INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[ELASTICSEARCH] from [/opt/bitnami/sonarqube/elasticsearch]: /opt/bitnami/sonarqube/elasticsearch/bin/elasticsearch
2022.12.07 14:32:34 INFO  app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2022.12.07 14:32:41 INFO  es[][o.e.n.Node] version[7.17.5], pid[145], build[default/tar/8d61b4f7ddf931f219e3745f295ed2bbc50c8e84/2022-06-23T21:57:28.736740635Z], OS[Linux/5.10.135/amd64], JVM[BellSoft/OpenJDK 64-Bit Server VM/11.0.17/11.0.17+7-LTS]
2022.12.07 14:32:41 INFO  es[][o.e.n.Node] JVM home [/opt/bitnami/java]
2022.12.07 14:32:41 INFO  es[][o.e.n.Node] JVM arguments [-XX:+UseG1GC, -Djava.io.tmpdir=/opt/bitnami/sonarqube/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djna.tmpdir=/opt/bitnami/sonarqube/temp, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=COMPAT, -Dcom.redhat.fips=false, -Des.enforce.bootstrap.checks=true, -Xmx2048m, -Xms2048m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/opt/bitnami/sonarqube/elasticsearch, -Des.path.conf=/opt/bitnami/sonarqube/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false]
2022.12.07 14:32:42 INFO  es[][o.e.p.PluginsService] loaded module [analysis-common]
2022.12.07 14:32:42 INFO  es[][o.e.p.PluginsService] loaded module [lang-painless]
2022.12.07 14:32:42 INFO  es[][o.e.p.PluginsService] loaded module [parent-join]
2022.12.07 14:32:42 INFO  es[][o.e.p.PluginsService] loaded module [reindex]
2022.12.07 14:32:42 INFO  es[][o.e.p.PluginsService] loaded module [transport-netty4]
2022.12.07 14:32:42 INFO  es[][o.e.p.PluginsService] no plugins loaded
2022.12.07 14:32:42 ERROR es[][o.e.b.ElasticsearchUncaughtExceptionHandler] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[/opt/bitnami/sonarqube/data/es7]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:173) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112) ~[elasticsearch-cli-7.17.5.jar:7.17.5]
    at org.elasticsearch.cli.Command.main(Command.java:77) ~[elasticsearch-cli-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) ~[elasticsearch-7.17.5.jar:7.17.5]
Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/opt/bitnami/sonarqube/data/es7]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
    at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:328) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.node.Node.<init>(Node.java:429) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.node.Node.<init>(Node.java:309) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:234) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:234) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434) ~[elasticsearch-7.17.5.jar:7.17.5]
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169) ~[elasticsearch-7.17.5.jar:7.17.5]
    ... 6 more
uncaught exception in thread [main]
java.lang.IllegalStateException: failed to obtain node locks, tried [[/opt/bitnami/sonarqube/data/es7]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
    at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:328)
    at org.elasticsearch.node.Node.<init>(Node.java:429)
    at org.elasticsearch.node.Node.<init>(Node.java:309)
    at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:234)
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:234)
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:434)
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:169)
    at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:160)
    at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:77)
    at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112)
    at org.elasticsearch.cli.Command.main(Command.java:77)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:125)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80)
For complete error details, refer to the log at /opt/bitnami/sonarqube/logs/sonarqube.log
2022.12.07 14:32:42 WARN  app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [ElasticSearch]: 1
2022.12.07 14:32:42 INFO  app[][o.s.a.SchedulerImpl] Process[ElasticSearch] is stopped
2022.12.07 14:32:42 INFO  app[][o.s.a.SchedulerImpl] SonarQube is stopped

Because the new pod never turns "green", it keeps crashing in a CrashLoopBackOff forever and the old pod never gets killed. The same thing happens if you just scale the deployment with kubectl scale deployment sonarqube --replicas=2. The only way to recover from that state is to run kubectl scale deployment sonarqube --replicas=0, wait until all pods have shut down, and then start one replica again with kubectl scale deployment sonarqube --replicas=1, as sketched below.
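A sketch of the recovery sequence (deployment name sonarqube assumed; the label selector below is the chart's usual app.kubernetes.io/name label, adjust it if your release labels differ):

# scale down to zero replicas
kubectl scale deployment sonarqube --replicas=0
# wait until every pod is actually gone before scaling back up
kubectl wait --for=delete pod -l app.kubernetes.io/name=sonarqube --timeout=120s
# start a single fresh replica
kubectl scale deployment sonarqube --replicas=1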

So the currently documented setup with a ReadWriteMany filesystem, as described in https://github.com/bitnami/charts/tree/main/bitnami/sonarqube#prerequisites, does not actually work: the embedded Elasticsearch takes an exclusive node lock on the shared data directory (/opt/bitnami/sonarqube/data/es7), so a second pod mounting the same volume can never start.

Are you using any custom parameters or values?

persistence:
  storageClass: efs-sonarqube
  accessModes:
    - ReadWriteMany
  size: 10Gi

What is the expected behavior?

SonarQube should be able to scale to more than one running pod.

What do you see instead?

Any second pod that is started ends up in an endless CrashLoopBackOff.

Additional information

No response

pyromaniac3010 commented 1 year ago

As stated in https://docs.sonarqube.org/latest/setup-and-upgrade/deploy-on-kubernetes/deploy-sonarqube-on-kubernetes/#helm-chart-specifics, the persistence is only used by Elasticsearch to speed up index regeneration and is not really needed. I would recommend setting persistence.enabled: false in values.yaml (see the snippet below) and fixing the documentation. With the defaults as they are, a container restart or scaling is not possible at all.
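A minimal values.yaml sketch of the proposed default:

persistence:
  enabled: false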

corico44 commented 1 year ago

So I understand that you propose to change the default value of persistence.enabled to false. What exactly do you mean by modifying the documentation?

pyromaniac3010 commented 1 year ago

@corico44 https://github.com/bitnami/charts/blob/main/bitnami/sonarqube/README.md currently has this:

Prerequisites

- Kubernetes 1.19+
- Helm 3.2.0+
- PV provisioner support in the underlying infrastructure
- ReadWriteMany volumes for deployment scaling

The last two "Prerequisites" are not valid: SonarQube does not require them, and the deployment does not work even if you fulfill them.

pyromaniac3010 commented 1 year ago

@corico44 @jotamartos This PR disables persistence for the postgresql sub-chart (postgresql.persistence.enabled). What was actually required was to disable persistence for the sonarqube deployment itself (persistence.enabled); see the snippet below. This will break a lot of existing installations that do not use an external PostgreSQL server. Please fix this ASAP!
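To make the difference explicit, a values.yaml sketch contrasting the two keys:

# what the PR changed: persistence of the PostgreSQL sub-chart
postgresql:
  persistence:
    enabled: false

# what was requested: persistence of the SonarQube deployment itself
persistence:
  enabled: false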

carrodher commented 1 year ago

Thanks for letting us know, we will look into the issue

corico44 commented 1 year ago

The error has been fixed, but there still seem to be some bugs around changing the persistence.enabled value correctly. We will review it and keep you informed of any developments.

corico44 commented 1 year ago

The changes have been made successfully. Thank you very much for opening this issue @pyromaniac3010!