That is not something we expect. To try to reproduce the issue and figure out what might be happening, can you provide the values/parameters you are using to install the chart (if any)?
I tried to reproduce the issue without extra parameters and everything works fine:
$ helm install cassandra bitnami/cassandra
NAME: cassandra
LAST DEPLOYED: Thu Jan 28 12:52:12 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
Cassandra can be accessed through the following URLs from within the cluster:
- CQL: cassandra.default.svc.cluster.local:9042
- Thrift: cassandra.default.svc.cluster.local:9160
To get your password run:
export CASSANDRA_PASSWORD=$(kubectl get secret --namespace default cassandra -o jsonpath="{.data.cassandra-password}" | base64 --decode)
Check the cluster status by running:
kubectl exec -it --namespace default $(kubectl get pods --namespace default -l app=cassandra,release=cassandra -o jsonpath='{.items[0].metadata.name}') nodetool status
To connect to your Cassandra cluster using CQL:
1. Run a Cassandra pod that you can use as a client:
kubectl run --namespace default cassandra-client --rm --tty -i --restart='Never' \
--env CASSANDRA_PASSWORD=$CASSANDRA_PASSWORD \
\
--image docker.io/bitnami/cassandra:3.11.9-debian-10-r52 -- bash
2. Connect using the cqlsh client:
cqlsh -u cassandra -p $CASSANDRA_PASSWORD cassandra
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/cassandra 9042:9042 &
cqlsh -u cassandra -p $CASSANDRA_PASSWORD 127.0.0.1 9042
$ helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cassandra default 1 2021-01-28 12:52:12.725963357 +0000 UTC deployed cassandra-7.3.0 3.11.9
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 3m28s
$ kubectl logs -f cassandra-0
Setting node as password seeder
cassandra 12:52:40.27
cassandra 12:52:40.27 Welcome to the Bitnami cassandra container
cassandra 12:52:40.27 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-cassandra
cassandra 12:52:40.27 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-cassandra/issues
cassandra 12:52:40.27
cassandra 12:52:40.28 INFO ==> ** Starting Cassandra setup **
cassandra 12:52:40.31 INFO ==> Validating settings in CASSANDRA_* env vars..
cassandra 12:52:40.39 INFO ==> Initializing Cassandra database...
cassandra 12:52:40.76 INFO ==> Deploying Cassandra from scratch
cassandra 12:52:40.76 INFO ==> Starting Cassandra
cassandra 12:52:40.77 INFO ==> Checking that it started up correctly
cassandra 12:53:05.97 INFO ==> Found CQL startup log line
cassandra 12:53:08.81 INFO ==> Nodetool reported the successful startup of Cassandra
cassandra 12:53:08.81 INFO ==> Password seeder node
cassandra 12:53:08.82 INFO ==> Trying to access CQL server @ cassandra-0.cassandra-headless.default.svc.cluster.local
cassandra 12:53:10.02 INFO ==> Accessed CQL server successfully
cassandra 12:53:10.02 INFO ==> Updating the password for the "cassandra" user...
cassandra 12:53:11.15 INFO ==> Trying to access CQL server @ cassandra-0
cassandra 12:53:12.07 INFO ==> Accessed CQL server successfully
cassandra 12:53:12.07 INFO ==> Password updated successfully
cassandra 12:53:12.09 INFO ==> ** Cassandra setup finished! **
cassandra 12:53:12.14 INFO ==> ** Starting Cassandra **
cassandra 12:53:12.15 INFO ==> Cassandra already running with PID 185 because of the intial cluster setup
...
INFO [main] 2021-01-28 12:53:01,515 CassandraDaemon.java:650 - Startup complete
INFO [Thread-2] 2021-01-28 12:53:01,515 ThriftServer.java:133 - Listening for thrift clients...
INFO [OptionalTasks:1] 2021-01-28 12:53:02,959 CassandraRoleManager.java:372 - Created default superuser role 'cassandra'
INFO [Native-Transport-Requests-1] 2021-01-28 12:53:09,426 AuthCache.java:177 - (Re)initializing PermissionsCache (validity period/update interval/max entries) (2000/2000/1000)
INFO [Native-Transport-Requests-1] 2021-01-28 12:53:10,834 AuthCache.java:177 - (Re)initializing RolesCache (validity period/update interval/max entries) (2000/2000/1000)
Here's the values.yaml (renamed to make GitHub happy). You can compare it with the default to see the differences. values.txt
I got it working. Seems like the issue was finding a balance between the heap sizes and resource limits. values.txt
@carrodher I might have closed the ticket prematurely, so I’m reopening it. I think the cryptic error can and should be improved to give a better indication of what is wrong. What’s important is that if there’s a problem with the heap size and k8s resource limits, it is clearly called out. It took me several hours of trial and error to find a sweet spot, and it should’ve been much easier.
Can you provide more info about the resources available in your cluster, the resource values you were setting when the issue appeared, and the ones you set once it was solved? If it is something common we can add a section to the README, but if it is very specific to a certain use case or environment I don't know how it can be documented in a generic way.
All the parameters are customizable and listed in the README. In this case it may be something best referred to the Cassandra documentation, where the requirements are listed; I'm not sure it is something we can handle for every scenario from the chart's point of view.
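One way to double-check which parameters a given chart version actually exposes (and their defaults) is to dump the chart's values with standard Helm commands:

```bash
# Print the full default values.yaml shipped with the chart:
helm show values bitnami/cassandra

# Narrow it down to the memory-related knobs discussed in this issue:
helm show values bitnami/cassandra | grep -iE -A 2 'heapsize|resources'
```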
The difference seems to come from the memory settings, not what else is running in the cluster. Have you tried running with the values.yaml that didn't work for me?
Bad:
limits:
cpu: 2
memory: 4Gi
requests:
cpu: 2
memory: 4Gi
Good:
limits:
cpu: 2
memory: 3Gi
requests:
cpu: 2
memory: 2Gi
maxHeapSize: 2G
newHeapSize: 1G
There are some other minor differences between the two files, but in the first case, heap sizes aren't specified.
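For anyone hitting the same wall, the working combination above can be expressed as a single install command; a sketch (the maxHeapSize/newHeapSize key names are taken from the "Good" values above, but double-check them against helm show values for your chart version):

```bash
# Sketch: keep the JVM heap comfortably below the container memory limit so the
# pod isn't killed for exceeding it (off-heap memory, metaspace, etc. need headroom).
# Value key names are assumptions based on the "Good" values above; verify for your chart version.
helm install cassandra bitnami/cassandra \
  --set resources.requests.cpu=2 \
  --set resources.requests.memory=2Gi \
  --set resources.limits.cpu=2 \
  --set resources.limits.memory=3Gi \
  --set maxHeapSize=2G \
  --set newHeapSize=1G
```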
By default the resources are not set:
## Init container' resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases the chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 100m
# memory: 128Mi
requests: {}
# cpu: 100m
# memory: 128Mi
Customizing those parameters is something users need to do on their own, following the Kubernetes guidelines and adapting the values to their environment and to the application itself.
If you request, for example, 2 CPUs and 8GB of memory but your cluster nodes don't have enough resources, you will get an error like the following one when describing the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4s (x2 over 4s) default-scheduler 0/7 nodes are available: 7 Insufficient cpu, 7 Insufficient memory.
Normal NotTriggerScaleUp 2s cluster-autoscaler pod didn't trigger scale-up: 1 Insufficient cpu
That is why the chart does not set those limits/requests beforehand by default: it is very specific to each use case and we can't pick default values for it, beyond giving users the option to customize them.
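To surface that kind of scheduling failure on your own cluster, describing the Pending pod or listing recent events is usually enough:

```bash
# The Events section at the bottom of the describe output shows FailedScheduling messages:
kubectl describe pod cassandra-0 --namespace default

# Or list recent events for the release, newest last:
kubectl get events --namespace default --sort-by=.lastTimestamp | grep -i cassandra
```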
In my case, the cluster has enough resources for the limits in both cases. What I’ve been saying all along is that the problem is finding a balance between the heap size parameters and the resource parameters.
Can you do some validation at runtime that the heap size is within the resource limits and meets the Cassandra minimum memory requirement?
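For illustration, a rough sketch of such a check (hypothetical; not something the Bitnami scripts do today, and the MAX_HEAP_SIZE variable name is an assumption) could compare the configured heap against the container's cgroup memory limit before launching Cassandra:

```bash
#!/bin/bash
# Hypothetical pre-start sanity check: abort early if the configured max heap
# does not fit inside the container memory limit with headroom for off-heap usage.

to_bytes() {  # convert values such as 2G / 2048M / 1048576K to bytes
  local v=${1^^}
  case "$v" in
    *G) echo $(( ${v%G} * 1024 * 1024 * 1024 )) ;;
    *M) echo $(( ${v%M} * 1024 * 1024 )) ;;
    *K) echo $(( ${v%K} * 1024 )) ;;
    *)  echo "$v" ;;
  esac
}

# cgroup v2 exposes memory.max, cgroup v1 exposes memory/memory.limit_in_bytes
if [[ -r /sys/fs/cgroup/memory.max ]]; then
  limit=$(cat /sys/fs/cgroup/memory.max)
else
  limit=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
fi

heap=$(to_bytes "${MAX_HEAP_SIZE:-0}")

if [[ "$limit" != "max" ]] && (( heap > limit / 100 * 80 )); then
  echo "ERROR: MAX_HEAP_SIZE (${MAX_HEAP_SIZE}) exceeds 80% of the container memory limit (${limit} bytes)." >&2
  echo "Lower maxHeapSize or raise resources.limits.memory." >&2
  exit 1
fi
```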
Without setting any value for those parameters, they should be calculated automatically by Cassandra; see the comment in the values file:
## Memory settings: These are calculated automatically unless specified otherwise
## To run on environments with little resources (<= 8GB), tune your heap settings:
## maxHeapSize:
## - calculate 1/2 ram and cap to 1024MB
## - calculate 1/4 ram and cap to 8192MB
## - pick the max
## newHeapSize:
## A good guideline is 100 MB per CPU core.
## - min(100 * num_cores, 1/4 * heap size)
## ref: https://docs.datastax.com/en/archived/cassandra/2.0/cassandra/operations/ops_tune_jvm_c.html
##
# maxHeapSize: 4G
# newHeapSize: 800M
Those are the guidelines recommended by the Cassandra developers (link), but in the end it is configurable: it depends on the application itself and should be tuned according to the cluster settings and the needs of each use case.
Doing the maths for the above requests values, the maxHeapSize and newHeapSize values should be something like (pseudo-code):
limits:
cpu: 2
memory: 4Gi
requests:
cpu: 2
memory: 4Gi
maxHeapSize: max(1GB, 1GB) -> 1GB
newHeapSize: min(200MB,250MB) -> 200MB
limits:
cpu: 2
memory: 3Gi
requests:
cpu: 2
memory: 2Gi
maxHeapSize: max(1GB, 500MB) -> 1GB
newHeapSize: min(200MB,250MB) -> 200MB
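To avoid doing that arithmetic by hand, the guideline above can be scripted; a small sketch (plain arithmetic mirroring the values.yaml comment, not an official tool):

```bash
#!/bin/bash
# Sketch: compute maxHeapSize/newHeapSize from container memory (MB) and CPU cores,
# following the guideline quoted from values.yaml above.
ram_mb=${1:?usage: heap-calc.sh <ram_mb> <cores>}   # e.g. 4096
cores=${2:?usage: heap-calc.sh <ram_mb> <cores>}    # e.g. 2

half=$(( ram_mb / 2 ));    (( half > 1024 ))    && half=1024      # 1/2 RAM, capped at 1024MB
quarter=$(( ram_mb / 4 )); (( quarter > 8192 )) && quarter=8192   # 1/4 RAM, capped at 8192MB
max_heap=$(( half > quarter ? half : quarter ))                   # pick the max

by_cores=$(( 100 * cores ))                                       # ~100MB per core
by_heap=$(( max_heap / 4 ))                                       # 1/4 of the heap
new_heap=$(( by_cores < by_heap ? by_cores : by_heap ))           # pick the min

echo "maxHeapSize: ${max_heap}M"
echo "newHeapSize: ${new_heap}M"
```

For the 4Gi / 2-CPU case above this prints 1024M / 200M, matching the pseudo-code.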
Node requests and limits can be checked with kubectl describe nodes, and based on that, plus the application needs, users can customize the different values, but only if there is a reason to do so (such as performance degradation, specific needs of their use case, or high memory consumption).
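For example (standard kubectl, nothing chart-specific):

```bash
# Per-node allocatable capacity and currently allocated requests/limits:
kubectl describe nodes | grep -A 8 "Allocated resources"

# Or just the allocatable CPU/memory per node:
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory
```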
Regarding adding some validations when deploying the Helm chart: we are open to contributions and will be happy to review any PR adding a feature like that.
In my case, the resource constraints are required for deployment. I had not specified the heap sizes, assuming they would be calculated automatically, but the deployment failed. If you can confirm that's also the case for you, then we can reach a conclusion here. The problem then would be the following:
I'm happy to look into point 2, but point 1 (pending your verification) is something that the core team should investigate.
I am deploying bitnami/cassandra using the default values but with the following customization (disabling persistence and using the limits/requests you pasted above as "Bad"):
--- a/bitnami/cassandra/values.yaml
+++ b/bitnami/cassandra/values.yaml
@@ -112,7 +112,7 @@ service:
persistence:
## If true, use a Persistent Volume Claim, If false, use emptyDir
##
- enabled: true
+ enabled: false
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
@@ -406,12 +406,12 @@ resources:
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
- limits: {}
- # cpu: 2
- # memory: 4Gi
- requests: {}
- # cpu: 2
- # memory: 4Gi
+ limits:
+ cpu: 2
+ memory: 4Gi
+ requests:
+ cpu: 2
+ memory: 4Gi
Those are the commands I'm using to deploy it:
$ kubectl create namespace cassandra
$ helm install cassandra bitnami/cassandra --namespace cassandra -f values.yaml
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 7m25s
$ kubectl logs -f cassandra-0 -n cassandra
...
INFO [main] 2021-02-03 08:28:37,472 CassandraDaemon.java:497 - JVM vendor/version: OpenJDK 64-Bit Server VM/1.8.0_282
INFO [main] 2021-02-03 08:28:37,472 CassandraDaemon.java:498 - Heap size: 1.897GiB/1.897GiB
INFO [main] 2021-02-03 08:28:37,473 CassandraDaemon.java:503 - Code Cache Non-heap memory: init = 2555904(2496K) used = 4844672(4731K) committed = 4915200(4800K) max = 251658240(245760K)
INFO [main] 2021-02-03 08:28:37,473 CassandraDaemon.java:503 - Metaspace Non-heap memory: init = 0(0K) used = 18750744(18311K) committed = 19267584(18816K) max = -1(-1K)
INFO [main] 2021-02-03 08:28:37,473 CassandraDaemon.java:503 - Compressed Class Space Non-heap memory: init = 0(0K) used = 2260120(2207K) committed = 2490368(2432K) max = 1073741824(1048576K)
INFO [main] 2021-02-03 08:28:37,473 CassandraDaemon.java:503 - Par Eden Space Heap memory: init = 416940032(407168K) used = 116743832(114007K) committed = 416940032(407168K) max = 416940032(407168K)
INFO [main] 2021-02-03 08:28:37,473 CassandraDaemon.java:503 - Par Survivor Space Heap memory: init = 52101120(50880K) used = 0(0K) committed = 52101120(50880K) max = 52101120(50880K)
INFO [main] 2021-02-03 08:28:37,473 CassandraDaemon.java:503 - CMS Old Gen Heap memory: init = 1567621120(1530880K) used = 0(0K) committed = 1567621120(1530880K) max = 1567621120(1530880K)
...
As per the above log, everything is up and running as expected. In that section the heap size values are shown; this value was calculated automatically by Cassandra since I didn't set anything in values.yaml.
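A quick way to confirm the effective heap in any given run, without reading the whole log, is to grep the startup banner or ask nodetool (pod and namespace names as in the example above):

```bash
# The CassandraDaemon startup banner logs the effective heap size:
kubectl logs cassandra-0 -n cassandra | grep "Heap size:"

# nodetool also reports current/total heap memory once the node is up:
kubectl exec -n cassandra cassandra-0 -- nodetool info | grep -i heap
```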
Looks like the heap size is approximately half of the memory limit. Obviously, in my failing case I never got to the point where the Cassandra instance started. I'm not sure what more information I can provide to help reproduce this issue. Feel free to close the ticket if you don't see anything else to try.
It seems to be something related to the specific environment or configuration; that is why the chart allows users to set different options for those parameters. We can keep this issue open in case other users facing the same problem can add more clues. It will be closed automatically if there is no news in a couple of weeks.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
This issue still exists in bitnami/cassandra 8.0.3 (with all values at their defaults), and it does not work after setting the resource limits either. My cluster is running on virtual machines.
Output of helm version:
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:15:20Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Unfortunately, I am not able to reproduce the issue. Using the latest version available in the Bitnami Helm chart repository and installing it without any custom parameters, everything works as expected; see:
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm install cassandra bitnami/cassandra
NAME: cassandra
LAST DEPLOYED: Wed Sep 1 06:26:34 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
Cassandra can be accessed through the following URLs from within the cluster:
- CQL: cassandra.default.svc.cluster.local:9042
To get your password run:
export CASSANDRA_PASSWORD=$(kubectl get secret --namespace "default" cassandra -o jsonpath="{.data.cassandra-password}" | base64 --decode)
Check the cluster status by running:
kubectl exec -it --namespace default $(kubectl get pods --namespace default -l app=cassandra,release=cassandra -o jsonpath='{.items[0].metadata.name}') nodetool status
To connect to your Cassandra cluster using CQL:
1. Run a Cassandra pod that you can use as a client:
kubectl run --namespace default cassandra-client --rm --tty -i --restart='Never' \
--env CASSANDRA_PASSWORD=$CASSANDRA_PASSWORD \
\
--image docker.io/bitnami/cassandra:4.0.0-debian-10-r17 -- bash
2. Connect using the cqlsh client:
cqlsh -u cassandra -p $CASSANDRA_PASSWORD cassandra
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/cassandra 9042:9042 &
cqlsh -u cassandra -p $CASSANDRA_PASSWORD 127.0.0.1 9042
$ helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cassandra default 1 2021-09-01 06:26:34.956860696 +0000 UTC deployed cassandra-8.0.3 4.0.0
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 2m20s
Although it seems to be an issue related to your specific environment, can you please provide more info just in case we can see any clue about what is happening? Is there any error in the logs (kubectl logs -f cassandra-0 in the above example)? In the same way, can you find something weird when describing the pod (kubectl describe pod cassandra-0)?
This is the log:
Setting node as password seeder
cassandra 07:56:19.03
cassandra 07:56:19.05 Welcome to the Bitnami cassandra container
cassandra 07:56:19.05 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-cassandra
cassandra 07:56:19.06 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-cassandra/issues
cassandra 07:56:19.06
cassandra 07:56:19.06 INFO ==> ** Starting Cassandra setup **
cassandra 07:56:19.12 INFO ==> Validating settings in CASSANDRA_* env vars..
cassandra 07:56:19.47 INFO ==> Initializing Cassandra database...
cassandra 07:56:20.22 INFO ==> Deploying Cassandra with persisted data
cassandra 07:56:20.23 INFO ==> Loading user's custom files from /docker-entrypoint-initdb.d ...
cassandra 07:56:20.24 INFO ==> Starting Cassandra
cassandra 07:56:20.25 INFO ==> Checking that it started up correctly
/opt/bitnami/scripts/libos.sh: line 269: 158 Killed "${cmd[@]}" "${args[@]}" > "$logger" 2>&1
Pod description:
Name: cass-cassandra-0
Namespace: iot-develop
Priority: 0
Node: master1/192.168.102.31
Start Time: Wed, 01 Sep 2021 16:05:46 +0800
Labels: app.kubernetes.io/instance=cass
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=cassandra
controller-revision-hash=cass-cassandra-55f4445676
helm.sh/chart=cassandra-8.0.3
statefulset.kubernetes.io/pod-name=cass-cassandra-0
Annotations: <none>
Status: Running
IP: 10.234.71.250
IPs:
IP: 10.234.71.250
Controlled By: StatefulSet/cass-cassandra
Containers:
cassandra:
Container ID: docker://0f47e5acc7abec863bde194a5073da57e9ac116dbb22aa1f53e7ac506e9ab31c
Image: docker.io/bitnami/cassandra:4.0.0-debian-10-r17
Image ID: docker-pullable://bitnami/cassandra@sha256:626fa297a73e9cebf6c99fc46bd24682422e3b59ee2bc99f7a684a307ad6ec19
Ports: 7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
Command:
bash
-ec
# Node 0 is the password seeder
if [[ $POD_NAME =~ (.*)-0$ ]]; then
echo "Setting node as password seeder"
export CASSANDRA_PASSWORD_SEEDER=yes
else
# Only node 0 will execute the startup initdb scripts
export CASSANDRA_IGNORE_INITDB_SCRIPTS=1
fi
/opt/bitnami/scripts/cassandra/entrypoint.sh /opt/bitnami/scripts/cassandra/run.sh
State: Running
Started: Wed, 01 Sep 2021 16:05:52 +0800
Ready: False
Restart Count: 0
Liveness: exec [/bin/bash -ec nodetool status
] delay=60s timeout=5s period=30s #success=1 #failure=5
Readiness: exec [/bin/bash -ec nodetool status | grep -E "^UN\\s+${POD_IP}"
] delay=60s timeout=5s period=10s #success=1 #failure=5
Environment:
BITNAMI_DEBUG: false
CASSANDRA_CLUSTER_NAME: cassandra
CASSANDRA_SEEDS: cass-cassandra-0.cass-cassandra-headless.iot-develop.svc.yunmotec-dev.local
CASSANDRA_PASSWORD: <set to the key 'cassandra-password' in secret 'cass-cassandra'> Optional: false
POD_IP: (v1:status.podIP)
POD_NAME: cass-cassandra-0 (v1:metadata.name)
CASSANDRA_USER: cassandra
CASSANDRA_NUM_TOKENS: 256
CASSANDRA_DATACENTER: dc1
CASSANDRA_ENDPOINT_SNITCH: SimpleSnitch
CASSANDRA_KEYSTORE_LOCATION: /opt/bitnami/cassandra/certs/keystore
CASSANDRA_TRUSTSTORE_LOCATION: /opt/bitnami/cassandra/certs/truststore
CASSANDRA_RACK: rack1
CASSANDRA_TRANSPORT_PORT_NUMBER: 7000
CASSANDRA_JMX_PORT_NUMBER: 7199
CASSANDRA_CQL_PORT_NUMBER: 9042
Mounts:
/bitnami/cassandra from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from cass-cassandra-token-6hxr9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-cass-cassandra-0
ReadOnly: false
cass-cassandra-token-6hxr9:
Type: Secret (a volume populated by a Secret)
SecretName: cass-cassandra-token-6hxr9
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 98s default-scheduler Successfully assigned iot-develop/cass-cassandra-0 to master1
Normal Pulled 94s kubelet Container image "docker.io/bitnami/cassandra:4.0.0-debian-10-r17" already present on machine
Normal Created 93s kubelet Created container cassandra
Normal Started 92s kubelet Started container cassandra
Warning Unhealthy 25s kubelet Liveness probe failed: nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.
Warning Unhealthy 6s (x3 over 25s) kubelet Readiness probe failed: nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.
I am not able to see anything useful in the logs or in the description; it seems the application starts but is killed at some point, and then, as expected, the probes in the pod fail. It looks to me like you will need to tune the limits of the cluster and/or the chart to make them work properly, but I tried to install it on different clusters (even minikube or kind) and it works on all of them with the default params.
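One thing that may help narrow it down: when the process is killed like that, it is often the kernel OOM killer, and Kubernetes usually records that on the container or node status; for example (using the pod/node names from your describe output):

```bash
# If the container itself was OOM-killed, the last terminated state says so:
kubectl get pod cass-cassandra-0 -n iot-develop \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}{"\n"}'

# Node-level events can also show memory pressure or OOM kills:
kubectl describe node master1 | grep -iE 'oom|memorypressure'
```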
I faced a similar issue on a namespace which has an Istio service mesh attached to it, i.e. the Istio sidecar was getting injected into the Cassandra pod. Then I did the same install on another namespace which didn't have Istio, and everything worked fine as @carrodher mentioned above. So I guess it's something to do with the namespace, which has defaults attached to it.
It also worked for me only after setting the resources. Keeping the default settings failed with the same error as mentioned above.
When following @asarkar's suggested values:
2024-04-07T00:39:26.678948214Z Setting node as password seeder
2024-04-07T00:39:26.690811336Z cassandra 00:39:26.69 INFO ==>
2024-04-07T00:39:26.691408008Z cassandra 00:39:26.69 INFO ==> Welcome to the Bitnami cassandra container
2024-04-07T00:39:26.691990081Z cassandra 00:39:26.69 INFO ==> Subscribe to project updates by watching https://github.com/bitnami/containers>
2024-04-07T00:39:26.692563313Z cassandra 00:39:26.69 INFO ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues>
2024-04-07T00:39:26.693126085Z cassandra 00:39:26.69 INFO ==> Upgrade to Tanzu Application Catalog for production environments to access custom-configured and pre-packaged software components. Gain enhanced features, including Software Bill of Materials (SBOM), CVE scan result reports, and VEX documents. To learn more, visit https://bitnami.com/enterprise>
2024-04-07T00:39:26.693656758Z cassandra 00:39:26.69 INFO ==>
2024-04-07T00:39:26.694299040Z cassandra 00:39:26.69 INFO ==> ** Starting Cassandra setup **
2024-04-07T00:39:26.700822393Z cassandra 00:39:26.70 WARN ==> CASSANDRA_HOST not set, defaulting to system hostname
2024-04-07T00:39:26.703110132Z cassandra 00:39:26.70 INFO ==> Validating settings in CASSANDRA_* env vars..
2024-04-07T00:39:26.715371540Z cassandra 00:39:26.71 INFO ==> Initializing Cassandra database...
2024-04-07T00:39:26.819212920Z cassandra 00:39:26.81 INFO ==> Deploying Cassandra from scratch
2024-04-07T00:39:26.819846082Z cassandra 00:39:26.81 INFO ==> Starting Cassandra
2024-04-07T00:39:26.821261973Z cassandra 00:39:26.82 INFO ==> Checking that it started up correctly
2024-04-07T00:40:44.453779910Z /opt/bitnami/scripts/libos.sh: line 363: 154 Killed "${cmd[@]}" "${args[@]}" > "$logger" 2>&1
Setting node as password seeder
cassandra 01:44:36.84 INFO ==>
cassandra 01:44:36.84 INFO ==> Welcome to the Bitnami cassandra container
cassandra 01:44:36.84 INFO ==> Subscribe to project updates by watching https://github.com/bitnami/containers>
cassandra 01:44:36.84 INFO ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues>
cassandra 01:44:36.84 INFO ==> Upgrade to Tanzu Application Catalog for production environments to access custom-configured and pre-packaged software components. Gain enhanced features, including Software Bill of Materials (SBOM), CVE scan result reports, and VEX documents. To learn more, visit https://bitnami.com/enterprise>
cassandra 01:44:36.84 INFO ==>
cassandra 01:44:36.84 INFO ==> ** Starting Cassandra setup **
cassandra 01:44:36.85 WARN ==> CASSANDRA_HOST not set, defaulting to system hostname
cassandra 01:44:36.85 INFO ==> Validating settings in CASSANDRA_* env vars..
cassandra 01:44:36.86 INFO ==> Initializing Cassandra database...
cassandra 01:44:36.96 INFO ==> Deploying Cassandra from scratch
cassandra 01:44:36.96 INFO ==> Starting Cassandra
cassandra 01:44:36.96 INFO ==> Checking that it started up correctly
cassandra 01:45:36.98 INFO ==> Found CQL startup log line
cassandra 01:45:37.91 INFO ==> Nodetool reported the successful startup of Cassandra
cassandra 01:45:37.91 INFO ==> Password seeder node
cassandra 01:45:37.91 INFO ==> Trying to access CQL server @ cassandra-0.cassandra-headless.creatorscope.svc.cluster.local
cassandra 01:45:48.76 INFO ==> Accessed CQL server successfully
cassandra 01:45:48.76 INFO ==> Updating the password for the "cassandra" user...
cassandra 01:45:49.25 INFO ==> Trying to access CQL server @ cassandra-0
cassandra 01:45:55.03 INFO ==> Accessed CQL server successfully
cassandra 01:45:55.03 INFO ==> Password updated successfully
cassandra 01:45:55.05 INFO ==> ** Cassandra setup finished! **
cassandra 01:45:55.06 INFO ==> ** Starting Cassandra **
cassandra 01:45:55.06 INFO ==> Cassandra already running with PID 154 because of the initial cluster setup
cassandra 01:45:55.06 INFO ==> Tailing /opt/bitnami/cassandra/logs/cassandra_first_boot.log
Worked fine when setting the resource preset to 2xlarge.
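For completeness, assuming a recent chart version that exposes a resourcesPreset value (verify with helm show values for your version), that would look something like:

```bash
# Sketch: rely on a larger built-in preset instead of hand-tuning requests/limits.
# The resourcesPreset key and the 2xlarge preset name are assumptions based on
# recent Bitnami charts; check `helm show values bitnami/cassandra` first.
helm install cassandra bitnami/cassandra --set resourcesPreset=2xlarge
```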
Which chart: bitnami/cassandra 7.2.0
Describe the bug: Cassandra doesn't start, hence the liveness check fails and the pod gets restarted. Using the Helm chart, or just deploying the image using a Deployment object manifest, it fails to start. The only error shown on the console is the following:
The line number seems to change depending on the parameters passed to the chart/deployment, but not by much, and it's always libos.sh.
To Reproduce
Expected behavior: It's supposed to work?
Version of Helm and Kubernetes:
helm version:
kubectl version: