Closed: willfindlay closed this issue 2 years ago.
Hi @willfindlay
Why did you change the data mountpath to /data? Are you using a custom image? Please note that Bitnami Cassandra image expects the data to be persisted at /bitnami/cassandra and data persistence won't work if you switch the mountpath.
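For reference, keeping the image default looks roughly like this in the values file (this is only a sketch; the exact parameter name may differ between chart versions, please check the chart's values.yaml):

```yaml
# Sketch only: keep the data directory at the image default so the init
# scripts and persistence keep working. The mountPath key name is an
# assumption; verify it against the chart's values.yaml.
persistence:
  enabled: true
  mountPath: /bitnami/cassandra
```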
> Hi @willfindlay Why did you change the data mountpath to /data? Are you using a custom image?
I was getting a permission error and changing the mount point seemed to fix it.
> Please note that Bitnami Cassandra image expects the data to be persisted at /bitnami/cassandra and data persistence won't work if you switch the mountpath.
I didn't know that, thanks for the tip. Do you think this could actually be the reason the second pod fails to start up? Perhaps it's getting hung up on the step where it tries to mount the PVC?
Here's the specific error I'm getting without the custom pathname:
cassandra 21:46:42.05
cassandra 21:46:42.06 Welcome to the Bitnami cassandra container
cassandra 21:46:42.06 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-cassandra
cassandra 21:46:42.06 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-cassandra/issues
cassandra 21:46:42.06
cassandra 21:46:42.06 INFO ==> ** Starting Cassandra setup **
cassandra 21:46:42.10 INFO ==> Validating settings in CASSANDRA_* env vars..
cassandra 21:46:42.15 INFO ==> Initializing Cassandra database...
mkdir: cannot create directory '/bitnami/cassandra/data': Permission denied
Ah, I was able to fix the permission error by setting volumePermissions.enabled. I'll double-check to see if that also resolves my original issue.
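In case it helps anyone else, this is roughly what I added to my values (a sketch of just the relevant change, not my full file):

```yaml
# cassandra.yml (sketch): enable the chart's init container that fixes
# ownership of the data volume before the Cassandra container starts.
volumePermissions:
  enabled: true
```

The same thing can be passed on the command line with --set volumePermissions.enabled=true.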
Great @willfindlay!! Please keep us updated.
The volumePermissions.enabled solution is perfect for StorageClasses that do not support adapting the ownership of the filesystem based on the pod's SecurityContext.
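When the StorageClass does honor the pod SecurityContext, the chart defaults along these lines are what make the volume writable for the non-root user (the key names here are an assumption for this chart version, please check values.yaml):

```yaml
# Sketch: ownership is adapted via fsGroup when the storage provisioner
# supports it; 1001 is the usual non-root Bitnami UID/GID.
podSecurityContext:
  enabled: true
  fsGroup: 1001
containerSecurityContext:
  enabled: true
  runAsUser: 1001
```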
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Oops, forgot to circle back here. Making those changes did indeed resolve my issue.
Which chart: cassandra-9.0.4
Describe the bug
After deploying Cassandra with a replica count of 2 and running kubectl rollout restart statefulset cassandra, the pod cassandra-1 fails its readiness and liveness checks with Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.
To Reproduce
Steps to reproduce the behavior:
Set up a local minikube cluster with 3 nodes:
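For example (multi-node clusters assume a reasonably recent minikube):

```console
minikube start --nodes 3
```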
Run helm install cassandra bitnami/cassandra -f cassandra.yml using the following values (a sketch is shown after this step):
Observe that it works just fine:
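A minimal cassandra.yml along these lines matches the setup described here (this is only a sketch; the replica count is the only override assumed, everything else is left at the chart defaults):

```yaml
# cassandra.yml (sketch): two Cassandra pods in the StatefulSet.
replicaCount: 2
```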
Then try to restart the statefulset:
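That is, the same rollout restart as in the description above:

```console
kubectl rollout restart statefulset cassandra
```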
Observe that it gets stuck with cassandra-1 restarting over and over. Use kubectl describe pod cassandra-1 to get more info. Looks like the following:
Expected behavior
It should restart both pods in the statefulset without issues.
Version of Helm and Kubernetes:
helm version:
kubectl version: