Could you post your docker-compose.yml?
This looks like your issue, if that 10.0.0.* address is from the overlay network: https://github.com/docker-library/cassandra/issues/168 (see also https://github.com/docker-library/cassandra/issues/169).
I think you need a -e CASSANDRA_BROADCAST_ADDRESS=10.0.0.*; see the section "For separate machines (ie, two VMs ..." at https://github.com/docker-library/docs/tree/master/cassandra#make-a-cluster
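For reference, the two-machine example from those docs looks roughly like this (the 10.42.42.42 / 10.43.43.43 addresses are placeholders for your hosts' routable IPs, and `cassandra:tag` stands in for a concrete version):

```sh
# First machine: broadcast its own routable address
docker run --name some-cassandra -d \
  -e CASSANDRA_BROADCAST_ADDRESS=10.42.42.42 \
  -p 7000:7000 cassandra:tag

# Second machine: broadcast its own address and seed from the first
docker run --name some-cassandra -d \
  -e CASSANDRA_BROADCAST_ADDRESS=10.43.43.43 \
  -p 7000:7000 \
  -e CASSANDRA_SEEDS=10.42.42.42 cassandra:tag
```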
I don't really see anything we can change in the image to make this easier, unfortunately. The best I can recommend from here is to try the Docker Community Forums, the Docker Community Slack, or Stack Overflow for further help setting up and configuring a cluster.
(Additionally, strapdata/elassandra:latest is not this image.)
I have 3 elassandra nodes running in Docker containers.
The containers were created like this:
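(The exact command isn't preserved above; as a purely hypothetical sketch, one node on a swarm overlay network might have been started along these lines, where the network name, container name, and seed address are all assumptions:)

```sh
# Hypothetical sketch only: one of the three nodes on an overlay
# network. "es-net", "elassandra-1", and the seed IP are assumptions;
# CASSANDRA_SEEDS is inherited from the official cassandra image.
docker run --name elassandra-1 -d \
  --network es-net \
  -e CASSANDRA_SEEDS=10.0.0.1 \
  strapdata/elassandra:latest
```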
The cluster worked fine for a couple of days after creation; Elasticsearch and Cassandra were both perfect.
Currently, however, all Cassandra nodes have become unreachable to each other. nodetool status on all nodes looks like this:
Here the only UN (Up/Normal) entry is the current host, 10.0.0.1; the rest are marked DN (Down/Normal). It is the same on all the other nodes.
nodetool describecluster on 10.0.0.1 looks like this (describecluster lists the schema version each endpoint is on; a healthy cluster shows a single schema version):
When attached to the first node, it just keeps repeating this information:
After a while, when one of the nodes is restarted:
Tried so far (a sketch of these commands follows below):
- Restarting all containers at the same time
- Restarting all containers one after another
- Restarting Cassandra in all containers: service cassandra restart
- nodetool disablegossip, then enabling it again
- nodetool repair, which failed with: Repair command #1 failed with error Endpoint not alive: /10.0.0.2
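Roughly, the per-node attempts looked like this (container names such as elassandra-1 are placeholders):

```sh
# Sketch of the attempted fixes, run against each node in turn;
# container names (elassandra-1, ...) are placeholders.
docker exec elassandra-1 sh -c 'service cassandra restart'
docker exec elassandra-1 nodetool disablegossip
docker exec elassandra-1 nodetool enablegossip
docker exec elassandra-1 nodetool repair   # fails: Endpoint not alive: /10.0.0.2
```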
It seems that all the node schemas are different, but I still don't understand why the nodes are marked as down to each other.