After some manipulations, it appears the volumes of a Cassandra replica were accidentally deleted. At least, I see errors in the firecamp logs (/var/log/firecamp/firecamp-dockervolume.ERROR) on one of the EC2 instances, which lead to a failed task event. Is there a way to re-create the failed replica without re-launching the whole Cassandra service from scratch?
Yes, this will be the general process to replace a bad volume for a service.
Wondering how the volume got deleted. FireCamp will not delete the volume, even when you delete the service. Did you accidentally delete it?
I'm sorry, I did not get this. What is the general process?
Yeah, that was my lame hands :)
Replacing the bad volume is a common feature. It is not for Cassandra only; it will work for all services.
Could you please give me some guidance on how to do that?
Please shed some light on where the stack keeps the volume IDs? They're really hard to find.
This is not simple work.
Replacing the volume itself is easy. The bad volume ID could be passed in as a parameter. The manage server could list all ServiceMembers of the service, find out which member the bad volume belongs to, create a new volume, and replace the bad volume in that ServiceMember. The volume plugin will automatically pick up the new volume.
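Until such support lands, the AWS-side part can be staged by hand. A minimal sketch of the manual equivalent (this is not FireCamp code; the volume ID, AZ, size, and type below are hypothetical placeholders):

```bash
# Hypothetical values for illustration only.
BAD_VOL=vol-0badbadbadbadbad0

# Inspect the bad volume's size, AZ, and type (works only while it still exists).
aws ec2 describe-volumes --volume-ids "$BAD_VOL" \
    --query 'Volumes[0].[Size,AvailabilityZone,VolumeType]' --output text

# Create a replacement with matching properties in the same AZ, so it can be
# attached to the EC2 instance hosting the service member.
aws ec2 create-volume --availability-zone us-east-1a \
    --size 100 --volume-type gp2
```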
However, things are more complex for each service, as different services have different internal mechanisms. We need to follow and revise Cassandra's procedure. If the data volume fails, the Cassandra node needs to be replaced; refer to https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsReplaceNode.html. If the journal volume fails, we might be able to simply replace the volume and run a node repair; refer to https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRecoverUsingJBOD.html. If the node repair fails, we have to fall back to replacing the node.
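For reference, the node-replacement procedure in the first link boils down to starting the replacement node with a JVM flag. A rough sketch only; the address and config file path are placeholders and vary by installation:

```bash
# DataStax replace-node procedure, sketched: 10.0.0.5 stands in for the
# dead node's listen_address; the env file path depends on the install.
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.5"' \
    >> /etc/cassandra/cassandra-env.sh

# Start Cassandra, then remove the flag once the node finishes bootstrapping.
service cassandra start
```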
Let's try the simple solution first. We could add code to simply replace the Cassandra volume. After the container starts, use cqlsh to log in to that container and run nodetool repair -full -local -seq. If the repair succeeds, check the data consistency from the application. If everything is good, then the issue is fixed for now. If not, you will have to recreate the Cassandra service. We will support Cassandra node replacement later.
@JuniusLuo, that would be great! Looking forward to this being implemented! And thank you for the detailed answers!
We added a tool to replace a volume for a service. When the complete solution is ready, we may remove this tool. You can get the tool from https://s3.amazonaws.com/cloudstax/firecamp/releases/latest/packages/firecamp-volume-replace.tgz
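To fetch and unpack it on the host, the usual commands apply (the URL is the one above):

```bash
wget https://s3.amazonaws.com/cloudstax/firecamp/releases/latest/packages/firecamp-volume-replace.tgz
tar -zxf firecamp-volume-replace.tgz
```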
Recovery steps:
1. firecamp-volume-replace -cluster=t1 -service=mycas -bad-volumeid=xxx -new-volumeid=xxx
2. cqlsh mycas-0.t1-firecamp.com -u newsuperuser -p super
3. nodetool repair -full -local -seq
If nodetool prints something like the output below, the repair succeeded. Go ahead and check the application.
[2017-12-13 23:02:33,585] Repair completed successfully
[2017-12-13 23:02:33,589] Repair command #1 finished in 0 seconds
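As an extra sanity check (my suggestion, not part of the steps above), nodetool status should report the replaced node as Up/Normal once the repair completes:

```bash
# Expect the replaced node's line to start with "UN" (Up/Normal);
# "DN" means the node is still down and the repair likely did not take.
nodetool status
```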
@JuniusLuo, worked like a charm! Thank you so much!
Cool. Glad it works.