cloudstax / firecamp

Serverless Platform for the stateful services
https://www.cloudstax.io
Apache License 2.0
209 stars 20 forks

Cassandra replica restoration #14

Closed jazzl0ver closed 6 years ago

jazzl0ver commented 6 years ago

After some manipulations, it appears the volumes of a Cassandra replica were accidentally deleted. At least, I see the following in the FireCamp logs (/var/log/firecamp/firecamp-dockervolume.ERROR) on one of the EC2 instances:

E1212 12:36:32.039874      13 volume.go:851] detach journal volume from last owner error NotFound requuid 172.22.5.224-bda1319c0a71481456f7689bb2b61571-1513082191 {vol-00fd036927e65754a /dev/xvdj vol-0d1b18609d7b32e1c /dev/xvdk} &{bda1319c0a71481456f7689bb2b61571 2 cass-qa-2 us-east-1c arn:aws:ecs:us-east-1:ID:task/e36d526c-1007-4cf4-a3ca-ff962674c632 arn:aws:ecs:us-east-1:ID:container-instance/f822e87a-47c1-4a68-a8e8-9ccbe23e9009 i-0bd3125d1e463d369 1513070833991537870 {vol-00fd036927e65754a /dev/xvdj vol-0d1b18609d7b32e1c /dev/xvdk} 127.0.0.1 [0xc4202eaa20 0xc4202eaa80 0xc4202eaab0 0xc4202eab10]}
E1212 12:36:32.039896      13 volume.go:729] Mount failed, get service member error NotFound, serviceUUID bda1319c0a71481456f7689bb2b61571, requuid 172.22.5.224-bda1319c0a71481456f7689bb2b61571-1513082191
E1212 12:36:43.873859      13 ec2.go:222] failed to DescribeVolumes vol-0d1b18609d7b32e1c error InvalidVolume.NotFound: The volume 'vol-0d1b18609d7b32e1c' does not exist.
        status code: 400, request id: a3acc2b9-f47a-4ec2-8364-74b627cc89c0 requuid 172.22.5.224-bda1319c0a71481456f7689bb2b61571-1513082203
E1212 12:36:43.873876      13 ec2.go:177] GetVolumeInfo vol-0d1b18609d7b32e1c error InvalidVolume.NotFound: The volume 'vol-0d1b18609d7b32e1c' does not exist.
        status code: 400, request id: a3acc2b9-f47a-4ec2-8364-74b627cc89c0 requuid 172.22.5.224-bda1319c0a71481456f7689bb2b61571-1513082203
E1212 12:36:43.873884      13 ec2.go:162] GetVolumeState vol-0d1b18609d7b32e1c error InvalidVolume.NotFound: The volume 'vol-0d1b18609d7b32e1c' does not exist.
        status code: 400, request id: a3acc2b9-f47a-4ec2-8364-74b627cc89c0 requuid 172.22.5.224-bda1319c0a71481456f7689bb2b61571-1513082203
E1212 12:36:43.873893      13 volume.go:1227] GetVolumeState error NotFound volume vol-0d1b18609d7b32e1c ServerInstanceID i-0bd3125d1e463d369 device /dev/xvdk requuid 172.22.5.224-bda1319c0a71481456f7689bb2b61571-1513082203

This leads to the following task event:

Status reason: CannotStartContainerError: API error (500): error while mounting volume '/var/lib/docker/plugins/4f11459ccd04e2f94009d96f631266758d8c3bc4fb120e1f9376a9bd568c1792/rootfs': VolumeDriver.Mount: Mount failed, get service member error NotFound, serviceUUID bda

Is there a way to re-create the failed replica without re-launching the whole Cassandra service from scratch?

JuniusLuo commented 6 years ago

Yes, this will be the general process to replace the bad volume for the service.

I'm wondering how the volume got deleted. FireCamp does not delete the volume, even when you delete the service. Did you accidentally delete the volume?

jazzl0ver commented 6 years ago

> Yes, this will be the general process to replace the bad volume for the service.

I'm sorry, I didn't get this. What is the general process?

Yeah, that was my lame hands :)

JuniusLuo commented 6 years ago

> I'm sorry, I didn't get this. What is the general process?

Replacing a bad volume is a common feature. It is not specific to Cassandra; it will work for all services.

jazzl0ver commented 6 years ago

Could you please give me some guidance on how to do that?

jazzl0ver commented 6 years ago

Could you please shed some light on where the stack keeps the volume IDs? They're really hard to find.

JuniusLuo commented 6 years ago

This is not simple work.

Replacing the volume is easy. The bad volume ID could be passed in as a parameter. The manage server could list all ServiceMembers of the service, find out which member the bad volume belongs to, create a new volume, and replace the bad volume in that service member. The volume plugin will automatically pick up the new volume.
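
For illustration only, here is a minimal Go sketch of that lookup-and-swap step. The serviceMember struct, its fields, and the data/journal assignments below are hypothetical stand-ins rather than FireCamp's actual catalog types; persistence and the EBS attach/detach handling are omitted.

```go
package main

import (
	"errors"
	"fmt"
)

// serviceMember is a hypothetical, simplified stand-in for a FireCamp service
// member record: one member of the service plus the EBS volumes assigned to it.
type serviceMember struct {
	MemberName      string
	DataVolumeID    string
	JournalVolumeID string
}

// replaceBadVolume finds which member owns badVolID (as its data or journal
// volume) and records newVolID in its place. The volume plugin would then pick
// up the new volume on the next mount.
func replaceBadVolume(members []*serviceMember, badVolID, newVolID string) (*serviceMember, error) {
	for _, m := range members {
		switch badVolID {
		case m.DataVolumeID:
			m.DataVolumeID = newVolID
			return m, nil
		case m.JournalVolumeID:
			m.JournalVolumeID = newVolID
			return m, nil
		}
	}
	return nil, errors.New("no service member owns volume " + badVolID)
}

func main() {
	// Volume IDs and member name taken from the log excerpt above; the
	// data/journal split is assumed, and the new volume ID is a placeholder.
	members := []*serviceMember{
		{MemberName: "cass-qa-2", DataVolumeID: "vol-00fd036927e65754a", JournalVolumeID: "vol-0d1b18609d7b32e1c"},
	}
	m, err := replaceBadVolume(members, "vol-0d1b18609d7b32e1c", "vol-0123456789abcdef0")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("member %s now uses journal volume %s\n", m.MemberName, m.JournalVolumeID)
}
```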

Still, things are more complex for each service, as different services have different internal mechanisms. We need to follow and adapt Cassandra's own procedure. If the data volume fails, the Cassandra node needs to be replaced; refer to https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsReplaceNode.html. If the journal volume fails, we might be able to simply replace the volume and run a node repair; refer to https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRecoverUsingJBOD.html. If the node repair fails, we have to fall back to replacing the node.

JuniusLuo commented 6 years ago

Let's try the simple solution first. We could add code to simply replace the Cassandra volume. After the container starts, use cqlsh to log in to that container and run nodetool repair -full -local -seq. If the repair succeeds, check the data consistency from the application. If everything is good, then the issue is fixed for now. If not, you will have to recreate the Cassandra service. We will support full Cassandra node replacement later.

jazzl0ver commented 6 years ago

@JuniusLuo, that would be great! Looking forward to this being implemented! And thank you for the detailed answers!

JuniusLuo commented 6 years ago

We added a tool to replace a volume for a service. When the complete solution is ready, we can remove this tool. You can get the tool from https://s3.amazonaws.com/cloudstax/firecamp/releases/latest/packages/firecamp-volume-replace.tgz

Recovery steps:

  1. create a new volume in the same availability zone (see the sketch after these steps for one way to do this with the AWS SDK for Go).
  2. run the tool: firecamp-volume-replace -cluster=t1 -service=mycas -bad-volumeid=xxx -new-volumeid=xxx
  3. log in to the Cassandra node. For example, if the first member is bad: cqlsh mycas-0.t1-firecamp.com -u newsuperuser -p super
  4. run nodetool repair -full -local -seq. If nodetool prints something like the lines below, the repair succeeded. Go ahead and check the application.
    [2017-12-13 23:02:33,585] Repair completed successfully
    [2017-12-13 23:02:33,589] Repair command #1 finished in 0 seconds
  5. check the application data consistency.
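
For step 1, here is one possible sketch using the AWS SDK for Go (the AWS console or CLI works just as well). The availability zone below comes from the log excerpt in the first comment; the size and volume type are assumptions and should match the failed volume's original settings.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := ec2.New(sess)

	// Create the replacement EBS volume in the same availability zone as the
	// member's EC2 instance. Size and type here are assumptions; match them to
	// the original volume.
	vol, err := svc.CreateVolume(&ec2.CreateVolumeInput{
		AvailabilityZone: aws.String("us-east-1c"),
		Size:             aws.Int64(100), // GiB
		VolumeType:       aws.String("gp2"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Pass this ID to firecamp-volume-replace as -new-volumeid.
	fmt.Println("new volume:", aws.StringValue(vol.VolumeId))
}
```
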
jazzl0ver commented 6 years ago

@JuniusLuo, worked like a charm! Thank you so much!

JuniusLuo commented 6 years ago

Cool. Glad it works.