alexbarta opened this issue 5 years ago
Hey Alex,
It seems that the restore process tried to restore into itself. Try doing the same, but run the restore pod with labels different from the cassandra pods'. For example:
kubectl run cassandra-restore --rm \
  --labels='app=cassandra-cain' \
  --serviceaccount='cassandra-backup' \
  -i --tty --restart=Never \
  --image-pull-policy=IfNotPresent \
  --image nuvo/cain:0.5.1 \
  --env 'AWS_ACCESS_KEY_ID=admin' \
  --env 'AWS_SECRET_ACCESS_KEY=******' \
  --env 'AWS_S3_NO_SSL=true' \
  --env 'AWS_S3_FORCE_PATH_STYLE=true' \
  --env 'AWS_S3_ENDPOINT=http://minio-svc.default.svc.cluster.local:9000' \
  --command -- sh
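Before running the restore, you can also sanity-check which pods match the selector you point cain at (app=cassandra below is only an example; use whatever labels your chart actually applies) and confirm the restore pod is not among them:

# pods that match the example selector, i.e. the ones cain would treat as Cassandra nodes
kubectl get pods -l app=cassandra -o name
# labels on the restore pod itself; they must NOT match the selector above
kubectl get pod cassandra-restore --show-labels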
Yes, I'm an idiot :( Thanks Maor
Not at all! In fact, this can and should be more visible to the user.
For example, if cain printed out the list of pods that it found, you would have seen this right away!
Would you like to submit a PR to implement this behavior? ;)
cc @HagaiBarel, do you approve?
Sure, why not! At the moment I'm struggling with backup/restore of more complex schemas, and I have found other cryptic error messages. I will raise separate issues for those.
Hi Maor,
I'm running into this error: cain fails saying that the data directory does not exist, when it actually exists.
My cassandra-0 pod path:
When running older versions I don't get the Cassandra data dir error.
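For what it's worth, the directory is definitely there when I check inside the pod, roughly like this (the path below is just Cassandra's default data dir, shown as a placeholder rather than my exact layout):

# confirm the data directory exists and list its contents inside the pod
kubectl exec cassandra-0 -- ls -ld /var/lib/cassandra/data
kubectl exec cassandra-0 -- ls /var/lib/cassandra/data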
Regards