Closed. ZZZanoY closed this issue 5 years ago.
Sorry for the delay @ZZZanoY I'm on vacation ATM, will have a look next week.
Hi @mhausenblas, I'm running into the same problem. Any update on this?
Oops, that one fell through the cracks. Hmm, no idea TBH; I'm not sure how to reproduce it.
In my understanding, Burry doesn't restore already-existing znodes (it doesn't overwrite). In this case, though, it seems to fail because it first sees that a znode doesn't exist, but by the time it tries to create it, the znode does exist (in our case, I guess, because Kafka has created it in the meantime).
What about ignoring these kinds of errors, or providing a flag (like --ignore-existing or something similar) that simply skips already-existing znodes?
Ah! SGTM yes. I'm currently on a tight schedule with little cycles left, anyone wants to send in a patch for it?
Hey, great tool! I just have one issue with it. I'm running a Kubernetes cluster with ZooKeeper and Kafka, both as single-instance clusters. Creating the backup from ZooKeeper and pushing it to S3 works flawlessly, but I'm unable to restore a fresh ZooKeeper from that backup while Kafka is running. This is what I ran:
burry --endpoint=localhost:2181 --operation=restore --target=s3 --snapshot=1534148038 --credentials=s3.amazonaws.com,ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID,SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY,BUCKET=Example_bucket
From what I can tell, burry first successfully restores /brokers/topics/ExampleTopic-Source, but before it manages to restore /brokers/topics/ExampleTopic-Source/partitions, Kafka has already created that znode.
Is this a known limitation or am I doing something wrong? Thanks!