walterEri opened this issue 4 years ago
@walterEri It seems that the bookie is using an OLD volume that contains OLD data. You need to format the disk if you re-use an OLD volume for a new bookie.
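If a volume really must be reused, the disk can be wiped with the BookKeeper CLI before the bookie starts. A minimal sketch, assuming the standard `bin/bookkeeper` tool is available in the pod; exact paths and config depend on your deployment:

```shell
# Run on the bookie host/pod BEFORE the bookie process starts.
# bookieformat wipes the journal and ledger directories configured
# in the bookie's configuration file.
bin/bookkeeper shell bookieformat -nonInteractive -force

# Alternatively, delete the contents of the configured journal and
# ledger directories on the old volume by hand (paths are deployment-specific).
```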
In my scenario I have 3 bookies, with each pod attached to a separate PV and PVC. Then I scaled down to 2 pods and deleted the PV and PVC of the 3rd pod, so we don't have the OLD volume of the 3rd pod.
Then I scaled back up to 3 bookies, so it creates a new (3rd) pod and attaches a new PV and PVC to it. Even so, I am getting the same error.
I see. @walterEri when you scale down, you need to run the `bin/bookkeeper shell decommissionbookie` command to decommission the removed bookie before scaling up again.
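A sketch of that decommission step; the bookie ID below is a placeholder, substitute the ID (host:port) of the bookie you removed:

```shell
# Run from a pod that has the BookKeeper CLI and access to the cluster metadata.
# The bookie ID is a placeholder -- use the actual ID of the removed bookie.
bin/bookkeeper shell decommissionbookie -bookieid <removed-bookie-host>:3181

# The command blocks until all ledger fragments stored on that bookie have been
# re-replicated to other bookies, so it can run for a long time on large clusters.
```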
I tried decommissioning the bookie, but it ran for more than a day, so I killed the process.
@sijie my decommission of the bookie is still not finished; it's taking a very long time to execute. Is there any other way to decommission the bookie?
You need to scale up the auto-recovery job, because decommissioning a bookie requires re-replicating the entries that were originally stored on that bookie.
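With the Pulsar Helm chart, auto-recovery runs as its own workload; a hedged sketch of scaling it up (the deployment name `pulsar-recovery` is an assumption, check `kubectl get deploy` for the real name in your release):

```shell
# Hypothetical deployment name -- look up the actual one with `kubectl get deploy`.
kubectl scale deployment pulsar-recovery --replicas=3

# Each auto-recovery replica works on re-replicating under-replicated ledger
# fragments, so more replicas can shorten the decommission time.
```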
Let's say that in an unexpected accident we lost all of the bookie's data. Is there any option to force-delete the related ledgers and force-decommission the bookie?
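If the data on the lost bookie is truly unrecoverable, one possible (destructive) approach is to delete the affected ledgers with the BookKeeper shell. This is a sketch only; the ledger ID is a placeholder, and deleting a ledger permanently destroys its data for all consumers:

```shell
# List ledgers that auto-recovery has marked as under-replicated.
bin/bookkeeper shell listunderreplicated

# DESTRUCTIVE: permanently delete a ledger whose data cannot be recovered.
# <ledger-id> is a placeholder for an ID from the list above.
bin/bookkeeper shell deleteledger -ledgerid <ledger-id>
```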
> @walterEri It seems that the bookie is using an OLD volume that contains OLD data. You need to format the disk if you re-use an OLD volume for a new bookie.
Hi @sijie, could you advise how to check for the OLD volume or OLD data? Thanks.
Describe the bug: we have deployed Pulsar using the Helm chart in an OpenShift k8s environment. When we scale a bookie whose volume was deleted back up, we encounter the issue below.
To Reproduce: steps to reproduce the behavior:
Expected behavior: scaling up the bookie pods should not cause this problem.