Closed: zhangkesheng closed this issue 5 years ago.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now, please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now, please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Hello All,
I have deployed Kafka and ZooKeeper in a Kubernetes cluster with three replicas, using zookeeper.yaml. When the three nodes were simultaneously powered off and restarted, ZooKeeper failed to restart. The myid distribution after the restart was as follows:
zk-0 is assigned to the node with myid=2;
zk-1 is assigned to the node with myid=3;
zk-2 is assigned to the node with myid=1;
After that, ZooKeeper failed to restart. Hope to get a solution, thanks!
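For context, here is a minimal sketch (an assumption about the manifest, not taken from the reporter's actual zookeeper.yaml) of the common pattern in which each pod's myid file is derived from its StatefulSet pod ordinal. Under that pattern the expected mapping is zk-0 -> myid 1, zk-1 -> myid 2, zk-2 -> myid 3, so the mapping reported above (zk-0 -> 2, zk-1 -> 3, zk-2 -> 1) would mean each pod came back with a myid belonging to a different member of the ensemble, which ZooKeeper cannot tolerate:

```shell
#!/bin/sh
# Sketch, assuming myid is generated from the pod ordinal at init time.
# The ordinal is the suffix of the StatefulSet pod hostname (zk-0, zk-1, ...).
HOST="zk-0"            # on a real pod this would be $(hostname); zk-0 is an example
ORD="${HOST##*-}"      # strip everything up to the last dash -> pod ordinal
MYID=$((ORD + 1))      # myid must be stable across restarts for a given server
echo "$MYID"           # this value would be written to <dataDir>/myid
```

If the myid files live on node-local storage rather than per-pod persistent volumes, pods rescheduled onto different nodes after a full power-off can pick up another server's myid, which matches the mismatch described above.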