Open toyangdon opened 6 years ago
We're running into this same problem. Does anyone know the purpose of the glusterfs-run volume with an emptyDir type: https://github.com/gluster/gluster-kubernetes/blob/master/deploy/kube-templates/glusterfs-daemonset.yaml#L101?
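For reference, the relevant part of the template looks roughly like this (paraphrased from the linked file, so the exact layout and line numbers may have drifted). The notable detail is that `/run` is backed by an `emptyDir`, which survives container restarts within the same pod, so brick pidfiles written there can outlive a crashed glusterd:

```yaml
# Paraphrased excerpt from deploy/kube-templates/glusterfs-daemonset.yaml;
# verify against the actual file before relying on it.
containers:
  - name: glusterfs
    volumeMounts:
      - name: glusterfs-run
        mountPath: "/run"
volumes:
  - name: glusterfs-run
    emptyDir: {}
```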
We are also observing this issue, running on gluster 4.1.7.
glusterd claims it found an already-running brick, and that brick is never actually started. The containers that rely on it hang or enter a crash loop. It's pretty easy to reproduce: just hard-power off the VM hosting gluster, or kill -9 the gluster processes (a sketch follows below).
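A minimal reproduction sketch, assuming a glusterfs pod deployed from the gluster-kubernetes DaemonSet (the pod name and namespace below are placeholders):

```sh
# Hard-kill the gluster processes inside the pod; kubelet restarts the
# container, but the pidfiles in /run (the emptyDir) are left behind.
kubectl -n <namespace> exec <glusterfs-pod> -- pkill -9 glusterfsd
kubectl -n <namespace> exec <glusterfs-pod> -- pkill -9 glusterd

# After the restart, the affected bricks show as offline:
kubectl -n <namespace> exec <glusterfs-pod> -- gluster volume status
```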
Should this issue be moved to bugzilla? https://bugzilla.redhat.com/
Same for me (gluster 4.1.7). Any news on this issue?
Could it be that the brick's pidfile contains a pid that now points to some other running process? If this is still reproducible, can you check the pid?
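For example, something like this inside the glusterfs container should show whether the recorded pid has been recycled by an unrelated process (pidfile locations vary between gluster versions; `/var/run/gluster` is typical for 4.x):

```sh
# For each brick pidfile, print the pid and what that pid is actually running.
# If the command is anything other than glusterfsd, glusterd is being fooled
# by a stale pid that was recycled after the hard kill.
for f in $(find /var/run/gluster -name '*.pid' 2>/dev/null); do
  pid=$(cat "$f")
  echo "== $f (pid $pid) =="
  ps -p "$pid" -o pid,comm,args || echo "no such process"
done
```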
When I recreate the gluster container, some bricks on this node go offline. glusterd.log shows some bricks being treated as "already running", but in fact they are not running. If I remove the volume mount "glusterfs-run", all bricks come back to normal after recreating the gluster container.
What is the purpose of the volume mount "glusterfs-run" in glusterfs-daemonset.yaml? Can I remove it?
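If removing it really is safe (worth confirming with the maintainers first), a workaround sketch would be to drop both the volumeMount and the volume from the DaemonSet so that `/run` is recreated fresh on every container start. The DaemonSet name below is taken from the template and may differ in your deployment:

```sh
kubectl -n <namespace> edit daemonset glusterfs
# ...then delete these two stanzas and save:
#
#   volumeMounts:
#   - name: glusterfs-run
#     mountPath: "/run"
#
#   volumes:
#   - name: glusterfs-run
#     emptyDir: {}
```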