Observed behavior

With a single PVC in place (brick-mux not enabled), reboot gluster-node-1; post reboot, the brick process on gluster-node-1 is not running.

The following messages are continuously seen in the glusterd2 logs:

Expected/desired behavior

The brick process should be running on the node after a gluster node reboot.
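For context, a minimal sketch of how the missing brick process can be confirmed on the rebooted node. The glusterd2.service unit name is taken from the steps below; glusterfsd as the brick-side process name and the use of journalctl assume a standard systemd install.

```sh
# On another gluster node: bricks hosted on gluster-node-1 show up as not running
glustercli volume status

# On gluster-node-1 itself: no brick (glusterfsd) process is present
pgrep -af glusterfsd

# Follow the glusterd2 logs where the repeated messages show up
journalctl -u glusterd2.service -f
```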
Details on how to reproduce (minimal and precise)
1) Create a 3-node GCS setup using Vagrant.
2) Create a PVC (brick-mux is not enabled).
3) Reboot gluster-node-1 and check glustercli volume status on the other gluster nodes.
4) I had enabled glusterd2 with "systemctl enable glusterd2.service" on the gluster node, but for some reason the glusterd2 process did not come up automatically, so reboot the node again.
5) This time the glusterd2 service started automatically; check glustercli volume status again (a command-level sketch of these steps is given below).
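A rough command-level sketch of the steps above. It assumes the gluster/gcs Vagrant environment, node names gluster-node-1/gluster-node-2, a kubeconfig pointing at the GCS cluster, and a hypothetical pvc.yaml whose storageClassName (glusterfs-csi here) matches whatever storage class the GCS deployment created; adjust names as needed.

```sh
# 1) Bring up the 3-node GCS vagrant setup
vagrant up

# 2) Create a PVC (storageClassName is an assumption; use the class created by GCS)
cat <<'EOF' > pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: glusterfs-csi
EOF
kubectl apply -f pvc.yaml

# 3) Reboot gluster-node-1, then check volume status from another gluster node
vagrant ssh gluster-node-1 -c 'sudo reboot'
vagrant ssh gluster-node-2 -c 'glustercli volume status'

# 4) glusterd2 did not come up automatically even though the unit was enabled; verify and reboot again
vagrant ssh gluster-node-1 -c 'systemctl is-enabled glusterd2.service && systemctl status glusterd2.service'
vagrant ssh gluster-node-1 -c 'sudo reboot'

# 5) glusterd2 now starts automatically; check volume status again
vagrant ssh gluster-node-2 -c 'glustercli volume status'
```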
Information about the environment:
Glusterd2 version used (e.g. v4.1.0 or master): v6.0-dev.115.gitf469248
Operating system used:
Glusterd2 compiled from sources, as a package (rpm/deb), or container:
Using External ETCD: (yes/no, if yes ETCD version): yes
If container, which container image:
Using kubernetes, openshift, or direct install:
If kubernetes/openshift, is gluster running inside kubernetes/openshift or outside: kubernetes
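For the unfilled fields above, roughly how the details can be collected on a gluster node. The glusterd2 --version flag and the rpm package name are assumptions (use whichever applies to the install method), and the etcd endpoint is a placeholder.

```sh
# Operating system
cat /etc/os-release

# glusterd2 build (flag/package name assumed; skip whichever does not apply)
glusterd2 --version
rpm -q glusterd2

# External etcd health (replace the endpoint with the real one)
ETCDCTL_API=3 etcdctl --endpoints=http://<etcd-host>:2379 endpoint health
```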