techtronix-868 opened 1 week ago
Did you check the filesystem hosting your brick is actually in good condition and mounted?
The filesystem is in good state and mounted; based on that, I figured the data was lost under the mountpoint.
Looks like the bricks were not mounted while starting the volume (after reboot). If the backend brick paths are mounted, please try `gluster volume start internaldatastore3 force`. GlusterFS will not delete the `.glusterfs` directory even after a volume delete, so this is most likely a brick mount issue. Please check `df /datastore3` or `mount | grep datastore3` on each node.
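The mount check suggested above can be sketched as a small shell helper. The brick path and volume name are taken from this thread, but the script itself is only an illustrative sketch (the actual `gluster` command is left commented out):

```shell
#!/bin/sh
# Sketch: confirm a brick path is an active mountpoint (not just a bare
# directory on the root filesystem) before force-starting the volume.

is_mounted() {
    # Exact match on the mountpoint column (field 2) of /proc/mounts.
    awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' /proc/mounts
}

BRICK_MOUNT=/datastore3   # brick path from the thread

if is_mounted "$BRICK_MOUNT"; then
    echo "$BRICK_MOUNT is mounted; safe to start the volume"
    # gluster volume start internaldatastore3 force
else
    echo "$BRICK_MOUNT is NOT mounted; fix the mount first" >&2
fi
```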
The /dev/sda is mounted on the same point. I performed 12 iterations on my rented Dell bare-metal server. This only happens when Gluster is not able to exit gracefully. Does Gluster have a write cache that gets written to the mount points? Are there transactions that can be used to ensure the data has been written to disk?
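On the write-cache question: Gluster does buffer writes client-side via its write-behind translator (the `performance.write-behind` volume option), and the kernel page cache buffers writes on the bricks as well. Neither is a transaction; the standard POSIX way to guarantee data has reached stable storage is `fsync(2)` on the file (and on the containing directory for metadata). A minimal sketch from the shell, with an illustrative file path:

```shell
#!/bin/sh
# Sketch: force buffered writes to stable storage before a power cycle.
# The file path is illustrative, not from the issue.
f=/tmp/durability-demo.txt

# dd with conv=fsync calls fsync(2) on the output file before exiting,
# so the data has reached the backing device when dd returns.
printf 'important data\n' | dd of="$f" conv=fsync status=none

# Alternatively, coreutils sync can flush a specific file:
sync "$f"
```

If writes that were never fsynced disappear after a hard power cycle, that is expected page-cache behavior rather than Gluster-specific data loss; the missing `.glusterfs` directory, though, points at the brick mount issue discussed above.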
Description of problem: I have configured a GlusterFS setup with three storage nodes in a replica configuration. Recently, I observed unexpected behavior when two of the nodes were power cycled. After the power cycle, I noticed that the .glusterfs directory and other files under the volume mount point were missing. Additionally, the GlusterFS brick did not come up as expected, which was evident from the logs in bricks/datastore3.log.
The exact command to reproduce the issue:
The full output of the command that failed:
- The operating system / glusterfs version: GlusterFS 9.4, OS release: AlmaLinux 8.6
Note: Please hide any confidential data which you don't want to share in public like IP address, file name, hostname or any other configuration