techtronix-868 opened 5 months ago
Did you check the filesystem hosting your brick is actually in good condition and mounted?
The filesystem is in a good state and mounted; based on that, I figured the data was lost under the mountpoint.
Looks like the bricks were not mounted when the volume was started (after reboot). If the backend brick paths are mounted, please try `gluster volume start internaldatastore3 force`. GlusterFS will not delete the `.glusterfs` directory even after a volume delete, so this is most likely a brick mount issue. Please check `df /datastore3` or `mount | grep datastore3` on each node.
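To make those checks concrete, here is a minimal sketch, assuming the brick path `/datastore3` and the volume name `internaldatastore3` from this thread:

```bash
# Verify the brick filesystem is actually mounted on each node
df -h /datastore3
mount | grep datastore3

# Confirm the brick directory still contains its data and the .glusterfs metadata
ls -a /datastore3

# If the brick paths are mounted but the brick processes did not start,
# force-start the volume (run on any node in the trusted pool)
gluster volume start internaldatastore3 force

# Confirm the bricks are now online
gluster volume status internaldatastore3
```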
/dev/sda is mounted on the same point. I performed 12 iterations on my rented Dell bare-metal server; this only happens when Gluster is not able to exit gracefully. Does Gluster have a write cache that gets written to the mount points? Are there transactions that can be used to ensure the data has been written to disk?
GlusterFS has several performance translators (e.g. `performance.write-behind`) that could cause files not yet written to the underlying brick to be lost during a power-loss event. I would be more concerned with the underlying storage subsystem's I/O mode: do your systems use RAID, and does the controller/HBA have a battery backup? If yes, which mode are they configured in, write-back or write-through?

Another thing to look out for is that journaled filesystems like XFS may not mount properly, or in a timely manner, after a sudden or unexpected shutdown/reboot. This could prevent glusterfsd from attaching to the affected storage device. Is this problem isolated to a single host's bricks, or is it sporadic (i.e. random bricks in the volume fail to start after an unexpected shutdown/reboot)? Everything points to the underlying storage configuration as the culprit, and Gluster's inability to start properly is merely a consequence.
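As a rough sketch of how to act on that (the volume name and the `/dev/sda` device are taken from this thread; verify the option names against your 9.4 install):

```bash
# Inspect the client-side caching translator mentioned above
gluster volume get internaldatastore3 performance.write-behind

# Disable it if durability during power loss matters more than write throughput
gluster volume set internaldatastore3 performance.write-behind off

# After an unexpected reboot, check whether XFS replayed its log cleanly
# when the brick filesystem was mounted
dmesg | grep -i xfs

# Optional read-only consistency check of the brick device
# (the filesystem must be unmounted; remount afterwards via its fstab entry)
umount /datastore3
xfs_repair -n /dev/sda
mount /datastore3
```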
Description of problem: I have configured a GlusterFS setup with three storage nodes in a replica configuration. Recently, I observed unexpected behavior when two of the nodes were power cycled. After the power cycle, I noticed that the .glusterfs directory and other files under the volume mount point were missing. Additionally, the GlusterFS brick did not come up as expected, which was evident from the logs in bricks/datastore3.log.
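For reference, a minimal sketch of diagnostics that narrow down whether the bricks came back and whether the replica needs healing, assuming the volume name `internaldatastore3` and the default brick log location under `/var/log/glusterfs/bricks/`:

```bash
# Check whether all bricks are online and which ones failed to start
gluster volume status internaldatastore3
gluster volume info internaldatastore3

# Inspect the brick log for start-up failures after the power cycle
less /var/log/glusterfs/bricks/datastore3.log

# Once the bricks are back, list entries that still need self-heal
gluster volume heal internaldatastore3 info
```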
The exact command to reproduce the issue:
The full output of the command that failed:
- The operating system / glusterfs version: glusterfs 9.4, OS release: ALMA 8.6