rhymerjr opened this issue 1 year ago
I have not found any "No space left on device" error in the logs; please correct me if I am wrong. Are you able to reproduce it?
The example I provided just demonstrates how the number of free inodes is decreasing; "No space left on device" is not present in those logs. But I think I can easily reproduce it with a script! It may take a while...
Thank you for your answer!
I re-created the volume from scratch and started it. Then I started creating and deleting files periodically with a script using the command "dd".
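The create/delete script was essentially a loop of this shape (a minimal sketch; the file count and sizes here are illustrative, not the exact values I used):

#!/bin/bash
# Repeatedly create files with dd on the FUSE mount, then delete them,
# and print the brick's inode usage after each round.
MOUNT=/mnt/gfs_test_vol01
BRICK=/data/glusterfs/test_vol01/brick
for round in $(seq 1 1000); do
    for i in $(seq 1 50); do
        dd if=/dev/zero of="$MOUNT/file_$i" bs=1M count=1 2>/dev/null
    done
    rm -f "$MOUNT"/file_*
    df -i "$BRICK"    # IUsed keeps creeping up even though the files are gone
done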
This was the maximum fill level during the test:
Filesystem                 Size  Used Avail Use% Mounted on
/dev/sdc1                  989M  290M  632M  32% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  989M  300M  632M  33% /mnt/gfs_test_vol01
This is the situation after I detected "No space left on device":
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536 55437 10099   85% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536 55437 10099   85% /mnt/gfs_test_vol01

Filesystem                 Size  Used Avail Use% Mounted on
/dev/sdc1                  989M  217M  705M  24% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  989M  227M  705M  25% /mnt/gfs_test_vol01
The directory is empty:
ls -la /mnt/gfs_test_vol01/
total 16
drwxr-xr-x 4 root root 12288 May 19 21:13 .
drwxr-xr-x 4 root root  4096 May 19 12:27 ..
Logfiles:
So I'm now a bit confused about why I get "No space left on device" while 15% of the inodes are still free. It's also odd that inode usage stands at 85% while disk usage sits at only about 25%, even though the directory is empty at the end.
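One way to see what is still holding those inodes on the brick side is to look directly under the brick's internal .glusterfs directory (a hedged example, using the brick path from the df output above):

# count everything left under the brick's internal .glusterfs tree
find /data/glusterfs/test_vol01/brick/brick1/.glusterfs -mindepth 1 | wc -l
# break the leftover entries down by type (d = directory, f = file, l = symlink)
find /data/glusterfs/test_vol01/brick/brick1/.glusterfs -mindepth 1 -printf '%y\n' | sort | uniq -c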
Volume status:
root@um-dmz-gfs-node-d11-a: gluster volume status gfs_test_vol01
Status of volume: gfs_test_vol01
Gluster process                                                      TCP Port  RDMA Port  Online  Pid
Brick um-dmz-gfs-node-d11-a:/data/glusterfs/test_vol01/brick/brick1  60760     0          Y       1750
Brick um-dmz-gfs-node-d11-b:/data/glusterfs/test_vol01/brick/brick2  56957     0          Y       848
Brick um-dmz-gfs-node-d11-c:/data/glusterfs/test_vol01/brick/brick3  49190     0          Y       845
Self-heal Daemon on localhost                                        N/A       N/A        Y       1782
Self-heal Daemon on um-dmz-gfs-node-d11-c                            N/A       N/A        Y       877
Self-heal Daemon on um-dmz-gfs-node-d11-b                            N/A       N/A        Y       880

Task Status of Volume gfs_test_vol01
There are no active volume tasks
Volume info:
root@um-dmz-gfs-node-d11-a:# gluster volume info gfs_test_vol01

Volume Name: gfs_test_vol01
Type: Distributed-Replicate
Volume ID: 45dd2cc6-c72d-4704-ab74-9593e186fe63
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: um-dmz-gfs-node-d11-a:/data/glusterfs/test_vol01/brick/brick1
Brick2: um-dmz-gfs-node-d11-b:/data/glusterfs/test_vol01/brick/brick2
Brick3: um-dmz-gfs-node-d11-c:/data/glusterfs/test_vol01/brick/brick3
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Resources are exactly the same on all nodes:
root@um-dmz-gfs-node-d11-a:# df -h /data/glusterfs/test_vol01/brick /mnt/gfs_test_vol01
Filesystem                 Size  Used Avail Use% Mounted on
/dev/sdc1                  989M  217M  705M  24% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  989M  227M  705M  25% /mnt/gfs_test_vol01
root@um-dmz-gfs-node-d11-a:# df -i /data/glusterfs/test_vol01/brick /mnt/gfs_test_vol01
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536 55398 10138   85% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536 55398 10138   85% /mnt/gfs_test_vol01

root@um-dmz-gfs-node-d11-b:# df -h /data/glusterfs/test_vol01/brick /mnt/gfs_test_vol01
Filesystem                 Size  Used Avail Use% Mounted on
/dev/sdc1                  989M  217M  705M  24% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  989M  227M  705M  25% /mnt/gfs_test_vol01
root@um-dmz-gfs-node-d11-b:# df -i /data/glusterfs/test_vol01/brick /mnt/gfs_test_vol01
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536 55397 10139   85% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536 55398 10138   85% /mnt/gfs_test_vol01

root@um-dmz-gfs-node-d11-c:# df -h /data/glusterfs/test_vol01/brick /mnt/gfs_test_vol01
Filesystem                 Size  Used Avail Use% Mounted on
/dev/sdc1                  989M  217M  705M  24% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  989M  227M  705M  25% /mnt/gfs_test_vol01
root@um-dmz-gfs-node-d11-c:# df -i /data/glusterfs/test_vol01/brick /mnt/gfs_test_vol01
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536 55398 10138   85% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536 55398 10138   85% /mnt/gfs_test_vol01
Is there any update on this bug? We are also seeing this kind of case, where each file/directory creation uses up three inodes and deleting it releases only one. If we continue creating and deleting files, we eventually exhaust all inodes.
I tried to reproduce the issue but could not. The issue was introduced with https://github.com/gluster/glusterfs/pull/4179, so I have sent a request to revert the patch.
@mohit84 Reproducing the issue is pretty straightforward (same as mentioned by @rhymerjr). Please consider the following steps.
We are seeing this issue in GlusterFS 6.10, and also in the latest Kadalu GlusterFS:

root@server-common-storage-pool-0-0:/# gluster --version
glusterfs 2023.04.17
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. https://www.gluster.org/
You are assuming you are getting ENOSPC because of low inodes on the backend, but my assumption is that it is not because of low inodes; the issue comes from wrong handling of reserve space by the posix xlator. In the case of low inodes, posix_writev does not throw any error, because at that point the fd is already open. If you check the client logs, they clearly show writev throwing ENOSPC, and that is because of wrong handling of write_val at the posix xlator layer. I tried to execute the test case you shared; as you can see, the count is the same after unlinking the file.
for i in {1..10}; do touch /mnt/test/file$i; df -i /brick{2..4}; rm -rf /mnt/test/file$i; df -i /brick{2..4}; done

Filesystem                         Inodes    IUsed     IFree IUse% Mounted on
/dev/mapper/thinpool_vg3-thinvol3  182536192 3281790 179254402   2% /brick2
/dev/mapper/thinpool_vg4-thinvol4  182536192 3281790 179254402   2% /brick3
/dev/mapper/thinpool_vg5-thinvol5  182536192 3281790 179254402   2% /brick4
Filesystem                         Inodes    IUsed     IFree IUse% Mounted on
/dev/mapper/thinpool_vg3-thinvol3  182536192 3281789 179254403   2% /brick2
/dev/mapper/thinpool_vg4-thinvol4  182536192 3281789 179254403   2% /brick3
/dev/mapper/thinpool_vg5-thinvol5  182536192 3281789 179254403   2% /brick4
(the same two readings repeat for all ten iterations: IUsed is 3281790 right after touch and back to 3281789 right after rm)
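To confirm on the client side that the ENOSPC comes from the write path rather than from real inode exhaustion, the FUSE mount log can be grepped for failing write FOPs; a small example, assuming the client log name used earlier in this thread:

# look for ENOSPC / "No space left" reported against write FOPs in the FUSE client log
grep -iE "writev|No space left" /var/log/glusterfs/mnt-gfs_test_vol01.log | tail -n 20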
Thanks, Mohit Agrawal
On Wed, Jun 21, 2023 at 4:14 PM mohammaddawoodshaik < @.***> wrote:
@mohit84 Reproducing the issue is pretty straightforward (same as mentioned by @rhymerjr). Please consider the following steps:
- Go to the brick path and get the inode data: df -i /
- Let's assume FreeInodes is 10.
- In the GlusterFS FUSE mount, create a random file: touch abc.txt
- Go back to the brick path and get the inode data: df -i /
- Now FreeInodes will be 7.
- Now delete the created file from the GlusterFS FUSE mount.
- Now when you check free inodes in the brick path, it will be 8.
- So for each cycle Gluster leaks two inodes.
We are seeing this issue in GlusterFS 6.10, and also in the latest Kadalu GlusterFS:

@.***:/# gluster --version
glusterfs 2023.04.17
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. https://www.gluster.org/
@mohit84 Thanks for the quick turnaround, Mohit. I will also try to run the same sample in my setup and share the results. If possible, could you share the Gluster version you have in your setup?
I have executed the test case on the latest master branch.
Is there an update on this issue? We also see this occurring on one of our Gluster setups.
Description of problem: I built a test environment to check that long-running tests from our applications work OK, but we detected "No space left on device" after a while. Investigating the problem, I found that the number of available inodes constantly decreases after each file creation and deletion, and the same is true for directory creation and deletion.
I have tested with XFS and ext4 filesystems as well; the result is the same!
Can you help me figure out whether something is configured wrong, or is it really a bug?
The exact command to reproduce the issue:
Tested on a freshly created volume! The volume was created and started with the following commands:
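(A sketch of those create/start commands, reconstructed from the volume info shown further below rather than copied verbatim:)

gluster volume create gfs_test_vol01 replica 3 \
    node-a:/data/glusterfs/test_vol01/brick/brick1 \
    node-b:/data/glusterfs/test_vol01/brick/brick2 \
    node-c:/data/glusterfs/test_vol01/brick/brick3
gluster volume start gfs_test_vol01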
After this, the volume was mounted to /mnt/gfs_test_vol01 (fstab entry: localhost:/gfs_test_vol01 /mnt/gfs_test_vol01 glusterfs defaults,_netdev,backupvolfile-server=node-b 0 0).
Available inode numbers on Node-A:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   284 65252    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   284 65252    1% /mnt/gfs_test_vol01

Available inode numbers on Node-B:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   284 65252    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   284 65252    1% /mnt/gfs_test_vol01

Available inode numbers on Node-C:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   284 65252    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   284 65252    1% /mnt/gfs_test_vol01

Create a file through the mount point:
echo "datatext" > testfile01.txt
Available inode numbers on Node-A:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   287 65249    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   287 65249    1% /mnt/gfs_test_vol01

Available inode numbers on Node-B:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   287 65249    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   287 65249    1% /mnt/gfs_test_vol01

Available inode numbers on Node-C:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   287 65249    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   287 65249    1% /mnt/gfs_test_vol01

Delete the file through the mount point:
rm -f testfile01.txt
Available inode numbers on Node-A:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   286 65250    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   286 65250    1% /mnt/gfs_test_vol01

Available inode numbers on Node-B:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   286 65250    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   286 65250    1% /mnt/gfs_test_vol01

Available inode numbers on Node-C:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   286 65250    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   286 65250    1% /mnt/gfs_test_vol01

Create a file through the mount point:
echo "datatext" > testfile01.txt
Available inode numbers on Node-A:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   288 65248    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   288 65248    1% /mnt/gfs_test_vol01

Available inode numbers on Node-B:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   288 65248    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   288 65248    1% /mnt/gfs_test_vol01

Available inode numbers on Node-C:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   288 65248    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   288 65248    1% /mnt/gfs_test_vol01

Delete the file through the mount point:
rm -f testfile01.txt
Available inode numbers on Node-A:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   287 65249    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   287 65249    1% /mnt/gfs_test_vol01

Available inode numbers on Node-B:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   287 65249    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   287 65249    1% /mnt/gfs_test_vol01

Available inode numbers on Node-C:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   287 65249    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   287 65249    1% /mnt/gfs_test_vol01

Create a file through the mount point:
echo "datatext" > testfile01.txt
Available inode numbers on Node-A:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   289 65247    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   289 65247    1% /mnt/gfs_test_vol01

Available inode numbers on Node-B:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   289 65247    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   289 65247    1% /mnt/gfs_test_vol01

Available inode numbers on Node-C:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   289 65247    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   289 65247    1% /mnt/gfs_test_vol01

Delete the file through the mount point:
rm -f testfile01.txt
Available inode numbers on Node-A:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   288 65248    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   288 65248    1% /mnt/gfs_test_vol01

Available inode numbers on Node-B:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   288 65248    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   288 65248    1% /mnt/gfs_test_vol01

Available inode numbers on Node-C:
Filesystem                Inodes IUsed IFree IUse% Mounted on
/dev/sdc1                  65536   288 65248    1% /data/glusterfs/test_vol01/brick
localhost:/gfs_test_vol01  65536   288 65248    1% /mnt/gfs_test_vol01
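The manual create/delete cycle above can also be scripted to watch the leak accumulate; a minimal sketch using the same mount point and brick path:

for i in $(seq 1 50); do
    echo "datatext" > /mnt/gfs_test_vol01/testfile01.txt
    rm -f /mnt/gfs_test_vol01/testfile01.txt
done
# after the loop, IUsed on the brick is noticeably higher than before it
df -i /data/glusterfs/test_vol01/brick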
At the end the directory is empty:
root@node-a:/mnt/gfs_test_vol01# ls -la /mnt/gfs_test_vol01/
total 8
drwxr-xr-x 4 root root 4096 May 19 12:38 .
drwxr-xr-x 4 root root 4096 May 19 12:27 ..
Free inodes before and after the test; they never recover:
65252 (before) > 65248 (after)
Expected results: all inodes should be freed as needed; the value after the test (65248) should be the same as the value before (65252).
Mandatory info:
- The output of the gluster volume info command:

Volume Name: gfs_test_vol01
Type: Distributed-Replicate
Volume ID: f28c482b-d4a1-4ae3-8928-932cd30cc551
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node-a:/data/glusterfs/test_vol01/brick/brick1
Brick2: node-b:/data/glusterfs/test_vol01/brick/brick2
Brick3: node-c:/data/glusterfs/test_vol01/brick/brick3
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
- The output of the gluster volume status command:

root@node-a:~# gluster volume status
Status of volume: gfs_test_vol01
Gluster process                                       TCP Port  RDMA Port  Online  Pid
Brick node-a:/data/glusterfs/test_vol01/brick/brick1  53515     0          Y       24913
Brick node-b:/data/glusterfs/test_vol01/brick/brick2  56548     0          Y       5800
Brick node-c:/data/glusterfs/test_vol01/brick/brick3  52789     0          Y       5684
Self-heal Daemon on localhost                         N/A       N/A        Y       878
Self-heal Daemon on node-b                            N/A       N/A        Y       820
Self-heal Daemon on node-c                            N/A       N/A        Y       819
Task Status of Volume gfs_test_vol01
There are no active volume tasks
- The output of the gluster volume heal command:

Launching heal operation to perform index self heal on volume gfs_test_vol01 has been successful
Use heal info commands to check status.
Brick node-a:/data/glusterfs/test_vol01/brick/brick1
Status: Connected
Number of entries: 0

Brick node-b:/data/glusterfs/test_vol01/brick/brick2
Status: Connected
Number of entries: 0

Brick node-c:/data/glusterfs/test_vol01/brick/brick3
Status: Connected
Number of entries: 0
- Logs present under /var/log/glusterfs/ on client and server nodes:
data-glusterfs-test_vol01-brick-brick1.log
glfsheal-gfs_test_vol01.log
glusterd.log
glustershd.log
mnt-gfs_test_vol01.log
- The operating system / glusterfs version:
root@node-a:/mnt/gfs_test_vol01# cat /proc/version
Linux version 5.10.0-23-amd64 (debian-kernel@lists.debian.org) (gcc-10 (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2) #1 SMP Debian 5.10.179-1 (2023-05-12)

root@node-a:~# glusterfs --version
glusterfs 11.0
Repository revision: git://git.gluster.org/glusterfs.git