edrock200 opened this issue 1 year ago
Hi,
After checking the logs, I have found two issues.
1) As per the client logs, it seems the client is not able to connect to the local glusterd port (24007). After checking the glusterd logs, I have not found anything suspicious.
[2023-03-10 00:43:24.110602 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:43:27.111100 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:43:30.111589 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:43:33.112050 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:43:36.112491 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:43:36.112564 +0000] I [glusterfsd-mgmt.c:2774:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: mediafunsan9.xyz
[2023-03-10 00:43:39.113031 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:43:42.113507 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:43:45.114060 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:43:48.114548 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:43:51.115093 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:43:54.115610 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:43:57.116129 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:44:00.116628 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:44:03.117205 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:44:06.117836 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:44:09.118332 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:44:12.118867 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:44:15.119383 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:44:18.119834 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:44:18.119875 +0000] I [glusterfsd-mgmt.c:2811:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2023-03-10 00:44:21.120315 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2023-03-10 00:44:24.120804 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
You can debug the same with tcpdump or strace. What is the MTU size configured for the network interface?
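For example, something along these lines could capture the failing connect attempts (a rough sketch; the pid and output paths are placeholders):
# capture traffic to/from the glusterd port on loopback
tcpdump -i lo -nn 'tcp port 24007' -w /tmp/glusterd-24007.pcap
# or trace the network-related system calls of the fuse client process
strace -f -e trace=network -o /tmp/fuse-client.strace -p <fuse-client-pid>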
2) The other issue is a client (fuse) crash during a write. Can you please share the output of "thread apply all bt full" after attaching the core with gdb in your environment? In case the core is not saved, please write a core path into /proc/sys/kernel/core_pattern, create the corresponding directory, and wait for the next crash.
st_atim.tv_nsec 1
package-string: glusterfs 11.0
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x214db)[0x7f087fa3e4db]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x31e)[0x7f087fa44e6e]
/lib/x86_64-linux-gnu/libc.so.6(+0x3ef10)[0x7f087edb9f10]
/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4(libc_calloc+0x9a)[0x7f087f18812a]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_calloc+0x7d)[0x7f087fa6178d]
/usr/lib/x86_64-linux-gnu/glusterfs/11.0/xlator/mount/fuse.so(+0x57ef)[0x7f087c6ed7ef]
/usr/lib/x86_64-linux-gnu/glusterfs/11.0/xlator/mount/fuse.so(+0x1c87f)[0x7f087c70487f]
/usr/lib/x86_64-linux-gnu/glusterfs/11.0/xlator/mount/fuse.so(+0x1abd3)[0x7f087c702bd3]
/usr/lib/x86_64-linux-gnu/glusterfs/11.0/xlator/mount/fuse.so(+0x20bc6)[0x7f087c708bc6]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db)[0x7f087f3be6db]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f087ee9c61f]
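For reference, the core_pattern change mentioned above might look roughly like this (a minimal sketch; /var/crash is only an example path, and the fuse client process also needs an unlimited core size limit):
mkdir -p /var/crash
echo '/var/crash/core.%e.%p.%t' > /proc/sys/kernel/core_pattern
# %e = executable name, %p = pid, %t = timestamp of the crash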
Thanks, Mohit Agrawal
Thank you for looking into this. Admittedly I'm not familiar with doing the captures, but I will do some Google-fu. MTU is 1500. @mohit84
@mohit84 just to add a little more info: based on your response above, I did a tail on mnt-admin.log and saw this entry generated about once a second:
[2023-03-10 23:41:18.750382 +0000] E [socket.c:3393:socket_connect] 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
Then ran a telnet port test which was successful:
# telnet 127.0.0.1 24007
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Running lsof and grepping for 24007 shows glusterfs having numerous connections established to 24007, from itself to itself (the node). Most are from the actual NIC IP to the NIC IP, but there are a handful established from the loopback to the loopback as well.
glfs_ecsh 6981 7190 root 9u IPv4 2211819864 0t0 TCP 127.0.0.1:49059->127.0.0.1:24007 (ESTABLISHED)
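For reference, a check along these lines lists the established connections on 24007 (a sketch; the exact lsof invocation used may have differed):
lsof -nP -iTCP:24007 -sTCP:ESTABLISHED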
I have also confirmed this is happening on every single node.
I then performed a rolling reboot on all nodes and it cleared. I have done this before though so I suspect it's temporary.
As we can see in the logs, the client is continuously throwing these messages at 3-second intervals because it is not getting a response from glusterd on port 24007. Before connecting to a brick process, the client gets the brick port from glusterd, and to get that port it sends a request to glusterd. This was the situation before restarting the client; now the client is successfully connected, so you are not facing this issue.
[2023-03-10 00:45:02.748548 +0000] I [MSGID: 100030] [glusterfsd.c:2874:main] 0-/usr/sbin/glusterfs: Started running version [{arg=/usr/sbin/glusterfs}, {version=11.0}, {cmdlinestr=/usr/sbin/glusterfs --process-name fuse --volfile-server=mediafunsan6.xyz --volfile-id=/ssd /mnt/admin}]
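If these connection errors come back, a simple probe loop against port 24007 can show whether glusterd intermittently stops answering (a minimal sketch, assuming nc is available):
while true; do
  nc -z -w 2 127.0.0.1 24007 || echo "$(date -u '+%F %T') connect to 24007 failed"
  sleep 3
done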
For the time being, we should focus on the client crash. Please share the "thread apply all bt full" output after attaching the coredump with gdb once the client crashes again.
Understood. Thanks again, @mohit84. Do you think this is related to the rebalance causing bricks to go offline as well, or is it too early to tell? Either way, I will work on those captures this week. For now I have made the log download link above restricted; if you need it again, please let me know. Appreciate the time and expertise! I'll update here again when I have captures.
@mohit84 please pardon my ignorance. I've done some googling and think I've got this down but want to make sure I've got it right. Just to clarify, you want me to do a gdb dump during a crash, on a gluster server node or client side? If server side, does this look correct?
gdb glusterfsd 2168 -ex "thread apply all bt" -ex "attach" > ~/dump
where 2168 is the pid of the glusterfsd process, and leave it running until it crashes?
If anyone can offer guidance on this, I'd be greatly appreciative.
The client (fuse) process is crashing, so you need to pass the pid of the fuse client process, not a brick process (glusterfsd).
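A rough sketch of how that could be collected (the pid, core path and output file are placeholders):
# find the pid of the fuse mount process (e.g. the one mounted on /mnt/admin)
ps -eo pid,args | grep '[g]lusterfs --process-name fuse'
# attach to the live process and dump all stacks
gdb -batch -ex 'thread apply all bt full' -p <pid> > /tmp/fuse-bt-full.txt
# or, if a core file was written after the crash:
gdb -batch -ex 'thread apply all bt full' /usr/sbin/glusterfs /path/to/core > /tmp/fuse-bt-full.txt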
Sorry to ask, @amarts: is this bug, which is preventing me and others (at least @edrock200 and @vvarga007) from upgrading from 10.4 to 11.0, already fixed or being fixed for 11.1?
Referring to: https://github.com/gluster/glusterfs/discussions/3813#discussioncomment-5217447
@beat I will get back after checking. I was a bit away from the project for a while. Once I take stock of things, I will update here.
Hi @amarts, any news on this bug reported a year ago? :-)
Description of problem: Recently upgraded to gluster 11. Ever since, bricks seem to randomly go offline during heavy write operations and/or rebalances. When querying the port, it's accessible and available from other nodes. Restarting glusterd allows the bricks to reconnect, but it happens again during heavy writes/rebalances. I haven't been able to get through a complete rebalance successfully under gluster v11. All servers and clients are at v11. Today, when some of the bricks dropped on one volume, all clients disconnected as well. I'm attaching the logs for one of those nodes. I have 2 volumes, one that consists of platter drives and another that consists of NVMe drives. Today the NVMe bricks dropped; however, this issue has occurred on both volumes. This issue did not occur with v10.3.
Attempting to copy the 1.6GB zip log attached to this ticket caused the mount to disconnect, but no bricks went offline. So I refreshed the zip file to include the mount disconnect.
The exact command to reproduce the issue: execute a rebalance with Gluster v11
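For reference, a rebalance is typically started and monitored with the standard CLI, e.g. for the media volume described below (a sketch of the usual commands, not necessarily the exact invocation used):
gluster volume rebalance media start
gluster volume rebalance media status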
The full output of the command that failed: n/a
Expected results: gluster bricks stay connected under heavy writes/rebalance operations.
Mandatory info: - The output of the
gluster volume info
command:
Volume Name: gluster_shared_storage
Type: Distributed-Replicate
Volume ID: 5f79e077-cf03-4344-b799-bfeeed840f1b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: san10:/var/lib/glusterd/ss_brick
Brick2: san8:/var/lib/glusterd/ss_brick
Brick3: san9:/var/lib/glusterd/ss_brick
Options Reconfigured:
server.outstanding-rpc-limit: 32
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
performance.client-io-threads: off
cluster.enable-shared-storage: enable
Volume Name: media Type: Distributed-Disperse Volume ID: 61a368b5-c96e-4b6d-8601-6f979f4f88af Status: Started Snapshot Count: 0 Number of Bricks: 30 x (4 + 1) = 150 Transport-type: tcp Bricks: Brick1: san6:/data/brick1/mediafun Brick2: san7:/data/brick1/mediafun Brick3: san8:/data/brick1/mediafun Brick4: san9:/data/brick1/mediafun Brick5: san10:/data/brick1/mediafun Brick6: san6:/data/brick2/mediafun Brick7: san7:/data/brick2/mediafun Brick8: san8:/data/brick2/mediafun Brick9: san9:/data/brick2/mediafun Brick10: san10:/data/brick2/mediafun Brick11: san6:/data/brick3/mediafun Brick12: san7:/data/brick3/mediafun Brick13: san8:/data/brick3/mediafun Brick14: san9:/data/brick3/mediafun Brick15: san10:/data/brick3/mediafun Brick16: san6:/data/brick4/mediafun Brick17: san7:/data/brick4/mediafun Brick18: san8:/data/brick4/mediafun Brick19: san9:/data/brick4/mediafun Brick20: san10:/data/brick4/mediafun Brick21: san6:/data/brick5/mediafun Brick22: san7:/data/brick5/mediafun Brick23: san8:/data/brick5/mediafun Brick24: san9:/data/brick5/mediafun Brick25: san10:/data/brick5/mediafun Brick26: san6:/data/brick6/mediafun Brick27: san7:/data/brick6/mediafun Brick28: san8:/data/brick6/mediafun Brick29: san9:/data/brick6/mediafun Brick30: san10:/data/brick6/mediafun Brick31: san6:/data/brick7/mediafun Brick32: san7:/data/brick7/mediafun Brick33: san8:/data/brick7/mediafun Brick34: san9:/data/brick7/mediafun Brick35: san10:/data/brick7/mediafun Brick36: san6:/data/brick8/mediafun Brick37: san7:/data/brick8/mediafun Brick38: san8:/data/brick8/mediafun Brick39: san9:/data/brick8/mediafun Brick40: san10:/data/brick8/mediafun Brick41: san6:/data/brick9/mediafun Brick42: san7:/data/brick9/mediafun Brick43: san8:/data/brick9/mediafun Brick44: san9:/data/brick9/mediafun Brick45: san10:/data/brick9/mediafun Brick46: san6:/data/brick10/mediafun Brick47: san7:/data/brick10/mediafun Brick48: san8:/data/brick10/mediafun Brick49: san9:/data/brick10/mediafun Brick50: san10:/data/brick10/mediafun Brick51: san1:/data/brick1/mediafun Brick52: san4:/data/brick1/mediafun Brick53: san5:/data/brick1/mediafun Brick54: san11:/data/brick1/mediafun Brick55: san12:/data/brick1/mediafun Brick56: san1:/data/brick2/mediafun Brick57: san4:/data/brick2/mediafun Brick58: san5:/data/brick2/mediafun Brick59: san11:/data/brick2/mediafun Brick60: san12:/data/brick2/mediafun Brick61: san1:/data/brick3/mediafun Brick62: san4:/data/brick3/mediafun Brick63: san5:/data/brick3/mediafun Brick64: san11:/data/brick3/mediafun Brick65: san12:/data/brick3/mediafun Brick66: san1:/data/brick4/mediafun Brick67: san4:/data/brick4/mediafun Brick68: san5:/data/brick4/mediafun Brick69: san11:/data/brick4/mediafun Brick70: san12:/data/brick4/mediafun Brick71: san1:/data/brick5/mediafun Brick72: san4:/data/brick5/mediafun Brick73: san5:/data/brick5/mediafun Brick74: san11:/data/brick5/mediafun Brick75: san12:/data/brick5/mediafun Brick76: san1:/data/brick6/mediafun Brick77: san4:/data/brick6/mediafun Brick78: san5:/data/brick6/mediafun Brick79: san11:/data/brick6/mediafun Brick80: san12:/data/brick6/mediafun Brick81: san1:/data/brick7/mediafun Brick82: san4:/data/brick7/mediafun Brick83: san5:/data/brick7/mediafun Brick84: san11:/data/brick7/mediafun Brick85: san12:/data/brick7/mediafun Brick86: san1:/data/brick8/mediafun Brick87: san4:/data/brick8/mediafun Brick88: san5:/data/brick8/mediafun Brick89: san11:/data/brick8/mediafun Brick90: san12:/data/brick8/mediafun Brick91: san1:/data/brick9/mediafun Brick92: san4:/data/brick9/mediafun Brick93: 
san5:/data/brick9/mediafun Brick94: san11:/data/brick9/mediafun Brick95: san12:/data/brick9/mediafun Brick96: san1:/data/brick10/mediafun Brick97: san4:/data/brick10/mediafun Brick98: san5:/data/brick10/mediafun Brick99: san11:/data/brick10/mediafun Brick100: san12:/data/brick10/mediafun Brick101: san13:/data/brick1/mediafun Brick102: san14:/data/brick1/mediafun Brick103: san15:/data/brick1/mediafun Brick104: san16:/data/brick1/mediafun Brick105: san17:/data/brick1/mediafun Brick106: san13:/data/brick2/mediafun Brick107: san14:/data/brick2/mediafun Brick108: san15:/data/brick2/mediafun Brick109: san16:/data/brick2/mediafun Brick110: san17:/data/brick2/mediafun Brick111: san13:/data/brick3/mediafun Brick112: san14:/data/brick3/mediafun Brick113: san15:/data/brick3/mediafun Brick114: san16:/data/brick3/mediafun Brick115: san17:/data/brick3/mediafun Brick116: san13:/data/brick4/mediafun Brick117: san14:/data/brick4/mediafun Brick118: san15:/data/brick4/mediafun Brick119: san16:/data/brick4/mediafun Brick120: san17:/data/brick4/mediafun Brick121: san13:/data/brick5/mediafun Brick122: san14:/data/brick5/mediafun Brick123: san15:/data/brick5/mediafun Brick124: san16:/data/brick5/mediafun Brick125: san17:/data/brick5/mediafun Brick126: san13:/data/brick6/mediafun Brick127: san14:/data/brick6/mediafun Brick128: san15:/data/brick6/mediafun Brick129: san16:/data/brick6/mediafun Brick130: san17:/data/brick6/mediafun Brick131: san13:/data/brick7/mediafun Brick132: san14:/data/brick7/mediafun Brick133: san15:/data/brick7/mediafun Brick134: san16:/data/brick7/mediafun Brick135: san17:/data/brick7/mediafun Brick136: san13:/data/brick8/mediafun Brick137: san14:/data/brick8/mediafun Brick138: san15:/data/brick8/mediafun Brick139: san16:/data/brick8/mediafun Brick140: san17:/data/brick8/mediafun Brick141: san13:/data/brick9/mediafun Brick142: san14:/data/brick9/mediafun Brick143: san15:/data/brick9/mediafun Brick144: san16:/data/brick9/mediafun Brick145: san17:/data/brick9/mediafun Brick146: san13:/data/brick10/mediafun Brick147: san14:/data/brick10/mediafun Brick148: san15:/data/brick10/mediafun Brick149: san16:/data/brick10/mediafun Brick150: san17:/data/brick10/mediafun Options Reconfigured: cluster.data-self-heal-algorithm: diff storage.build-pgfid: on disperse.other-eager-lock: off cluster.use-anonymous-inode: yes disperse.eager-lock: off storage.health-check-interval: 120 server.manage-gids: off disperse.stripe-cache: 10 cluster.heal-timeout: 500 disperse.self-heal-window-size: 4 cluster.self-heal-window-size: 4 cluster.self-heal-readdir-size: 2KB disperse.read-policy: gfid-hash config.brick-threads: 0 config.client-threads: 16 network.compression: off network.compression.mem-level: -1 cluster.read-hash-mode: 3 cluster.min-free-disk: 5% server.outstanding-rpc-limit: 32 performance.cache-capability-xattrs: on performance.least-prio-threads: 1 performance.force-readdirp: off dht.force-readdirp: false cluster.readdir-optimize: on performance.readdir-ahead: off disperse.shd-max-threads: 4 disperse.background-heals: 4 disperse.shd-wait-qlength: 2048 server.event-threads: 16 cluster.weighted-rebalance: off cluster.lookup-unhashed: auto performance.read-ahead: off performance.flush-behind: on cluster.lookup-optimize: on client.event-threads: 8 performance.client-io-threads: on transport.address-family: inet storage.fips-mode-rchecksum: on features.cache-invalidation: on features.cache-invalidation-timeout: 60 performance.stat-prefetch: on performance.cache-invalidation: on performance.md-cache-timeout: 60 
network.inode-lru-limit: 200000 performance.cache-samba-metadata: off performance.parallel-readdir: on performance.nl-cache: on performance.nl-cache-timeout: 60 performance.nl-cache-positive-entry: enable performance.qr-cache-timeout: 60 performance.cache-size: 8GB performance.cache-max-file-size: 256MB performance.cache-refresh-timeout: 60 performance.rda-cache-limit: 1GB performance.io-thread-count: 16 performance.write-behind-window-size: 1GB performance.io-cache: on performance.write-behind: on performance.open-behind: on performance.quick-read: on features.bitrot: on features.scrub: Active features.scrub-throttle: normal features.scrub-freq: monthly cluster.rebal-throttle: normal performance.quick-read-cache-timeout: 60 performance.quick-read-cache-size: 256MB performance.md-cache-statfs: on performance.md-cache-pass-through: off performance.io-cache-pass-through: off performance.nl-cache-pass-through: off features.signer-threads: 4 cluster.enable-shared-storage: enable
Volume Name: ssd Type: Distributed-Disperse Volume ID: eb825bfd-4880-4826-ac7a-2d3ac60ab48a Status: Started Snapshot Count: 0 Number of Bricks: 3 x (4 + 1) = 15 Transport-type: tcp Bricks: Brick1: san6:/data/nvme/mediafun Brick2: san7:/data/nvme/mediafun Brick3: san8:/data/nvme/mediafun Brick4: san9:/data/nvme/mediafun Brick5: san10:/data/nvme/mediafun Brick6: san1:/data/nvme/mediafun Brick7: san4:/data/nvme/mediafun Brick8: san5:/data/nvme/mediafun Brick9: san11:/data/nvme/mediafun Brick10: san12:/data/nvme/mediafun Brick11: san13:/data/nvme/mediafun Brick12: san14:/data/nvme/mediafun Brick13: san15:/data/nvme/mediafun Brick14: san16:/data/nvme/mediafun Brick15: san17:/data/nvme/mediafun Options Reconfigured: cluster.data-self-heal-algorithm: diff cluster.rebal-throttle: normal features.scrub-throttle: normal features.scrub-freq: monthly features.scrub: Active features.bitrot: on performance.client-io-threads: on client.event-threads: 4 cluster.lookup-optimize: on performance.flush-behind: on performance.read-ahead: off cluster.lookup-unhashed: auto cluster.weighted-rebalance: off server.event-threads: 4 disperse.shd-wait-qlength: 2048 disperse.background-heals: 4 disperse.shd-max-threads: 4 performance.readdir-ahead: off cluster.readdir-optimize: on dht.force-readdirp: off performance.force-readdirp: off performance.least-prio-threads: 1 performance.cache-capability-xattrs: on cluster.min-free-disk: 5% cluster.read-hash-mode: 3 config.client-threads: 0 config.brick-threads: 0 disperse.read-policy: gfid-hash cluster.self-heal-readdir-size: 2KB cluster.self-heal-window-size: 4 disperse.self-heal-window-size: 4 cluster.heal-timeout: 500 disperse.stripe-cache: 10 storage.health-check-interval: 120 disperse.eager-lock: off cluster.use-anonymous-inode: yes disperse.other-eager-lock: off storage.build-pgfid: on server.outstanding-rpc-limit: 32 transport.address-family: inet storage.fips-mode-rchecksum: on features.cache-invalidation: on features.cache-invalidation-timeout: 60 performance.stat-prefetch: on performance.cache-invalidation: on performance.md-cache-timeout: 600 network.inode-lru-limit: 200000 performance.cache-samba-metadata: off performance.parallel-readdir: on performance.nl-cache: on performance.nl-cache-timeout: 60 performance.nl-cache-positive-entry: enable performance.qr-cache-timeout: 60 performance.cache-size: 8GB performance.cache-max-file-size: 256MB performance.cache-refresh-timeout: 60 performance.rda-cache-limit: 1GB performance.io-thread-count: 4 performance.write-behind-window-size: 128MB performance.io-cache: on performance.write-behind: on performance.open-behind: on performance.quick-read: on performance.quick-read-cache-timeout: 60 performance.quick-read-cache-size: 256MB performance.md-cache-statfs: on performance.md-cache-pass-through: off performance.io-cache-pass-through: off performance.nl-cache-pass-through: off cluster.enable-shared-storage: enable
- The output of the
gluster volume status
command: Status of volume: gluster_shared_storage Gluster process TCP Port RDMA Port Online PidBrick san10:/var/lib/glusterd/s s_brick 59731 0 Y 2022 Brick san8:/var/lib/glusterd/ss _brick 60609 0 Y 1947 Brick san9:/var/lib/glusterd/ss _brick 50983 0 Y 1864 Self-heal Daemon on localhost N/A N/A Y 14462 Self-heal Daemon on san1 N/A N/A Y 19561 Self-heal Daemon on san17 N/A N/A Y 27055 Self-heal Daemon on san10 N/A N/A Y 30232 Self-heal Daemon on san7 N/A N/A Y 20379 Self-heal Daemon on san16 N/A N/A Y 24481 Self-heal Daemon on san9 N/A N/A Y 26722 Self-heal Daemon on san14 N/A N/A Y 19813 Self-heal Daemon on san11 N/A N/A Y 29084 Self-heal Daemon on san13 N/A N/A Y 10809 Self-heal Daemon on san12 N/A N/A Y 793
Self-heal Daemon on san4 N/A N/A Y 1520 Self-heal Daemon on san8 N/A N/A Y 7662 Self-heal Daemon on san5 N/A N/A Y 19696 Self-heal Daemon on san15 N/A N/A Y 27000
Task Status of Volume gluster_shared_storage
There are no active volume tasks
Status of volume: media Gluster process TCP Port RDMA Port Online Pid
Brick san6:/data/brick1/mediafu n 52690 0 Y 2184 Brick san7:/data/brick1/mediafu n 57167 0 Y 2905 Brick san8:/data/brick1/mediafu n 51004 0 Y 1974 Brick san9:/data/brick1/mediafu n 57138 0 Y 1946 Brick san10:/data/brick1/mediaf un 57688 0 Y 2037 Brick san6:/data/brick2/mediafu n 51315 0 Y 2221 Brick san7:/data/brick2/mediafu n 51640 0 Y 2997 Brick san8:/data/brick2/mediafu n 51615 0 Y 2012 Brick san9:/data/brick2/mediafu n 56637 0 Y 1984 Brick san10:/data/brick2/mediaf un 57500 0 Y 2110 Brick san6:/data/brick3/mediafu n 52138 0 Y 2317 Brick san7:/data/brick3/mediafu n 50083 0 Y 3100 Brick san8:/data/brick3/mediafu n 55934 0 Y 2101 Brick san9:/data/brick3/mediafu n 58504 0 Y 2061 Brick san10:/data/brick3/mediaf un 58333 0 Y 2169 Brick san6:/data/brick4/mediafu n 56790 0 Y 2503 Brick san7:/data/brick4/mediafu n 59792 0 Y 3180 Brick san8:/data/brick4/mediafu n 55482 0 Y 2142 Brick san9:/data/brick4/mediafu n 56484 0 Y 2126 Brick san10:/data/brick4/mediaf un 60847 0 Y 2237 Brick san6:/data/brick5/mediafu n 49181 0 Y 2577 Brick san7:/data/brick5/mediafu n 56106 0 Y 3342 Brick san8:/data/brick5/mediafu n 51879 0 Y 2219 Brick san9:/data/brick5/mediafu n 60126 0 Y 6790 Brick san10:/data/brick5/mediaf un 56175 0 Y 2301 Brick san6:/data/brick6/mediafu n 51573 0 Y 2626 Brick san7:/data/brick6/mediafu n 60011 0 Y 3395 Brick san8:/data/brick6/mediafu n 58762 0 Y 2276 Brick san9:/data/brick6/mediafu n 54593 0 Y 2234 Brick san10:/data/brick6/mediaf un 51580 0 Y 2377 Brick san6:/data/brick7/mediafu n 49734 0 Y 2669 Brick san7:/data/brick7/mediafu n 49703 0 Y 3435 Brick san8:/data/brick7/mediafu n 52051 0 Y 2334 Brick san9:/data/brick7/mediafu n 56352 0 Y 2321 Brick san10:/data/brick7/mediaf un 52324 0 Y 2425 Brick san6:/data/brick8/mediafu n 52622 0 Y 2705 Brick san7:/data/brick8/mediafu n 57102 0 Y 3475 Brick san8:/data/brick8/mediafu n 51383 0 Y 2395 Brick san9:/data/brick8/mediafu n 49936 0 Y 2370 Brick san10:/data/brick8/mediaf un 57904 0 Y 2491 Brick san6:/data/brick9/mediafu n 59482 0 Y 2741 Brick san7:/data/brick9/mediafu n 54904 0 Y 3511 Brick san8:/data/brick9/mediafu n 53369 0 Y 2445 Brick san9:/data/brick9/mediafu n 56832 0 Y 2429 Brick san10:/data/brick9/mediaf un 52241 0 Y 2575 Brick san6:/data/brick10/mediaf un 54848 0 Y 2786 Brick san7:/data/brick10/mediaf un 60038 0 Y 3548 Brick san8:/data/brick10/mediaf un 55758 0 Y 2529 Brick san9:/data/brick10/mediaf un 56235 0 Y 2502 Brick san10:/data/brick10/media fun 59674 0 Y 2649 Brick san1:/data/brick1/mediafu n 56322 0 Y 1825 Brick san4:/data/brick1/mediafu n 58073 0 Y 1877 Brick san5:/data/brick1/mediafu n 60207 0 Y 1822 Brick san11:/data/brick1/mediaf un 56618 0 Y 1793 Brick san12:/data/brick1/mediaf un 59826 0 Y 1879 Brick san1:/data/brick2/mediafu n 60108 0 Y 1872 Brick san4:/data/brick2/mediafu n 60542 0 Y 1939 Brick san5:/data/brick2/mediafu n 51492 0 Y 1898 Brick san11:/data/brick2/mediaf un 59701 0 Y 28816 Brick san12:/data/brick2/mediaf un 57955 0 Y 1916 Brick san1:/data/brick3/mediafu n 49796 0 Y 1915 Brick san4:/data/brick3/mediafu n 59922 0 Y 1996 Brick san5:/data/brick3/mediafu n 58161 0 Y 1934 Brick san11:/data/brick3/mediaf un 51591 0 Y 1924 Brick san12:/data/brick3/mediaf un 53038 0 Y 1966 Brick san1:/data/brick4/mediafu n 59568 0 Y 1971 Brick san4:/data/brick4/mediafu n 49971 0 Y 2058 Brick san5:/data/brick4/mediafu n 51598 0 Y 1970 Brick san11:/data/brick4/mediaf un 56484 0 Y 1960 Brick san12:/data/brick4/mediaf un 52413 0 Y 2021 Brick san1:/data/brick5/mediafu n 60489 0 Y 2049 Brick san4:/data/brick5/mediafu n 49649 0 Y 2115 
Brick san5:/data/brick5/mediafu n 57794 0 Y 2006 Brick san11:/data/brick5/mediaf un 56453 0 Y 1996 Brick san12:/data/brick5/mediaf un 57392 0 Y 2071 Brick san1:/data/brick6/mediafu n 51308 0 Y 2111 Brick san4:/data/brick6/mediafu n 58450 0 Y 2168 Brick san5:/data/brick6/mediafu n 59031 0 Y 2042 Brick san11:/data/brick6/mediaf un 50940 0 Y 2033 Brick san12:/data/brick6/mediaf un 60825 0 Y 2133 Brick san1:/data/brick7/mediafu n 53730 0 Y 2164 Brick san4:/data/brick7/mediafu n 54654 0 Y 2226 Brick san5:/data/brick7/mediafu n 53959 0 Y 2078 Brick san11:/data/brick7/mediaf un 54143 0 Y 2069 Brick san12:/data/brick7/mediaf un 53844 0 Y 2192 Brick san1:/data/brick8/mediafu n 51471 0 Y 2231 Brick san4:/data/brick8/mediafu n 54971 0 Y 2282 Brick san5:/data/brick8/mediafu n 59758 0 Y 2114 Brick san11:/data/brick8/mediaf un 54511 0 Y 2105 Brick san12:/data/brick8/mediaf un 55075 0 Y 2259 Brick san1:/data/brick9/mediafu n 57742 0 Y 2285 Brick san4:/data/brick9/mediafu n 58553 0 Y 2328 Brick san5:/data/brick9/mediafu n 59683 0 Y 2150 Brick san11:/data/brick9/mediaf un 58547 0 Y 2141 Brick san12:/data/brick9/mediaf un 52261 0 Y 2313 Brick san1:/data/brick10/mediaf un 54086 0 Y 2345 Brick san4:/data/brick10/mediaf un 49517 0 Y 2404 Brick san5:/data/brick10/mediaf un 60995 0 Y 2186 Brick san11:/data/brick10/media fun 49972 0 Y 2177 Brick san12:/data/brick10/media fun 56438 0 Y 2383 Brick san13:/data/brick1/mediaf un 60185 0 Y 1963 Brick san14:/data/brick1/mediaf un 59711 0 Y 1778 Brick san15:/data/brick1/mediaf un 60364 0 Y 2206 Brick san16:/data/brick1/mediaf un 51231 0 Y 1699 Brick san17:/data/brick1/mediaf un 56388 0 Y 1816 Brick san13:/data/brick2/mediaf un 55814 0 Y 1999 Brick san14:/data/brick2/mediaf un 59704 0 Y 1858 Brick san15:/data/brick2/mediaf un 56364 0 Y 2245 Brick san16:/data/brick2/mediaf un 52013 0 Y 1735 Brick san17:/data/brick2/mediaf un 56286 0 Y 1890 Brick san13:/data/brick3/mediaf un 59672 0 Y 2035 Brick san14:/data/brick3/mediaf un 58928 0 Y 1934 Brick san15:/data/brick3/mediaf un 59769 0 Y 2281 Brick san16:/data/brick3/mediaf un 58066 0 Y 1771 Brick san17:/data/brick3/mediaf un 54198 0 Y 1954 Brick san13:/data/brick4/mediaf un 53325 0 Y 2071 Brick san14:/data/brick4/mediaf un 58983 0 Y 1982 Brick san15:/data/brick4/mediaf un 55425 0 Y 2317 Brick san16:/data/brick4/mediaf un 53056 0 Y 1807 Brick san17:/data/brick4/mediaf un 57433 0 Y 2034 Brick san13:/data/brick5/mediaf un 50322 0 Y 2154 Brick san14:/data/brick5/mediaf un 50436 0 Y 2019 Brick san15:/data/brick5/mediaf un 58349 0 Y 2353 Brick san16:/data/brick5/mediaf un 60366 0 Y 1843 Brick san17:/data/brick5/mediaf un 53573 0 Y 2095 Brick san13:/data/brick6/mediaf un 49719 0 Y 4224 Brick san14:/data/brick6/mediaf un 54150 0 Y 2056 Brick san15:/data/brick6/mediaf un 50975 0 Y 2389 Brick san16:/data/brick6/mediaf un 55046 0 Y 1879 Brick san17:/data/brick6/mediaf un 53174 0 Y 2237 Brick san13:/data/brick7/mediaf un 57018 0 Y 2245 Brick san14:/data/brick7/mediaf un 59457 0 Y 2092 Brick san15:/data/brick7/mediaf un 54906 0 Y 2425 Brick san16:/data/brick7/mediaf un 59888 0 Y 1915 Brick san17:/data/brick7/mediaf un 53965 0 Y 2286 Brick san13:/data/brick8/mediaf un 57491 0 Y 2311 Brick san14:/data/brick8/mediaf un 49543 0 Y 2128 Brick san15:/data/brick8/mediaf un 52154 0 Y 2461 Brick san16:/data/brick8/mediaf un 53872 0 Y 1951 Brick san17:/data/brick8/mediaf un 56705 0 Y 2324 Brick san13:/data/brick9/mediaf un 52966 0 Y 2351 Brick san14:/data/brick9/mediaf un 56455 0 Y 2177 Brick san15:/data/brick9/mediaf un 52239 0 Y 2497 Brick 
san16:/data/brick9/mediaf un 60914 0 Y 1987 Brick san17:/data/brick9/mediaf un 50124 0 Y 2360 Brick san13:/data/brick10/media fun 51915 0 Y 2399 Brick san14:/data/brick10/media fun 55858 0 Y 2355 Brick san15:/data/brick10/media fun 55915 0 Y 2533 Brick san16:/data/brick10/media fun 54891 0 Y 2023 Brick san17:/data/brick10/media fun 56786 0 Y 2396 Self-heal Daemon on localhost N/A N/A Y 14462 Bitrot Daemon on localhost N/A N/A Y 14235 Scrubber Daemon on localhost N/A N/A Y 14254 Self-heal Daemon on san8 N/A N/A Y 7662 Bitrot Daemon on san8 N/A N/A Y 7421 Scrubber Daemon on san8 N/A N/A Y 7432 Self-heal Daemon on san9 N/A N/A Y 26722 Bitrot Daemon on san9 N/A N/A Y 26524 Scrubber Daemon on san9 N/A N/A Y 26535 Self-heal Daemon on san13 N/A N/A Y 10809 Bitrot Daemon on san13 N/A N/A Y 10662 Scrubber Daemon on san13 N/A N/A Y 10679 Self-heal Daemon on san11 N/A N/A Y 29084 Bitrot Daemon on san11 N/A N/A Y 28854 Scrubber Daemon on san11 N/A N/A Y 28865 Self-heal Daemon on san5 N/A N/A Y 19696 Bitrot Daemon on san5 N/A N/A Y 19533 Scrubber Daemon on san5 N/A N/A Y 19544 Self-heal Daemon on san1 N/A N/A Y 19561 Bitrot Daemon on san1 N/A N/A Y 19395 Scrubber Daemon on san1 N/A N/A Y 19406 Self-heal Daemon on san7 N/A N/A Y 20379 Bitrot Daemon on static.248.131.108.65.clie nts.your-server.de N/A N/A Y 20357 Scrubber Daemon on static.248.131.108.65.cl ients.your-server.de N/A N/A Y 20368 Self-heal Daemon on san17 N/A N/A Y 27055 Bitrot Daemon on san17 N/A N/A Y 26840 Scrubber Daemon on san17 N/A N/A Y 26856 Self-heal Daemon on san10 N/A N/A Y 30232 Bitrot Daemon on san10 N/A N/A Y 30003 Scrubber Daemon on san10 N/A N/A Y 30023 Self-heal Daemon on san14 N/A N/A Y 19813 Bitrot Daemon on san14 N/A N/A Y 19645 Scrubber Daemon on san14 N/A N/A Y 19663 Self-heal Daemon on san15 N/A N/A Y 27000 Bitrot Daemon on san15 N/A N/A Y 26852 Scrubber Daemon on san15 N/A N/A Y 26863 Self-heal Daemon on san16 N/A N/A Y 24481 Bitrot Daemon on san16 N/A N/A Y 24297 Scrubber Daemon on san16 N/A N/A Y 24336 Self-heal Daemon on san4 N/A N/A Y 1520 Bitrot Daemon on san4 N/A N/A Y 1337 Scrubber Daemon on san4 N/A N/A Y 1349 Self-heal Daemon on san12 N/A N/A Y 793
Bitrot Daemon on san12 N/A N/A Y 589
Scrubber Daemon on san12 N/A N/A Y 601
Task Status of Volume media
There are no active volume tasks
Status of volume: ssd Gluster process TCP Port RDMA Port Online Pid
Brick san6:/data/nvme/ 50197 0 Y 31733 Brick san7:/data/nvme/ 60495 0 Y 3601 Brick san8:/data/nvme/ 54441 0 Y 7395 Brick san9:/data/nvme/ 59801 0 Y 15195 Brick san10:/data/nvme/ 56991 0 Y 2715 Brick san1:/data/nvme/ 60646 0 Y 27414 Brick san4:/data/nvme/ 52634 0 Y 25324 Brick san5:/data/nvme/ 57506 0 Y 2222 Brick san11:/data/nvme/ 54873 0 Y 28252 Brick san12:/data/nvme/ 53379 0 Y 21300 Brick san13:/data/nvme/ 59905 0 Y 24841 Brick san14:/data/nvme/ 53034 0 Y 19618 Brick san15:/data/nvme/ 56747 0 Y 26826 Brick san16:/data/nvme/ 57279 0 Y 18720 Brick san17:/data/nvme/ 51459 0 Y 26810 Self-heal Daemon on localhost N/A N/A Y 14462 Bitrot Daemon on localhost N/A N/A Y 14235 Scrubber Daemon on localhost N/A N/A Y 14254 Self-heal Daemon on san12 N/A N/A Y 793
Bitrot Daemon on san12 N/A N/A Y 589
Scrubber Daemon on san12 N/A N/A Y 601
Self-heal Daemon on san17 N/A N/A Y 27055 Bitrot Daemon on san17 N/A N/A Y 26840 Scrubber Daemon on san17 N/A N/A Y 26856 Self-heal Daemon on san13 N/A N/A Y 10809 Bitrot Daemon on san13 N/A N/A Y 10662 Scrubber Daemon on san13 N/A N/A Y 10679 Self-heal Daemon on san8 N/A N/A Y 7662 Bitrot Daemon on san8 N/A N/A Y 7421 Scrubber Daemon on san8 N/A N/A Y 7432 Self-heal Daemon on san1 N/A N/A Y 19561 Bitrot Daemon on san1 N/A N/A Y 19395 Scrubber Daemon on san1 N/A N/A Y 19406 Self-heal Daemon on san14 N/A N/A Y 19813 Bitrot Daemon on san14 N/A N/A Y 19645 Scrubber Daemon on san14 N/A N/A Y 19663 Self-heal Daemon on san10 N/A N/A Y 30232 Bitrot Daemon on san10 N/A N/A Y 30003 Scrubber Daemon on san10 N/A N/A Y 30023 Self-heal Daemon on san16 N/A N/A Y 24481 Bitrot Daemon on san16 N/A N/A Y 24297 Scrubber Daemon on san16 N/A N/A Y 24336 Self-heal Daemon on san11 N/A N/A Y 29084 Bitrot Daemon on san11 N/A N/A Y 28854 Scrubber Daemon on san11 N/A N/A Y 28865 Self-heal Daemon on san7 N/A N/A Y 20379 Bitrot Daemon on san7 N/A N/A Y 20357 Scrubber Daemon on san7 N/A N/A Y 20368 Self-heal Daemon on san15 N/A N/A Y 27000 Bitrot Daemon on san15 N/A N/A Y 26852 Scrubber Daemon on san15 N/A N/A Y 26863 Self-heal Daemon on san9 N/A N/A Y 26722 Bitrot Daemon on san9 N/A N/A Y 26524 Scrubber Daemon on san9 N/A N/A Y 26535 Self-heal Daemon on san5 N/A N/A Y 19696 Bitrot Daemon on san5 N/A N/A Y 19533 Scrubber Daemon on san5 N/A N/A Y 19544 Self-heal Daemon on san4 N/A N/A Y 1520 Bitrot Daemon on san4 N/A N/A Y 1337 Scrubber Daemon on san4 N/A N/A Y 1349
Task Status of Volume ssd
There are no active volume tasks
- The output of the
gluster volume heal
command:
Volume media:
Brick san6:/data/brick1/ Status: Connected Number of entries: 0
Brick san7:/data/brick1/ Status: Connected Number of entries: 0
Brick san8:/data/brick1/ Status: Connected Number of entries: 0
Brick san9:/data/brick1/ Status: Connected Number of entries: 0
Brick san10:/data/brick1/ Status: Connected Number of entries: 0
Brick san6:/data/brick2/ Status: Connected Number of entries: 0
Brick san7:/data/brick2/ Status: Connected Number of entries: 0
Brick san8:/data/brick2/ Status: Connected Number of entries: 0
Brick san9:/data/brick2/ Status: Connected Number of entries: 0
Brick san10:/data/brick2/ Status: Connected Number of entries: 0
Brick san6:/data/brick3/ Status: Connected Number of entries: 0
Brick san7:/data/brick3/ Status: Connected Number of entries: 0
Brick san8:/data/brick3/ Status: Connected Number of entries: 0
Brick san9:/data/brick3/ Status: Connected Number of entries: 0
Brick san10:/data/brick3/ Status: Connected Number of entries: 0
Brick san6:/data/brick4/ Status: Connected Number of entries: 0
Brick san7:/data/brick4/ Status: Connected Number of entries: 0
Brick san8:/data/brick4/ Status: Connected Number of entries: 0
Brick san9:/data/brick4/ Status: Connected Number of entries: 0
Brick san10:/data/brick4/ Status: Connected Number of entries: 0
Brick san6:/data/brick5/ Status: Connected Number of entries: 0
Brick san7:/data/brick5/ Status: Connected Number of entries: 0
Brick san8:/data/brick5/ Status: Connected Number of entries: 0
Brick san9:/data/brick5/ Status: Connected Number of entries: 0
Brick san10:/data/brick5/ Status: Connected Number of entries: 0
Brick san6:/data/brick6/ Status: Connected Number of entries: 0
Brick san7:/data/brick6/ Status: Connected Number of entries: 0
Brick san8:/data/brick6/ Status: Connected Number of entries: 0
Brick san9:/data/brick6/ Status: Connected Number of entries: 0
Brick san10:/data/brick6/ Status: Connected Number of entries: 0
Brick san6:/data/brick7/ Status: Connected Number of entries: 0
Brick san7:/data/brick7/ Status: Connected Number of entries: 0
Brick san8:/data/brick7/ Status: Connected Number of entries: 0
Brick san9:/data/brick7/ Status: Connected Number of entries: 0
Brick san10:/data/brick7/ Status: Connected Number of entries: 0
Brick san6:/data/brick8/ Status: Connected Number of entries: 0
Brick san7:/data/brick8/ Status: Connected Number of entries: 0
Brick san8:/data/brick8/ Status: Connected Number of entries: 0
Brick san9:/data/brick8/ Status: Connected Number of entries: 0
Brick san10:/data/brick8/ Status: Connected Number of entries: 0
Brick san6:/data/brick9/ Status: Connected Number of entries: 0
Brick san7:/data/brick9/ Status: Connected Number of entries: 0
Brick san8:/data/brick9/ Status: Connected Number of entries: 0
Brick san9:/data/brick9/ Status: Connected Number of entries: 0
Brick san10:/data/brick9/ Status: Connected Number of entries: 0
Brick san6:/data/brick10/ Status: Connected Number of entries: 0
Brick san7:/data/brick10/ Status: Connected Number of entries: 0
Brick san8:/data/brick10/ Status: Connected Number of entries: 0
Brick san9:/data/brick10/ Status: Connected Number of entries: 0
Brick san10:/data/brick10/ Status: Connected Number of entries: 0
Brick san1:/data/brick1/ Status: Connected Number of entries: 0
Brick san4:/data/brick1/ Status: Connected Number of entries: 0
Brick san5:/data/brick1/ Status: Connected Number of entries: 0
Brick san11:/data/brick1/ Status: Connected Number of entries: 0
Brick san12:/data/brick1/ Status: Connected Number of entries: 0
Brick san1:/data/brick2/ Status: Connected Number of entries: 0
Brick san4:/data/brick2/ Status: Connected Number of entries: 0
Brick san5:/data/brick2/ Status: Connected Number of entries: 0
Brick san11:/data/brick2/ Status: Connected Number of entries: 0
Brick san12:/data/brick2/ Status: Connected Number of entries: 0
Brick san1:/data/brick3/ Status: Connected Number of entries: 0
Brick san4:/data/brick3/ Status: Connected Number of entries: 0
Brick san5:/data/brick3/ Status: Connected Number of entries: 0
Brick san11:/data/brick3/ Status: Connected Number of entries: 0
Brick san12:/data/brick3/ Status: Connected Number of entries: 0
Brick san1:/data/brick4/ Status: Connected Number of entries: 0
Brick san4:/data/brick4/ Status: Connected Number of entries: 0
Brick san5:/data/brick4/ Status: Connected Number of entries: 0
Brick san11:/data/brick4/ Status: Connected Number of entries: 0
Brick san12:/data/brick4/ Status: Connected Number of entries: 0
Brick san1:/data/brick5/ Status: Connected Number of entries: 0
Brick san4:/data/brick5/ Status: Connected Number of entries: 0
Brick san5:/data/brick5/ Status: Connected Number of entries: 0
Brick san11:/data/brick5/ Status: Connected Number of entries: 0
Brick san12:/data/brick5/ Status: Connected Number of entries: 0
Brick san1:/data/brick6/ Status: Connected Number of entries: 0
Brick san4:/data/brick6/ Status: Connected Number of entries: 0
Brick san5:/data/brick6/ Status: Connected Number of entries: 0
Brick san11:/data/brick6/ Status: Connected Number of entries: 0
Brick san12:/data/brick6/ Status: Connected Number of entries: 0
Brick san1:/data/brick7/ Status: Connected Number of entries: 0
Brick san4:/data/brick7/ Status: Connected Number of entries: 0
Brick san5:/data/brick7/ Status: Connected Number of entries: 0
Brick san11:/data/brick7/ Status: Connected Number of entries: 0
Brick san12:/data/brick7/ Status: Connected Number of entries: 0
Brick san1:/data/brick8/ Status: Connected Number of entries: 0
Brick san4:/data/brick8/ Status: Connected Number of entries: 0
Brick san5:/data/brick8/ Status: Connected Number of entries: 0
Brick san11:/data/brick8/ Status: Connected Number of entries: 0
Brick san12:/data/brick8/ Status: Connected Number of entries: 0
Brick san1:/data/brick9/ Status: Connected Number of entries: 0
Brick san4:/data/brick9/ Status: Connected Number of entries: 0
Brick san5:/data/brick9/ Status: Connected Number of entries: 0
Brick san11:/data/brick9/ Status: Connected Number of entries: 0
Brick san12:/data/brick9/ Status: Connected Number of entries: 0
Brick san1:/data/brick10/ Status: Connected Number of entries: 0
Brick san4:/data/brick10/ Status: Connected Number of entries: 0
Brick san5:/data/brick10/ Status: Connected Number of entries: 0
Brick san11:/data/brick10/ Status: Connected Number of entries: 0
Brick san12:/data/brick10/ Status: Connected Number of entries: 0
Brick san13:/data/brick1/ Status: Connected Number of entries: 0
Brick san14:/data/brick1/ Status: Connected Number of entries: 0
Brick san15:/data/brick1/ Status: Connected Number of entries: 0
Brick san16:/data/brick1/ Status: Connected Number of entries: 0
Brick san17:/data/brick1/ Status: Connected Number of entries: 0
Brick san13:/data/brick2/ Status: Connected Number of entries: 0
Brick san14:/data/brick2/ Status: Connected Number of entries: 0
Brick san15:/data/brick2/ Status: Connected Number of entries: 0
Brick san16:/data/brick2/ Status: Connected Number of entries: 0
Brick san17:/data/brick2/ Status: Connected Number of entries: 0
Brick san13:/data/brick3/ Status: Connected Number of entries: 0
Brick san14:/data/brick3/ Status: Connected Number of entries: 0
Brick san15:/data/brick3/ Status: Connected Number of entries: 0
Brick san16:/data/brick3/ Status: Connected Number of entries: 0
Brick san17:/data/brick3/ Status: Connected Number of entries: 0
Brick san13:/data/brick4/ Status: Connected Number of entries: 0
Brick san14:/data/brick4/ Status: Connected Number of entries: 0
Brick san15:/data/brick4/ Status: Connected Number of entries: 0
Brick san16:/data/brick4/ Status: Connected Number of entries: 0
Brick san17:/data/brick4/ Status: Connected Number of entries: 0
Brick san13:/data/brick5/ Status: Connected Number of entries: 0
Brick san14:/data/brick5/ Status: Connected Number of entries: 0
Brick san15:/data/brick5/ Status: Connected Number of entries: 0
Brick san16:/data/brick5/ Status: Connected Number of entries: 0
Brick san17:/data/brick5/ Status: Connected Number of entries: 0
Brick san13:/data/brick6/ Status: Connected Number of entries: 0
Brick san14:/data/brick6/ Status: Connected Number of entries: 0
Brick san15:/data/brick6/ Status: Connected Number of entries: 0
Brick san16:/data/brick6/ Status: Connected Number of entries: 0
Brick san17:/data/brick6/ Status: Connected Number of entries: 0
Brick san13:/data/brick7/ Status: Connected Number of entries: 0
Brick san14:/data/brick7/ Status: Connected Number of entries: 0
Brick san15:/data/brick7/ Status: Connected Number of entries: 0
Brick san16:/data/brick7/ Status: Connected Number of entries: 0
Brick san17:/data/brick7/ Status: Connected Number of entries: 0
Brick san13:/data/brick8/ Status: Connected Number of entries: 0
Brick san14:/data/brick8/ Status: Connected Number of entries: 0
Brick san15:/data/brick8/ Status: Connected Number of entries: 0
Brick san16:/data/brick8/ Status: Connected Number of entries: 0
Brick san17:/data/brick8/ Status: Connected Number of entries: 0
Brick san13:/data/brick9/ Status: Connected Number of entries: 0
Brick san14:/data/brick9/ Status: Connected Number of entries: 0
Brick san15:/data/brick9/ Status: Connected Number of entries: 0
Brick san16:/data/brick9/ Status: Connected Number of entries: 0
Brick san17:/data/brick9/ Status: Connected Number of entries: 0
Brick san13:/data/brick10/ Status: Connected Number of entries: 0
Brick san14:/data/brick10/ Status: Connected Number of entries: 0
Brick san15:/data/brick10/ Status: Connected Number of entries: 0
Brick san16:/data/brick10/ Status: Connected Number of entries: 0
Brick san17:/data/brick10/ Status: Connected Number of entries: 0
Volume ssd:
Brick san6:/data/nvme/ Status: Connected Number of entries: 0
Brick san7:/data/nvme/ Status: Connected Number of entries: 0
Brick san8:/data/nvme/ Status: Connected Number of entries: 0
Brick san9:/data/nvme/ Status: Connected Number of entries: 0
Brick san10:/data/nvme/ Status: Connected Number of entries: 0
Brick san1:/data/nvme/ Status: Connected Number of entries: 0
Brick san4:/data/nvme/ Status: Connected Number of entries: 0
Brick san5:/data/nvme/ Status: Connected Number of entries: 0
Brick san11:/data/nvme/ Status: Connected Number of entries: 0
Brick san12:/data/nvme/ Status: Connected Number of entries: 0
Brick san13:/data/nvme/ Status: Connected Number of entries: 0
Brick san14:/data/nvme/ Status: Connected Number of entries: 0
Brick san15:/data/nvme/ Status: Connected Number of entries: 0
Brick san16:/data/nvme/ Status: Connected Number of entries: 0
Brick san17:/data/nvme/ Status: Connected Number of entries: 0
- Provide logs present on following locations of client and server nodes (/var/log/glusterfs/): https://drive.google.com/file/d/11DaHa-_1v_zJBRSLXAwBHxMiNHPWjpvB/view?usp=sharing
- Is there any crash? Provide the backtrace and coredump
Additional info: Thanks in advance for any assistance.
- The operating system / glusterfs version: Ubuntu 18.04
Note: Please hide any confidential data which you don't want to share in public, like IP addresses, file names, hostnames, or any other configuration.