Nuitari opened this issue 2 months ago
Number of Bricks: 1 x 9 = 9
The volume type looks wrong. Did you create the volume with replica count 9, or did you want to create a Distributed Replicate volume with replica count 3?
Please share the Volume create command used here.
Use the command below to create a Distributed Replicate volume with replica count 3:
gluster volume create sharedProd replica 3 \
srv1:/var/brick \
srv2:/var/brick \
srv3:/var/brick \
srv4:/var/brick \
srv5:/var/brick \
srv6:/var/brick \
srv7:/var/brick \
srv8:/var/brick \
srv9:/var/brick
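With 9 bricks and replica 3, gluster arranges them as 3 distribute subvolumes of 3 replicas each, so gluster volume info should then report "Number of Bricks: 3 x 3 = 9" rather than "1 x 9 = 9".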
The goal is to have 9 replicas. There are only about 20 GB of data, but we need high availability.
This is not a supported configuration; only replica counts 2 and 3 are tested and supported. You can explore a Disperse volume, where you get high availability and more storage space from the same number of bricks. For example, create a volume with 6 data bricks and 3 redundancy bricks. Your volume size will be 6 x the size of each brick, and the volume will stay available even if 3 nodes/bricks go down.
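As a sketch, the 6 data + 3 redundancy layout described above could be created like this, reusing the hostnames and brick paths from the earlier command (gluster's disperse count is the total number of bricks, so 6 data + 3 redundancy is expressed as disperse 9 redundancy 3; adjust names and paths to your environment):

gluster volume create sharedProd disperse 9 redundancy 3 \
srv1:/var/brick \
srv2:/var/brick \
srv3:/var/brick \
srv4:/var/brick \
srv5:/var/brick \
srv6:/var/brick \
srv7:/var/brick \
srv8:/var/brick \
srv9:/var/brick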
@xhernandez / @pranithk Is it possible to have a redundancy count greater than the number of data bricks if high availability matters more than storage space?
No, it's not possible. The number of data bricks is enforced to always be greater than half of the total bricks so that quorum can be guaranteed. In this case the maximum-redundancy configuration would be 5 + 4.
One thing to consider is that dispersed volumes require more computational power to encode/decode the data, so performance can differ from a replicated volume (in some workloads it may be better, in others slower). Some testing should be done to make sure everything stays within the allowed tolerance if they want to go with dispersed volumes.
Xavi
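For reference, the maximum-redundancy layout Xavi describes (5 data + 4 redundancy across the same 9 bricks) would look roughly like this; a sketch only, not a tested configuration, using bash brace expansion as shorthand for the nine srvN:/var/brick entries listed earlier:

gluster volume create sharedProd disperse 9 redundancy 4 \
srv{1..9}:/var/brick

This tolerates up to 4 bricks being down while the volume stays available, at a usable capacity of 5 x the brick size.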
We also have a smaller testing environment:
Volume Name: shared1
Type: Replicate
Volume ID: 2073f548-b89a-4687-92f6-486ac661750b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: testsrv1:/var/brick
Brick2: testsrv2:/var/brick
Brick3: testsrv3:/var/brick
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: true
storage.fips-mode-rchecksum: on
transport.address-family: inet
auth.allow: 10.0.0.0/8
The problem presents the same way there. All 3 nodes run glusterfs 10.1 on Ubuntu 22.04.
Description of problem:
Randomly we'll start getting permission-denied errors accompanied by strange mtimes on the FUSE mount.
We could not find a way to reproduce the problem, and it happens on directories that have been present for multiple years.
The symptoms are always similar in that the modified time of the directory is set to some bizarre, inaccurate year:
From the FUSE mount point:
From the brick folder (independent of the brick):
In the logs we see:
Doing a sudo touch on the directory resets the timestamp, and the directories become accessible again.
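The workaround, as a sketch (the mount point and directory name here are placeholders, not taken from the report):

# Inspect the bogus modification time as seen through the FUSE mount
stat /mnt/sharedProd/affected-dir
# Resetting the mtime with touch makes the directory accessible again
sudo touch /mnt/sharedProd/affected-dir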
Expected results: Access as a normal user
Mandatory info:
- The output of the gluster volume info command:
- The output of the gluster volume status command:
- The output of the gluster volume heal command:
- Is there any crash? Provide the backtrace and coredump: No crash, no coredumps
- The operating system / glusterfs version: a mix of Ubuntu 20.04 and Ubuntu 22.04 (glusterfs 10.1 on Ubuntu 22.04, glusterfs 7.2 on Ubuntu 20.04)
The issue happens the same way on both versions.