gluster / gdeploy

gdeploy - an Ansible-based tool to deploy GlusterFS
GNU General Public License v3.0

Brick directory is not created #424

Open mbukatov opened 7 years ago

mbukatov commented 7 years ago

When I use the [backend-setup] feature to set up gluster bricks, the brick directories are not created.

Version

gdeploy-2.0.2-7.noarch (from sac-gdeploy copr)

Steps to Reproduce

  1. Create a trusted storage pool out of a few clean CentOS 7 machines
  2. Create a gdeploy config file to set up gluster bricks there:
$ cat gluster_volume.conf
[hosts]
mbukatov-usm1-gl1.example.com
mbukatov-usm1-gl2.example.com
mbukatov-usm1-gl3.example.com
mbukatov-usm1-gl4.example.com

[backend-setup]
devices=vdb,vdc
vgs=vg_gluster_1,vg_gluster_2
pools=pool_gluster_1,pool_gluster_2
lvs=lv_gluster_1,lv_gluster_2
mountpoints=/mnt/glusterbrick_1,/mnt/glusterbrick_2
brick_dirs=/mnt/glusterbrick_1/1,/mnt/glusterbrick_2/2
  3. Run gdeploy: gdeploy -c gluster_volume.conf
  4. Inspect the storage configuration on the gluster servers

Actual Result

The LVM is configured as expected:

[root@mbukatov-usm1-gl1 ~]# lsblk /dev/vdb /dev/vdc
NAME                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vdb                                   253:16   0    1T  0 disk 
├─vg_gluster_1-pool_gluster_1_tmeta   252:0    0 15.8G  0 lvm  
│ └─vg_gluster_1-pool_gluster_1-tpool 252:4    0 1008G  0 lvm  
│   ├─vg_gluster_1-pool_gluster_1     252:6    0 1008G  0 lvm  
│   └─vg_gluster_1-lv_gluster_1       252:7    0 1008G  0 lvm  /mnt/glusterbrick_1
└─vg_gluster_1-pool_gluster_1_tdata   252:2    0 1008G  0 lvm  
  └─vg_gluster_1-pool_gluster_1-tpool 252:4    0 1008G  0 lvm  
    ├─vg_gluster_1-pool_gluster_1     252:6    0 1008G  0 lvm  
    └─vg_gluster_1-lv_gluster_1       252:7    0 1008G  0 lvm  /mnt/glusterbrick_1
vdc                                   253:32   0    1T  0 disk 
├─vg_gluster_2-pool_gluster_2_tmeta   252:1    0 15.8G  0 lvm  
│ └─vg_gluster_2-pool_gluster_2-tpool 252:5    0 1008G  0 lvm  
│   ├─vg_gluster_2-pool_gluster_2     252:8    0 1008G  0 lvm  
│   └─vg_gluster_2-lv_gluster_2       252:9    0 1008G  0 lvm  /mnt/glusterbrick_2
└─vg_gluster_2-pool_gluster_2_tdata   252:3    0 1008G  0 lvm  
  └─vg_gluster_2-pool_gluster_2-tpool 252:5    0 1008G  0 lvm  
    ├─vg_gluster_2-pool_gluster_2     252:8    0 1008G  0 lvm  
    └─vg_gluster_2-lv_gluster_2       252:9    0 1008G  0 lvm  /mnt/glusterbrick_2

But I don't see any brick directory:

# tree /mnt/
/mnt/
├── glusterbrick_1
└── glusterbrick_2

2 directories, 0 files

Expected Results

The brick directories have been created as well:

# tree /mnt/
/mnt/
├── glusterbrick_1
│   └── 1
└── glusterbrick_2
    └── 2

4 directories, 0 files
sac commented 7 years ago

@mbukatov in the current design, the brick_dirs values are passed to gluster as-is, and gluster takes care of creating the brick directories; gdeploy creates only the mount points.

The reason is that gluster already creates the brick directories, so we did not want to duplicate that work.
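
For illustration, a sketch of the behaviour described above: when the volume is created, glusterd creates the trailing brick directory itself, so only the mount points from [backend-setup] need to exist beforehand. The volume name and replica settings here are hypothetical, not taken from this issue:

# Mount points /mnt/glusterbrick_1 and /mnt/glusterbrick_2 already exist;
# glusterd creates the trailing /1 and /2 brick directories at this step.
gluster volume create glustervol replica 2 \
    mbukatov-usm1-gl1.example.com:/mnt/glusterbrick_1/1 \
    mbukatov-usm1-gl2.example.com:/mnt/glusterbrick_1/1 \
    mbukatov-usm1-gl3.example.com:/mnt/glusterbrick_2/2 \
    mbukatov-usm1-gl4.example.com:/mnt/glusterbrick_2/2
gluster volume start glustervol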

mbukatov commented 7 years ago

@sac Ok, thanks for the explanation. But I'm still a little puzzled about the meaning of the brick_dirs= option in backend setup. What is its meaning now that gluster handles this job? Should we remove it from the documentation? Also, I would expect gluster to create those directories during volume setup (gluster is not aware of any brick until one specifies which bricks are part of a volume), right?

sac commented 7 years ago

@mbukatov you are right that the brick_dirs= option is confusing in the [backend-setup] section. Initially we had brick_dirs in [backend-setup], and when a [volume] section was present it would use this information to create a volume.

As the project evolved we moved brick_dirs to the [volume] section, but left the variable in [backend-setup] for backward compatibility. The mistake was that we did not set any deadline to remove brick_dirs from [backend-setup], so it stayed on. That is the story behind that variable.
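
For reference, a minimal sketch of the current recommended placement, with brick_dirs under the [volume] section instead of [backend-setup] (the volume name and replica settings are illustrative, not taken from this issue):

[hosts]
mbukatov-usm1-gl1.example.com
mbukatov-usm1-gl2.example.com
mbukatov-usm1-gl3.example.com
mbukatov-usm1-gl4.example.com

[backend-setup]
devices=vdb,vdc
vgs=vg_gluster_1,vg_gluster_2
pools=pool_gluster_1,pool_gluster_2
lvs=lv_gluster_1,lv_gluster_2
mountpoints=/mnt/glusterbrick_1,/mnt/glusterbrick_2

[volume]
action=create
volname=glustervol
replica=yes
replica_count=2
brick_dirs=/mnt/glusterbrick_1/1,/mnt/glusterbrick_2/2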

We will keep this issue open until we deprecate that variable and eventually remove it.