basic6 opened this issue 7 years ago
Your observation is right: the manual step is currently needed, though it could be done in one go as part of the replace step. We'd have to add a new option for that, but it's basically just calling one more ioctl after a successful replace.
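For reference, the manual step being discussed looks roughly like this (a sketch; the devid 2, /dev/sdX and /mnt values are placeholders, not taken from any report in this thread):

```shell
# Replace devid 2 and wait for completion (-B: do not background)
btrfs replace start -B 2 /dev/sdX /mnt

# The filesystem does not grow onto the larger device automatically;
# this extra resize (the additional ioctl mentioned above) is needed:
btrfs filesystem resize 2:max /mnt
```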
Thanks for sharing this. I had the same issue and had no idea I had to apply the resize operation to a device rather than the entire filesystem. That's completely non-intuitive.
Sorry for resurrecting this thread, but my problem is similar. I had 3 x 8 TB disks:

devid 1 size 7.28TiB used 7.20TiB path /dev/sdc
devid 2 size 7.28TiB used 7.20TiB path /dev/sdg
devid 3 size 7.28TiB used 7.20TiB path /dev/sdd
with around 286G free space
Then I replaced devid 2 with a larger disk. Result:

devid 1 size 7.28TiB used 7.20TiB path /dev/sdc
devid 2 size 14.55TiB used 7.20TiB path /dev/sde
devid 3 size 7.28TiB used 7.20TiB path /dev/sdd

So far so good. BUT: df -h shows a strange result:

/dev/sdc 30T 22T 293G 99% /mnt/BTRFS

The overall size seems to be good, but there should be around 286G + 8TB free space, right?
I already started a full balance, but will this help? 3233 out of about 7403 chunks balanced (3234 considered), 56% left
Kernel version: 4.19.0
Any hint appreciated.
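Judging by the rest of this thread, a balance alone will not reclaim the missing space; the replaced device has to be grown explicitly. A sketch, using the devid and mount point from the output above:

```shell
# Grow the replaced devid 2 to the full size of the new 16 TB disk:
btrfs filesystem resize 2:max /mnt/BTRFS
```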
@basic6 I sent some patches to the btrfs ML to implement this feature, let's hope they get reviewed soon :)
Sent a v2 of this same patch just now.
This problem happens even without replacing, if I grow the underlying LVM LV, for example. I have a btrfs raid1 using 2 devices:
# btrfs filesystem show /var/backups/brick/brick-svs/mysql
Label: none uuid: 551b2a31-ecaa-4e74-9051-c2ea388dc2ab
Total devices 2 FS bytes used 640.00KiB
devid 1 size 35.00GiB used 2.03GiB path /dev/mapper/brick--svs--hdd1-b_brick--svs_mysql
devid 2 size 35.00GiB used 2.03GiB path /dev/mapper/brick--svs--hdd2-b_brick--svs_mysql
# df -h /var/backups/brick/brick-svs/mysql
/dev/mapper/brick--svs--hdd1-b_brick--svs_mysql 35G 3.4M 34G 1% /var/backups/brick/brick-svs/mysql
I grow the first disk /dev/mapper/brick--svs--hdd1-b_brick--svs_mysql by 5G and resize the btrfs filesystem:
# lvextend -L+5G /dev/brick-svs-hdd1/b_brick-svs_mysql
# btrfs filesystem resize max /var/backups/brick/brick-svs/mysql
# btrfs filesystem show /var/backups/brick/brick-svs/mysql
Label: none uuid: 551b2a31-ecaa-4e74-9051-c2ea388dc2ab
Total devices 2 FS bytes used 640.00KiB
devid 1 size 40.00GiB used 2.03GiB path /dev/mapper/brick--svs--hdd1-b_brick--svs_mysql
devid 2 size 35.00GiB used 2.03GiB path /dev/mapper/brick--svs--hdd2-b_brick--svs_mysql
# df -h /var/backups/brick/brick-svs/mysql
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/brick--svs--hdd1-b_brick--svs_mysql 38G 3.9M 34G 1% /var/backups/brick/brick-svs/mysql
And after resizing the second drive, btrfs still sees device 2 as 35G:
# lvextend -L+5G /dev/brick-svs-hdd2/b_brick-svs_mysql
# btrfs filesystem resize max /var/backups/brick/brick-svs/mysql
# btrfs filesystem show /var/backups/brick/brick-svs/mysql
Label: none uuid: 551b2a31-ecaa-4e74-9051-c2ea388dc2ab
Total devices 2 FS bytes used 640.00KiB
devid 1 size 40.00GiB used 2.03GiB path /dev/mapper/brick--svs--hdd1-b_brick--svs_mysql
devid 2 size 35.00GiB used 2.03GiB path /dev/mapper/brick--svs--hdd2-b_brick--svs_mysql
# df -h /var/backups/brick/brick-svs/mysql
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/brick--svs--hdd1-b_brick--svs_mysql 38G 3.9M 34G 1% /var/backups/brick/brick-svs/mysql
Only by passing the needed device id manually does the btrfs filesystem resize to the new size:
# btrfs filesystem resize 2:max /var/backups/brick/brick-svs/mysql
Resize '/var/backups/brick/brick-svs/mysql' of '2:max'
# btrfs filesystem show /var/backups/brick/brick-svs/mysql
Label: none uuid: 551b2a31-ecaa-4e74-9051-c2ea388dc2ab
Total devices 2 FS bytes used 640.00KiB
devid 1 size 40.00GiB used 2.03GiB path /dev/mapper/brick--svs--hdd1-b_brick--svs_mysql
devid 2 size 40.00GiB used 2.03GiB path /dev/mapper/brick--svs--hdd2-b_brick--svs_mysql
# df -h /var/backups/brick/brick-svs/mysql
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/brick--svs--hdd1-b_brick--svs_mysql 40G 3.9M 39G 1% /var/backups/brick/brick-svs/mysql
It would be good to improve this by making the resize max command resize all drives, because I spent a lot of time figuring out why only the first drive was resized successfully.
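Until such an option exists, a possible workaround is to loop over every devid reported by btrfs filesystem show (a sketch, assuming the default output format where the devid is the second field of each "devid" line):

```shell
#!/bin/sh
# Grow every device of the btrfs filesystem mounted at $MNT to its maximum.
MNT=/var/backups/brick/brick-svs/mysql

# Lines look like "devid    1 size 40.00GiB used 2.03GiB path ...";
# field 2 is the device id.
for devid in $(btrfs filesystem show "$MNT" | awk '/devid/ {print $2}'); do
    btrfs filesystem resize "${devid}:max" "$MNT"
done
```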
@marcosps, did you create a PR with your implementation? Can't find it in https://github.com/kdave/btrfs-progs/pulls?q=is%3Apr+
@MurzNN the patch wasn't accepted, due to lack of review or maybe some other reason.
Suppose you have a BTRFS RAID1 filesystem with 4 drives, 3 GB each, 6 GB capacity:
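Such a test filesystem can be reproduced with loop devices (a sketch; the file names and loop device paths are illustrative and may differ on your system):

```shell
# Four 3G backing files
truncate -s 3G disk1.img disk2.img disk3.img disk4.img
for f in disk1.img disk2.img disk3.img disk4.img; do losetup -f "$f"; done

# RAID1 for both data and metadata across all four devices
mkfs.btrfs -d raid1 -m raid1 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
mount /dev/loop0 /mnt
```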
After replacing two of the 3G drives with 4G drives, you should have a new total capacity of 7 GB... but you don't. The filesystem still reports its initial capacity of 6 GB instead of the expected 7 GB.
It is only after manually growing the filesystem on each replaced device that you get the full capacity of the drives:
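Assuming the replaced drives ended up as devids 1 and 2 (hypothetical; the actual devids depend on which drives were replaced) and the filesystem is mounted at /mnt, the manual steps look like:

```shell
btrfs filesystem resize 1:max /mnt
btrfs filesystem resize 2:max /mnt
```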
These extra steps should not be necessary. After replacing some drives, it looks as if BTRFS is unable to make use of the new capacity, when in fact a few manual resize commands simply need to be run.
Tested with btrfs-progs 4.4.