Open · gdevenyi opened this issue 1 month ago
The `devid` filter selects block groups based on which devices they currently occupy. So your command is asking for everything currently allocated on device 2 to be relocated onto whichever devices are preferred by the target raid profile. For the `single` profile, that is the device with the most free space, with data distributed across multiple devices when several devices have equal free space. Given your statement of the final result, the largest devices were likely devices 1 and 2, since that was where the data ended up.
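The allocation behavior described above can be modeled in a few lines of Python. This is a simplified illustration, not btrfs source: the function names and the "pick the device with the most unallocated space" rule are a sketch of the `single`-profile heuristic as described here.

```python
def pick_device(unallocated):
    """Simplified model of single-profile chunk allocation: a new chunk
    goes to the device with the most unallocated space.

    unallocated: dict mapping devid -> unallocated bytes.
    Returns the devid that would receive the next chunk.
    """
    return max(unallocated, key=unallocated.get)


def simulate(unallocated, chunk_size, n_chunks):
    """Allocate n_chunks chunks, tracking where each one lands.

    Because each allocation shrinks the winner's unallocated space,
    two equally sized devices end up alternating, which is why the
    relocated data was split evenly between devices 1 and 2.
    """
    placements = []
    for _ in range(n_chunks):
        devid = pick_device(unallocated)
        unallocated[devid] -= chunk_size
        placements.append(devid)
    return placements
```

With two devices of equal free space, `simulate({1: 10, 2: 10}, 1, 4)` alternates between them, matching the observed outcome.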
There currently isn't a good way to move data from all devices to one device while also changing profile without moving the data multiple times. You can work around it by alternating between two steps:

1. Resize devices 1, 3, 4, and 5 smaller, in 1 GiB increments, so that they have less unallocated space than device 2.
2. Perform some of the conversion to `single` while device 2 has more unallocated space than the others, then stop the balance when the unallocated space is equal and go back to resizing the other devices smaller.

Repeat until all data has been removed from devices 1, 3, 4, and 5. This reduces the number of data movements, but it requires a shell for loop or a small python-btrfs script to control the raw kernel ioctls and handle the switches between resizing and balancing.
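The switch between resizing and balancing can be sketched as a pure decision function. This is illustrative only (the name `next_action` is mine, and it is not part of the python-btrfs API); the unallocated-space numbers would come from parsing `btrfs filesystem usage` output or the device-info ioctl.

```python
def next_action(unallocated, target_devid):
    """Decide the next step of the drain loop.

    unallocated: dict mapping devid -> unallocated bytes.
    target_devid: the device the data should end up on (device 2 here).

    Returns "balance" while the target device has strictly more
    unallocated space than every other device (new single chunks will
    then land on it), and "resize" when the other devices still need
    to be shrunk further first.
    """
    others = [v for d, v in unallocated.items() if d != target_devid]
    if all(unallocated[target_devid] > v for v in others):
        return "balance"
    return "resize"
```

A driver loop would call this after each 1 GiB resize or each paused balance, until the other devices report no allocated data left.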
If this is a feature request: it's a fairly straightforward patch to disable allocation on some devices (a variant of the existing allocation preferences patch with the "allocate nothing" extension). Once that is merged, this operation becomes two steps: first disable allocation on every device other than device 2, then run:
```
btrfs balance start -dconvert=single -mconvert=dup /storage
```
With allocation disabled on devices other than device 2, balance will have no choice but to reallocate all the data there.
> Given your statement of the final result, the largest devices were likely devices 1 and 2, since that was where the data ended up.
Yes, this is correct.
> If this is a feature request
I guess it is now, since it is not currently possible to "un-balance" data off of a disk in preparation for removal. It sounds like the "allocate nothing" allocation preference will address this.
I executed:
```
btrfs balance start --force -sconvert=single,devid=2 -dconvert=single,devid=2 -mconvert=single,devid=2 /storage
```
With the intention of removing devices 1, 3, 4, and 5 from my btrfs filesystem.
The command ran overnight, and afterwards I found that devices 4 and 5 had been vacated of data, but devices 1 and 2 held equal amounts, although everything was now stored as "single".