I don't know how useful you will find this to be. Feel free to close if you think it does not add value; I only made this because I wanted to learn more about btrfs.
When running device remove, the difference compared to btrfs device usage is pretty extreme (2.70TiB vs 10.75TiB). So it could be useful in that situation, to get a better understanding of the btrfs device remove process, which people complain about being slow but which in reality seems to be pretty fast; btrfs device usage just isn't showing the whole picture.
The default mode is pretty quick and only prints the number of device extents, which can be useful for planning the order of device remove commands. For more thorough device introspection I added a --usage flag that iterates over all the device extents; that part is slow and takes between 20 seconds and 5 minutes to run.
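For reference, here is a minimal sketch of the counting idea (not the actual implementation), assuming python-btrfs exposes a FileSystem.dev_extents() generator yielding objects with devid and length fields; those names are assumptions and may not match the exact API:

#!/usr/bin/env python3
# Minimal sketch: count device extents (and the bytes they cover) per device.
# Assumption: python-btrfs exposes FileSystem.dev_extents() yielding objects
# with devid and length attributes; names may differ between versions.
import collections
import sys

import btrfs

def main(path):
    fs = btrfs.FileSystem(path)
    counts = collections.Counter()
    lengths = collections.Counter()
    for dev_extent in fs.dev_extents():
        counts[dev_extent.devid] += 1
        lengths[dev_extent.devid] += dev_extent.length
    for devid in sorted(counts):
        print("devid {}: {} device extents, {:.2f} TiB allocated".format(
            devid, counts[devid], lengths[devid] / 2**40))

if __name__ == '__main__':
    main(sys.argv[1])  # e.g. sudo ./count_dev_extents.py /mnt/do

Since allocated space on a device is the space covered by its device extents, the per-device totals should roughly match the "Allocated bytes per device" table in the usage report below.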
You can compare this to the output of the fs usage-report:
sudo btrfs-usage-report /mnt/do
Btrfs usage report for /mnt/do
Filesystem ID: 8da156d7-abd4-4c84-a2f0-8c17f43eb60c
Mixed groups: False
Total physical space usage:
|
| Total filesystem size: 56.21TiB
| Allocated bytes: 52.86TiB
| Allocatable bytes remaining: 3.35TiB
Target profiles:
|
| type profile
| ---- -------
| System RAID1
| Metadata RAID1
| Data single
Estimated virtual space left for use:
|
| type free
| ---- ----
| Data 29.66TiB
| MetaData 31.73GiB
Virtual space usage by block group type:
|
| type total used
| ---- ----- ----
| Data 52.75TiB 26.42TiB
| System 32.00MiB 6.39MiB
| Metadata 58.00GiB 34.27GiB
Allocated raw disk bytes by chunk type.
|
| flags allocated used parity *)
| ----- --------- ---- ---------
| DATA 52.75TiB 26.42TiB 0.00B
| SYSTEM|RAID1 64.00MiB 12.78MiB 0.00B
| METADATA|RAID1 116.00GiB 68.54GiB 0.00B
|
| *) Parity is a reserved part of the allocated bytes, limiting the
| amount that can be used for data or metadata.
Allocated bytes per device:
|
| devid total size allocated path
| ----- ---------- --------- ----
| 1 16.37TiB 16.11TiB /dev/sda
| 2 12.73TiB 12.47TiB /dev/sdc
| 4 12.73TiB 12.47TiB /dev/sdd
| 5 14.37TiB 11.81TiB /dev/sdh
Allocated bytes per device, split up by chunk type.
|
| Device ID: 1
| | flags allocated parity *)
| | ----- --------- ---------
| | DATA 16.05TiB 0.00B
| | METADATA|RAID1 58.00GiB 0.00B
| | SYSTEM|RAID1 32.00MiB 0.00B
|
| Device ID: 2
| | flags allocated parity *)
| | ----- --------- ---------
| | DATA 12.47TiB 0.00B
|
| Device ID: 4
| | flags allocated parity *)
| | ----- --------- ---------
| | DATA 12.47TiB 0.00B
|
| Device ID: 5
| | flags allocated parity *)
| | ----- --------- ---------
| | DATA 11.75TiB 0.00B
| | METADATA|RAID1 58.00GiB 0.00B
| | SYSTEM|RAID1 32.00MiB 0.00B
|
| *) Parity is a reserved part of the allocated bytes, limiting the
| amount that can be used for data or metadata.
Unallocatable raw disk space:
|
| Reclaimable (by using balance): 0.00B
| Not reclaimable (because of different disk sizes): 0.00B
Unallocatable bytes per device, given current target profiles:
|
| devid soft *) hard **) reclaimable ***)
| ----- ------- -------- ----------------
| 1 0.00B 0.00B 0.00B
| 2 0.00B 0.00B 0.00B
| 4 0.00B 0.00B 0.00B
| 5 0.00B 0.00B 0.00B
|
| *) Because allocations in the filesystem are unbalanced.
| **) Because of having different sizes of devices attached.
| ***) Amount of 'soft' unallocatable space that can be reclaimed,
| before hitting the 'hard' limit.
I'm not sure why there are slightly more device block groups than fs chunks.
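If anyone wants to reproduce that comparison, here is a hypothetical cross-check (again assuming the python-btrfs generators named above); each chunk should normally account for one device extent per stripe:

#!/usr/bin/env python3
# Hypothetical cross-check: compare device extents (device tree) against the
# stripes referenced by chunks (chunk tree). Assumes python-btrfs exposes
# FileSystem.chunks() and FileSystem.dev_extents(); attribute names may differ.
import sys

import btrfs

def main(path):
    fs = btrfs.FileSystem(path)
    num_dev_extents = sum(1 for _ in fs.dev_extents())
    num_chunks = 0
    num_stripes = 0
    for chunk in fs.chunks():
        num_chunks += 1
        num_stripes += chunk.num_stripes
    print("chunks: {}  chunk stripes: {}  device extents: {}".format(
        num_chunks, num_stripes, num_dev_extents))

if __name__ == '__main__':
    main(sys.argv[1])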