Closed CorrosiveTruths closed 8 months ago
Thanks for the report, fixed in devel. Crashing is unintentional and I see the stale groups at the end of the list:
Qgroupid Referenced Exclusive Path
-------- ---------- --------- ----
0/5 16.00KiB 16.00KiB <toplevel>
0/256 16.00KiB 16.00KiB subv1
0/266 16.00KiB 16.00KiB subv11
0/268 16.00KiB 16.00KiB subv13
0/270 16.00KiB 16.00KiB subv15
0/272 16.00KiB 16.00KiB subv17
0/274 16.00KiB 16.00KiB subv19
0/276 16.00KiB 16.00KiB subv21
0/278 16.00KiB 16.00KiB subv23
0/258 16.00KiB 16.00KiB subv3
0/260 16.00KiB 16.00KiB subv5
0/262 16.00KiB 16.00KiB subv7
0/264 16.00KiB 16.00KiB subv9
0/257 16.00KiB 16.00KiB <stale>
0/259 16.00KiB 16.00KiB <stale>
0/261 16.00KiB 16.00KiB <stale>
0/263 16.00KiB 16.00KiB <stale>
0/265 16.00KiB 16.00KiB <stale>
0/267 16.00KiB 16.00KiB <stale>
0/269 16.00KiB 16.00KiB <stale>
0/271 16.00KiB 16.00KiB <stale>
0/273 16.00KiB 16.00KiB <stale>
0/275 16.00KiB 16.00KiB <stale>
0/277 16.00KiB 16.00KiB <stale>
0/279 16.00KiB 16.00KiB <stale>
The subvolumes were created in order, with every third one deleted. At this point I'd rather update the documentation regarding stale qgroups, which is currently missing there, than print a hint, which could interfere with processing of the output.
I consider this done, if there's anything else please open a new issue.
When subvolumes are deleted and clear-stale hasn't been run, using qgroup show with --sort=path segfaults with btrfs-progs-6.3.3. It should probably do anything other than crash. Maybe sort them at the top or bottom, ignore them, advise running clear-stale, etc.?