brendanhoar opened 5 years ago
When I execute sudo fstrim -av in a TemplateVM, shut down, restart, and execute sudo fstrim -av again, it looks like it's not doing anything:
/rw: 1.9 GiB (2029568000 bytes) trimmed
/: 9.8 GiB (10514423808 bytes) trimmed
/rw: 1.9 GiB (2029494272 bytes) trimmed
/: 9.8 GiB (10514403328 bytes) trimmed
I've tried this more than twice. Doesn't seem to make a difference. I've tried restarting the VM between fstrims and not restarting. Also doesn't seem to make a difference. [Edit: Actually, this does make a difference. Executing the trim twice consecutively without restarting results in 0 bytes trimmed the second time.]
Is this the expected output, or is something going wrong? It used to be that, eventually, there would be nothing left to trim, but now it's always claiming to have trimmed the same amount no matter how many times the trim is performed.
I think Linux doesn't remember which blocks were trimmed between reboots. What you can do is observe whether fstrim influences the disk usage of that VM. If you want to see it immediately, not only after VM shutdown, check sudo lvs qubes_dom0/vm-VMNAME-private-snap (and similarly the -root-snap volume) in dom0.
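For example, a quick way to check this from dom0 might look like the sketch below; the VM name "work" is just a placeholder, and the -o field list is optional:

    # In dom0; substitute your VM's name for "work".
    sudo lvs -o lv_name,lv_size,data_percent qubes_dom0/vm-work-private-snap qubes_dom0/vm-work-root-snap

    # Or keep an eye on Data% while running fstrim inside the VM:
    watch -n 5 'sudo lvs -o lv_name,data_percent qubes_dom0/vm-work-private-snap qubes_dom0/vm-work-root-snap'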
Indeed, I do see a reduction in the Data% value that way.
I also just edited my previous comment to note that executing the trim twice consecutively without restarting results in 0 bytes trimmed the second time.
However, when I restart the VM, the Data% value goes back up (not all the way), and when I execute fstrim -av the first time, the trimmed values are the same as they were the first time in the previous round. (The second time is again 0 bytes.)
All in all, I'm not sure what to make of this. It still seems like executing fstrim -av inside my TemplateVMs is not making any practical difference in the end.
(On the other hand, maybe that's because this VM was already trimmed.)
Correct. You'll want to measure the data use, then perform a lot of create and delete file operations (in particular lots of small files), then measure data use again in between several reboots, then finally do the fstrim, shut down, and measure once more.
B
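For instance, a rough version of that test inside the TemplateVM might look like the following; the directory name, file count, and file size are arbitrary, and the private volume backs /home and /rw:

    # Inside the TemplateVM: churn lots of small files on the private volume.
    mkdir -p ~/trim-test
    for i in $(seq 1 20000); do head -c 4096 /dev/urandom > ~/trim-test/f$i; done
    sync
    rm -rf ~/trim-test

    # Compare Data% in dom0 (lvs as above), then trim and compare again:
    sudo fstrim -v /rw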
The problem you're addressing (if any)
In the default Qubes 4.0.x configuration, the thinly provisioned pool volumes depend upon the discard mechanism to return unused space in the volumes back to the pool. In addition, the ext4 volumes mounted within the VMs have the discard flag enabled, which allows the Linux VMs to communicate this down to the pool through the block-layer commands issued within the VMs.
However, the minimum I/O size (and hence the discard granularity) is based on the chunk size of the pool, which is never smaller than 64KB. In addition, if the pool itself is on a large physical volume, the chunk size is often larger (128KB on @tasket's system, 256KB on my system). This means that for file deletes whose contents are largely stored in contiguous runs smaller than the chunk size, fewer (or no) discards are issued, so the space remains allocated within the VM volume even though it is unused.
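For reference, both numbers can be checked directly; a small sketch, assuming the default pool is qubes_dom0/pool00 and the VM's root device is xvda (adjust names as needed):

    # In dom0: show the thin pool's chunk size.
    sudo lvs -o lv_name,chunk_size qubes_dom0/pool00

    # Inside a VM: the discard granularity the block layer reports for the root device.
    cat /sys/block/xvda/queue/discard_granularity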
Describe the solution you'd like
While a smaller chunk size would increase the utility of "discard on delete", @tasket and others have argued that there is a balancing act among the pool chunk size, metadata storage needs, and performance.
As a mitigation, I am proposing adding a default fstrim on the standard volumes of Qubes templates and derived VMs upon shutdown. Importantly, it should apply only to the ext4 volumes that are configured by default, and not to volumes that are added via the device widget (having manually invoked sudo fstrim -av with a 7.68TB SSD attached...).
One could make a case for exempting disposable VMs, particularly as their volatile storage will (should?) be discarded upon deletion from the pool anyway. However, the trim is generally a rather quick operation on smallish VM volumes, so it may not be necessary to exempt them.
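One possible shape for this is sketched below as a shutdown-time systemd unit inside the template; the unit name, ordering, and fstrim targets are assumptions for illustration, not an existing Qubes component:

    # /etc/systemd/system/qubes-fstrim-shutdown.service (hypothetical unit name)
    [Unit]
    Description=Trim the default Qubes volumes (/ and /rw) at VM shutdown
    # Keep /rw mounted until ExecStop has run (stop order reverses start order).
    RequiresMountsFor=/rw

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/true
    # ExecStop runs on the way down, trimming only the default ext4 volumes.
    ExecStop=/usr/sbin/fstrim -v /
    ExecStop=/usr/sbin/fstrim -v /rw
    TimeoutStopSec=300

    [Install]
    WantedBy=multi-user.target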
Where is the value to a user, and who might that user be?
All users would benefit from more precise availability of unused space on the device. In addition, for those who have also enabled trim through LUKS into the hardware device, these discards can assist the hardware device's provisioning/management of flash storage as well as provide (with some tradeoffs) additional anti-forensics capabilities (especially on properly configured SEDs).
Describe alternatives you've considered
Additional context
Thread: https://groups.google.com/forum/#!topic/qubes-users/qq_ElNPdx-g
Relevant documentation you've consulted
https://www.qubes-os.org/doc/disk-trim/
https://www.qubes-os.org/doc/tips-and-tricks/
Related, non-duplicate issues
#5053
#5054