Closed · WillyPillow closed this 4 years ago
It seems sufficient to simply treat an input length of zero as a special case in `admin.vm.volume.Import`, though I am not sure if this is an ideal solution.
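For illustration, the special case being proposed might look roughly like this (a sketch only; the function and parameter names are made up here and do not come from qubes-core-admin):

```python
def validate_import_size(volume_size: int, received: int) -> None:
    # Hypothetical sketch of the proposed special case, not actual
    # qubes-core-admin code.
    if received == 0:
        # Proposed: treat zero-length input as a deliberate "clear the
        # volume" request rather than as truncated data.
        return
    if received != volume_size:
        raise ValueError(
            f'truncated data: got {received} bytes, expected {volume_size}')
```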
No, this error check was introduced specifically to avoid treating truncated input (including zero-length input) as valid data; in most cases, truncated input means something went wrong on the sending side.
I tried replacing the import call with resizing the volume to zero and then back to the old size. However, the relevant qrexec call does not seem to support shrinking volumes.
Yes, that should work. There is an `admin.vm.volume.ImportWithSize` call that should do it (`qvm-volume import --size=0 ...`); you can then resize back to the desired size.
Are you trying it on R4.1 or R4.0? On R4.1 there is an `ImportWithSize` call that should not have this limitation.
Just to be clear: do not try to resize down to 0 manually. Try to import zero-sized content using `qvm-volume import --size=0` (or the `import_data_with_size(..., size=0)` function if you're doing that from Python).
> Just to be clear: do not try to resize down to 0 manually. Try to import zero-sized content using `qvm-volume import --size=0` (or the `import_data_with_size(..., size=0)` function if you're doing that from Python).
Indeed, I originally attempted to do this manually. (I soon realized what you meant and removed the previous comment.) I'll try this out and create a PR later.
Calling `import_data_with_size(..., size=0)` yields a permission-denied error.
Poked around a bit and it seemed that this was due to https://github.com/QubesOS/qubes-core-admin/blob/master/qubes/api/internal.py#L101 requiring the size to be greater than zero.
Even when that line is removed, the underlying `lvcreate` does not seem to support a `--virtualsize` of zero (nor sizes that are not multiples of 512).
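For example, a caller working within these constraints would have to round any requested size up to an acceptable value; a minimal sketch (the 512-byte granularity is taken from the observation above, the helper itself is made up):

```python
def lvm_safe_size(requested: int, sector: int = 512) -> int:
    """Round a size up to something lvcreate accepts: at least one
    512-byte sector, and a multiple of the sector size."""
    if requested <= 0:
        return sector
    return ((requested + sector - 1) // sector) * sector

assert lvm_safe_size(0) == 512
assert lvm_safe_size(513) == 1024
```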
Using something like the following works, but feels pretty hackish.
```python
import subprocess

# Stream 512 zero bytes (the smallest size lvcreate accepts) as the payload.
proc = subprocess.Popen(['head', '-c', '512', '/dev/zero'], stdout=subprocess.PIPE)
old_size = vm.volumes['private'].size
vm.volumes['private'].import_data_with_size(stream=proc.stdout, size=512)
# Grow the volume back to its original size after the import shrank it to 512 bytes.
vm.volumes['private'].resize(old_size)
proc.wait()
```
Hmm, in this case perhaps the best way is to introduce a new call like `admin.vm.volume.Erase` (or `.Clear`)?
I can think of many other hackish solutions (like sending the full volume size of `/dev/zero`), but that doesn't sound nice either.
For LVM, shall we consider ensuring that `blkdiscard` happens when erase/clear of an existing volume is invoked (anti-forensics)?
Similarly, if resize-to-smaller ever ends up being supported, the same consideration applies to the removed portion of the volume.
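Purely as a hypothetical sketch of what such an erase/clear call might ensure on the LVM side (neither this function nor this flow exists in qubes-core-admin; only the `blkdiscard` tool itself is real):

```python
import subprocess

def clear_lvm_volume(device_path: str) -> None:
    # Discard all blocks of the old volume so its previous contents
    # cannot easily be recovered from the underlying pool
    # (anti-forensics); the storage driver would then recreate the
    # volume empty.
    subprocess.run(['blkdiscard', device_path], check=True)
```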
> For LVM, shall we consider ensuring that `blkdiscard` happens when erase/clear of an existing volume is invoked (anti-forensics)?
My understanding from reading the source code is that this happens automatically as long as I reuse the import mechanism internally, so hopefully not much extra work needs to be done for this.
**Qubes OS version**
R4.1

**Affected component(s) or functionality**
`qvm-template-postprocess`
**Brief summary**
`reset_private_img` raises an error (see Additional context below for the cause).

**To Reproduce**
Reinstall an already-installed template RPM.
**Expected behavior**
Private volume is cleared.

**Actual behavior**
The aforementioned error occurs.
**Additional context**
Ran into this issue while testing the {reinstall,{down,up}grade} operations of `qvm-template` (QubesOS/qubes-core-admin-client#145). Upon further inspection, it seems that the `admin.vm.volume.Import` call requires the payload stream to be of the same size as the volume, causing the error. (Commit that introduced the check: https://github.com/QubesOS/qubes-core-admin/commit/e9b97e42b1dffe7b1d7317d73fcfa912f724d016)

**Solutions you've tried**
I tried replacing the import call with resizing the volume to zero and then back to the old size. However, the relevant qrexec call does not seem to support shrinking volumes.
It seems sufficient to simply treat an input length of zero as a special case in `admin.vm.volume.Import`, though I am not sure if this is an ideal solution.

**Relevant documentation you've consulted**
#5192
**Related, non-duplicate issues**
None.