Open zander opened 6 years ago
I recently had a shift in device-mapper IDs that resulted in the wrong persistent LVM volume getting attached to a VM, which was a surprise and could be a security problem. If switching to UUIDs involves a lot of work, would it at least be easy to change LVM volume references from the "dom0:dm-99" format to a "dom0:escaped--lvname" format instead?
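For context, the dm-NN part of "dom0:dm-99" is just a kernel-assigned ordinal, while the device-mapper name is built from the VG and LV names (with any dash inside a name doubled), which is what the escaped format above would rely on. A rough dom0 sketch of mapping one to the other; dm-32 and the printed names are only examples:

# Resolve a dm-NN node back to its escaped device-mapper name (example device number).
cat /sys/block/dm-32/dm/name
# Same information via dmsetup; prints something like qubes_dom0-vm--work--private or Slow-bulk.
sudo dmsetup info -c --noheadings -o name /dev/dm-32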
I recently had a similar problem: LVM volumes external to Qubes come up in parallel with VMs and land on different dm-NN devices, racing with the allocation of devices to VMs. This usually causes the VM with the persistent attachment to fail to start with a nonexistent or inaccessible device. I do wonder whether a VM start will wait for a persistent device to come up. I also noticed that the frontend device spec is NOT optional for persistent attachments: if there is more than one, they all try to grab xvdi and the second one causes the VM start to fail. Startup order does not match the order of entry in dom0 either, so the documentation (man page) needs an update at least when this issue is fixed. Perhaps force provision of the frontend device for "qvm-block attach --persistent".
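For what it's worth, qvm-block in 4.0 appears to accept a frontend-dev option, so giving each persistent attachment its own frontend device might avoid the xvdi collision described above. A sketch assuming that option behaves as documented; the VM name and device numbers are made up:

# Pin distinct frontend devices so two persistent attachments do not both grab xvdi.
qvm-block attach --persistent work dom0:dm-32 -o frontend-dev=xvdj
qvm-block attach --persistent work dom0:dm-33 -o frontend-dev=xvdk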
This actually is on my personal TODO list. I just haven't had time to work on it. If someone is feeling brave enough he can ping me and I will help him solve the issue.
One other related issue: after a reboot there is no way to find out what the current persistent attachments are without trying to start the target AppVM, reading the device name from the error message, and detaching it, one by one (a rough workaround sketch follows the traceback below). This also seems to confuse the tray.devices widget; every time one of these starts fails due to an incorrect persistent block device attachment, the widget dies:
Traceback (most recent call last):
  File "/usr/lib64/python3.5/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/lib/python3.5/site-packages/qui/tray/devices.py", line 466, in <module>
    sys.exit(main())
  File "/usr/lib/python3.5/site-packages/qui/tray/devices.py", line 444, in main
    dispatcher.listen_for_events()))
  File "/usr/lib/python3.5/site-packages/gbulb/glib_events.py", line 736, in run_until_complete
    return future.result()
  File "/usr/lib64/python3.5/asyncio/futures.py", line 294, in result
    raise self._exception
  File "/usr/lib64/python3.5/asyncio/tasks.py", line 240, in _step
    result = coro.send(None)
  File "/usr/lib/python3.5/site-packages/qubesadmin/events/__init__.py", line 126, in listen_for_events
    yield from self._listen_for_events(vm)
  File "/usr/lib/python3.5/site-packages/qubesadmin/events/__init__.py", line 186, in _listen_for_events
    self.handle(subject, event, **kwargs)
  File "/usr/lib/python3.5/site-packages/qubesadmin/events/__init__.py", line 220, in handle
    handler(subject, event, **kwargs)
  File "/usr/lib/python3.5/site-packages/qui/tray/devices.py", line 214, in vm_shutdown
    self.device_detached(vm)
  File "/usr/lib/python3.5/site-packages/qui/tray/devices.py", line 232, in device_detached
    self.get_submenu().dev_detached(vm)
  File "/usr/lib/python3.5/site-packages/qui/tray/devices.py", line 118, in dev_detached
    menu_item = self.menu_items[vm]
KeyError: <qubesadmin.vm.QubesVM object at 0x7715543d6a20>
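As a rough workaround for listing the persistent attachments without starting any VM, the assignments can be read straight out of qubes.xml in dom0. This is only a sketch; it assumes the 4.0 layout in which each VM entry carries a devices element of class "block" with one child per assigned device:

# Show persistent block-device assignments recorded in qubes.xml (element names assumed).
sudo grep -A3 '<devices class="block"' /var/lib/qubes/qubes.xml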
Writing here because #4780 is closed, some work appears to have been done on the above post re tray.devices, and there is no open issue for it. I downloaded some current updates for dom0 a couple of days ago and noticed that tray.devices has been wrapped so it restarts itself whenever it dies. The problem is that it never updates its display, and I find myself having to kill it to force a refresh whenever I want to use it. The other part relates to qvm-device, as this issue does: the USB code seems to suffer from the same problem documented here. When sys-usb is restarted, the device label assigned to a device can change if there are many USB controllers, and a persistent attachment then fails because the device is no longer exposed by sys-usb under that label. A symbolic method of specifying attachments (like udev rules) is needed, just as it is for block devices. Device attachments do need a bit more work; for 4.1?
Writing here because #4780 is closed and it looks like some work was done on the above post re tray.devices and there is no open issue
FYI, you can still comment on closed issues. Closed issues will be reopened, if appropriate. Please see our issue reporting guidelines for more details.
Duplicate of #2272
Could someone please re-open this issue and mark it as a security bug? Getting the wrong block device attached to a VM after a reboot is a real problem. A temporary workaround would be to disable the --persistent flag of qvm-block until the UUID work (#2272) is completed, or at least to issue a warning when running qvm-block attach --persistent.
Reopening because as @doug1 points out this is in fact a security bug (a qube can get access to data it should not have access to).
will be solved by #9325
Qubes OS version:
Qubes 4.0RC3
Steps to reproduce the behavior:
Using qvm-block shows output like this:
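Something along these lines (the columns and names here are only illustrative): each device is listed as backend:devid, so an LVM volume shows up only under its current dm-NN number.

dom0:dm-32    Slow-bulk
dom0:sdb1     Some_USB_Disk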
Using qvm-block attach with the persistent flag alters qubes.xml, and the 'sdb1' (or similar) device name is what gets stored there. This is a design issue because drive numbers change across reboots.
In my own setup I created a new volume inside an already existing LVM thin pool and called it 'bulk' (in pool 'Slow'); for some reason qvm-block shows this as dm-32. After a reboot it showed up as dm-40, and another time as dm-35.
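One quick way to see which dm-NN a volume has landed on in the current boot is to look at its /dev/mapper symlink; the mapper name shown here is hypothetical, since it depends on the VG name:

# /dev/mapper names are stable symlinks to whichever dm-NN node the volume got this boot.
ls -l /dev/mapper/ | grep bulk
# e.g.  Slow-bulk -> ../dm-32   (the number changes across reboots, the name does not)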
Expected behavior:
Most distros have switched to UUID-based mount configuration. I think it's time for Qubes to follow and make sure that qubes.xml no longer has an 'id' attribute holding a non-unique identification number.
Please consider dropping the 'id' attribute and adding a 'uuid' attribute instead. See:
sudo blkid
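For illustration, blkid reports a stable filesystem UUID per device, which is what most distros now reference in fstab instead of a device name; the devices and UUID values below are made up:

/dev/sdb1: UUID="3f9c7d2e-1a4b-4c8e-9d21-6b0f2a7c5e11" TYPE="ext4"
/dev/mapper/Slow-bulk: UUID="7a1e4b90-33c2-47f5-8d6a-0c9e12f4b567" TYPE="ext4"
# a corresponding fstab-style entry, for comparison:
UUID=3f9c7d2e-1a4b-4c8e-9d21-6b0f2a7c5e11  /mnt/bulk  ext4  defaults  0 2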
Actual behavior:
Starting a qube whose persistent disk ID cannot be found gives a traceback in the journal.