QubesOS / qubes-issues

The Qubes OS Project issue tracker
https://www.qubes-os.org/doc/issue-tracking/

Proposal and code for instantaneously started disposable VMs #1512

Open qubesuser opened 8 years ago

qubesuser commented 8 years ago

The problem you're addressing (if any): Disposable VMs are very useful for their intended purpose, but one drawback is that using them is not instant, unlike other AppVMs: one needs to wait for the DispVM to load before using it (roughly 7-20 s depending on hardware).

Describe the solution you'd like: It would be great if there were an option to "preload" DispVMs (with the quantity defined by the user and limited by hardware specs), so that whenever you need a DispVM, the target program launches immediately.

Where is the value to a user, and who might that user be? It would be a great benefit in terms of speed and convenience, in proportion to how much the user relies on DispVMs.

Additional context: Current behaviour would be preserved: when launching a program in a DispVM from the Qubes menu, each call would use a different preloaded DispVM, with no reuse. Also, whenever a DispVM is "used", another one would be preloaded to keep the defined number always ready.


Original description:

Starting disposable VMs is faster than normal VMs, but it can often still take several seconds and be a noticeable delay in the user experience.

This proposes to solve the issue by keeping one or more disposable VMs always running, but without qubes-guid started and thus "invisible".

When the user requests a disposable VM, the system takes one of those cached disposable VMs, adjusts it if necessary, starts qubes-guid, and then starts another cached disposable VM for the next request.

This allows instantaneously started DispVMs at the cost of losing 1.5-6 GB of RAM, which can be a good tradeoff at least for machines with >= 16GB RAM.

There are two ways of doing this: the most flexible is to support any DispVM usage by starting the appropriate service on the cached DVM; the inflexible but faster way pre-starts the application as well, but only supports a limited number of DispVM applications started from dom0 (typically a web browser and a terminal).

My code implements the "inflexible" way and offers two modes: a faster "separate" mode that keeps around a DispVM for each configured application, and a slower but less RAM-hungry "unified" mode that keeps one DispVM with all the applications running and kills the ones not needed at user request.
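The caching scheme described above (hand out a ready VM, then immediately refill the pool) can be sketched as a toy Python model. All names here are hypothetical illustrations, not the actual branch code:

```python
from collections import deque

class DispvmPool:
    """Toy model of the caching scheme: a pool of pre-started
    (simulated) disposable VMs, refilled after every claim so a
    ready VM is always available with no boot delay."""

    def __init__(self, size, start_vm):
        self.size = size          # how many VMs to keep preloaded
        self.start_vm = start_vm  # callable that boots one DispVM (slow in reality)
        self.ready = deque(start_vm() for _ in range(size))

    def claim(self):
        # Hand out a preloaded VM instantly, then start a replacement
        # to keep the pool at its configured size.
        vm = self.ready.popleft()
        self.ready.append(self.start_vm())
        return vm

# Demo with a fake starter standing in for the real (slow) VM boot.
counter = iter(range(100))
pool = DispvmPool(size=2, start_vm=lambda: f"dispvm-{next(counter)}")
print(pool.claim())     # -> dispvm-0, taken from the pool with no delay
print(pool.claim())     # -> dispvm-1
print(len(pool.ready))  # -> 2, the pool was refilled after each claim
```

In the real system the refill (the second `start_vm` call) would happen asynchronously in the background; it is shown synchronously here only to keep the sketch short.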

You can find the implementation at: https://github.com/qubesuser/qubes-core-admin/tree/insta_dvm

You'll need to create a configuration file in /etc/qubes/dvms like the one provided in the branch. The mode is chosen automatically depending on available RAM, but can be configured in /etc/qubes/cached-dvm-mode.

The branch is missing packaging for qubes-start-cached-dvm and the dvms config file, systemd integration for starting it at boot, and making dom0 start menu entries use it.

It's also somewhat hackish overall and might need a rewrite in Python and adjustment to the new core code if shipped after that.

adrelanos commented 8 years ago

I recommend also opening a pull request against https://github.com/marmarek/qubes-core-admin/.

qubesuser commented 8 years ago

I haven't opened a pull request because I mostly wrote this for myself; while usable by others, it's not quite ready to ship since it's missing integration, and I'm not sure it's worth doing now as opposed to waiting for the new core code.

marmarek commented 8 years ago

It is definitely too late to have this in R3.1, so it may go into the next major version. Given the progress on core3, we'll probably skip R3.2 and go straight to R4.0 (with core3). But we probably won't manage to implement savefile-based DispVMs there in time, so this approach will be really useful, also for generic AppVMs (have some DispVM running without any application, and use it when requested). As for the Qubes core API, it would be very similar; the lack of a savefile would just mean that dispvm.start() takes somewhat longer.

qubesuser commented 8 years ago

BTW, there are potential anonymity issues, because the first actual use of the new VM happens at the same time as the disposable VM for the next request is started.

This means the two can be correlated if both are exploited, or from the network if starting a VM causes traffic correlated with subsequent traffic from actual use (I think this is mitigated by Tor rotating circuits every 10 minutes, but I'm not totally sure).

It may be a good idea to delay attaching a NetVM to avoid network-side correlation; avoiding uptime correlation might be possible by starting the VM with a fixed wall-clock time (e.g. the start of the Unix epoch), keeping it paused, and fixing the clock later.

adrelanos commented 8 years ago

This should be mitigated by stream isolation by source IP? (IsolateClientAddr)
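For reference, IsolateClientAddr is one of Tor's stream-isolation flags on SocksPort (it is enabled by default), meaning streams arriving from different client addresses behind the same Tor gateway are not shared on one circuit. A torrc line making it explicit might look like this (port number is illustrative):

```
# torrc on the Tor gateway qube: give streams from different client
# addresses their own circuits. IsolateClientAddr is already on by
# default; it is spelled out here only for illustration.
SocksPort 9050 IsolateClientAddr
```

Note this isolates different client VMs from each other, but does not by itself prevent timing correlation between VM startup traffic and first use, which is the concern raised above.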

shunju commented 6 years ago

I have just started using Qubes on my brand-new laptop, and I am amazed by what you guys have been putting together here. I assumed there would be a much steeper learning curve, which is why I had been putting Qubes off until… my old laptop broke.

The delays I face when opening something in a disposable VM are one of the few things that bother me moderately (the others being the heavy use of Fedora, a few GUI bugs that I'll report after exploring them in more detail, and the huge memory footprint; I will upgrade to 32 GB RAM soon, and never thought 16 GB might not be enough for me). I would love to see this feature implemented and am pleased to see it tagged as P:major, although it seems it hasn't made its way into 4.0 as originally planned (using 4.0-rc4 here).

I’m sorry to clutter up the issues page with this, but I really think this needs saying: Qubes rules and you guys are doing some really amazing work here! I’ll never go back to another operating system (used to use plain Debian beforehand). In two months, I’ll have some more money at my hands and intend to donate to the Qubes project regularly.

andrewdavidwong commented 5 years ago

@qubesuser, are you still working on this?

arkenoi commented 3 years ago

Wow, I came up with a similar idea but never implemented it. Nice to see someone has already tried; it would be a very useful feature!

andrewdavidwong commented 3 years ago

@qubesuser, are you still working on this?

I think it's fair to say that the answer is "no," so if anyone else would like to pick this up, please comment here.

arkenoi commented 3 years ago

Where is the code now? The original link is 404 :(

andrewdavidwong commented 3 years ago

Where is the code now? The original link is 404 :(

I have no idea, sorry. All I know is what's in this public issue. If that was the only copy of the code, it may no longer be available to us. :slightly_frowning_face:

arkenoi commented 2 years ago

That sucks :(( Does anyone have a copy, by any chance? I'm afraid it wouldn't fit the current code base without some adaptation anyway, but we could at least try..

UndeadDevel commented 9 months ago

I offer my humble attempt at implementing something like this: in my qubes-tools repo, click on the first Gist link (the repo can be used to cryptographically verify the scripts).

UndeadDevel commented 8 months ago

I want to clarify my previous post, since I've been assigned to this issue: my linked repo (some bash scripts designed for, among other things, this use case) was primarily meant to give people something like what this issue asks for, but it's not what I would consider a proper solution. A proper solution would require modifying some core Qubes OS code and likely making some design decisions, which is beyond me; e.g. one could change the qrexec policy specification to allow using @tag tags as qrexec policy targets, which could enable a more proper solution, but there are probably good reasons why that's not possible.
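To make the wish above concrete, here is a hypothetical policy fragment. The first line is NOT valid in the current policy language (per the comment above, tags are not usable as targets this way); the second shows a standard DispVM target that the R4.x policy format does support. Service name and tag name are made up for illustration:

```
# Hypothetical: route the call to some preloaded DispVM carrying a
# custom tag. NOT accepted by the current qrexec policy parser.
qubes.OpenURL  *  work  @tag:preloaded-dispvm  allow

# What the current format does support: dispatch to a fresh DispVM.
qubes.OpenURL  *  work  @dispvm  allow
```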

Furthermore, my solution relies very heavily on xdotool, which doesn't work under Wayland, and Wayland is where Qubes OS is headed (as I understand it, iGPU acceleration targets only Wayland, X11 is considered deprecated, and Wayland support is tentatively scheduled to appear in R4.3). I doubt the script(s) could easily be rewritten with Wayland tools while retaining functionality, though that's just from a very cursory look. I wrote them anyway, because it let me improve my bash skills, and because the R4.3 release date is nebulous, as is the question of whether proper iGPU support will be ready by then.

Bottom line: while I certainly wouldn't mind a review of my linked scripts, I'm not sure they are really adequate as a solution to this issue.