cfm opened 6 months ago
Instead of changing behavior based on hostname, we could use Qubes services instead. The only difference is `ConditionPathExists=` instead of `ConditionHost=`.
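For illustration, a minimal sketch of the two gating styles in a unit file (the service name `sd-app` here is just an example):

```ini
# Variant A: gate the unit on the qube's hostname
[Unit]
ConditionHost=sd-app

# Variant B: gate on a Qubes service flag enabled from dom0;
# Qubes populates /var/run/qubes-service/ at boot with one file
# per enabled service
[Unit]
ConditionPathExists=/var/run/qubes-service/sd-app
```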
Advantages:
Disadvantages:
- salt being salt: Salt is good at setting state, but not removing it. If we ever needed to disable a Qubes service, we'd need to explicitly remove it in the Salt configuration.
This started being tackled a while ago via https://github.com/freedomofpress/securedrop-workstation/pull/840/ and its cousins (https://github.com/freedomofpress/securedrop-builder/pull/396/ and https://github.com/freedomofpress/securedrop-client/pull/1677).
I can try to bring it back into reviewable state after discussing with @zenmonkeykstop, but first we should converge on a strategy. Should we advance with the original proposal of forking on hostname, or via Qubes services? Whichever way we decide, we should at least be consistent and document the practice.
Switching to Qubes services makes sense, @deeplow. Arguably it extends #1001's configuration injection to enabling services by analogy, which I like.
From https://github.com/freedomofpress/securedrop-workstation/issues/1004#issuecomment-2095970555:

> Disadvantages:
> - salt being salt: Salt is good at setting state, but not removing it. If we ever needed to disable a Qubes service, we'd need to explicitly remove it in the Salt configuration.
I guess I take this for granted for as long as we're using Salt in this way at all. :-)
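For concreteness, that explicit removal would look something like this in dom0 (a sketch; the service name `old-service` is hypothetical):

```sh
# dom0: Salt won't clean this up for us, so disabling must be explicit
qvm-service --disable sd-app old-service
# or drop the underlying feature entirely:
qvm-features --unset sd-app service.old-service
```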
I think I lean in the direction of using `ConditionHost`, because it better fits our goal of keeping in-VM stuff in packages and using Salt for dom0 things.
As a practical case: if we want to add a new "sd-log-whatever" service in a VM, we'd also have to land a corresponding workstation patch to enable the Qubes service through dom0, and gate the client release on the workstation one.
Re: immune to qube rename side-effects (if they ever happen): one consideration could be to set an `sd-app` Qubes service on the sd-app VM, an `sd-log` service on the sd-log VM, etc., and then all of the services in a VM can use `ConditionPathExists=/var/run/qubes-service/<VM name>`, which does feel a little less brittle than the hostname and also allows for a 1:1 mapping between systemd service and VM. I'm not sure that extra level of indirection is worth it, though.
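For illustration, that 1:1 mapping could be wired up from dom0 like this (a sketch; `qvm-service` is the stock dom0 tool):

```sh
# dom0: enable a per-qube service flag named after the qube itself
qvm-service --enable sd-app sd-app
qvm-service --enable sd-log sd-log
# each qube's units can then use, e.g.:
#   ConditionPathExists=/var/run/qubes-service/sd-app
```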
What if we instead call them with an `init` or `bootstrap` prefix (`init-sd-app` or `bootstrap-sd-app`)? In my mind that makes it a bit clearer what the service's goal is, because in reality this is a bit of an abuse of what Qubes services are for.
To clarify, that was just my suggestion if we wanted to work around the "qube rename side-effects" problem; I still prefer `ConditionHost`.

> Because in reality this is a bit of an abuse of what Qubes services are for.

Agreed.
> As a practical case: if we want to add a new "sd-log-whatever" service in a VM, we'd also have to land a corresponding workstation patch to enable the Qubes service through dom0, and gate the client release on the workstation one.

OK, I see now what you mean by an extra level of indirection. Even though this would be at most one service per qube, having this service stated across two repos adds unnecessary release overhead.

The counter-argument is that the VM name would now be set in two different repos. In theory, what a qube is called from the outside shouldn't matter inside it, but that's a wider discussion. So I am fine either way; other ideas may come up in the meeting we're having later.
> But other ideas may come up in the meeting we're having later.

Marek's point about wanting to set it in multiple VMs was pretty convincing to me. In theory we could do something like:

```ini
ConditionHost=|sd-app
ConditionHost=|sd-log
```

But I think that is less clean than a single `ConditionPathExists`. So I'm down to move forward with Qubes services, and if we end up running into problems, we can always revisit/adjust course.
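For comparison, covering multiple qubes with services needs no per-unit enumeration at all: dom0 enables one shared flag on each qube, and the unit carries a single condition (the service name `sd-logging` here is hypothetical):

```sh
# dom0: enable the same (hypothetical) service flag on several qubes
qvm-service --enable sd-app sd-logging
qvm-service --enable sd-log sd-logging
# the in-VM unit then needs only:
#   ConditionPathExists=/var/run/qubes-service/sd-logging
```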
To summarize some of the (new) arguments for the use of services (as opposed to hostnames):
One important detail that Marek noted when implementing these services is to order them before the qrexec agent. This ensures they run before the user's session and most other things.
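A hedged sketch of such a unit (the unit name, script path, and service flag below are hypothetical; `qubes-qrexec-agent.service` is the qrexec agent unit shipped by qubes-core-agent-linux):

```ini
# /usr/lib/systemd/system/sdw-provision.service (hypothetical)
[Unit]
Description=SecureDrop Workstation boot-time provisioning (sketch)
# Only run where dom0 has enabled the (hypothetical) sd-app service flag
ConditionPathExists=/var/run/qubes-service/sd-app
# Order before qrexec so provisioning finishes before the user session
# and anything dom0 drives over qrexec
Before=qubes-qrexec-agent.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/libexec/sdw/provision.sh

[Install]
WantedBy=multi-user.target
```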
From my calculations, the biggest bottleneck to provisioning is the need to provision files in the app qubes:

```
sd-gpg:
  - sd-gpg-files:
      echo "export QUBES_GPG_AUTOACCEPT=28800" | tee /home/user/.profile
      copy sd-journalist.sec
      sudo -u user gpg --import /home/user/.gnupg/sd-journalist.sec
sd-app:
  - sd-app-config:
      - copy config with specific submission key fingerprint (will be via qubesdb after https://github.com/freedomofpress/securedrop-client/pull/1883)
  - sd-mime-handling:
      ln -s /home/user/.local/share/applications/mimeapps.list /opt/sdw/mimeapps.list.{{ vm_name }}
      ln -s /home/user/.mailcap /opt/sdw/mailcap.default
sd-whonix:
  - sd-whonix-hidserv-key (will be moved to qubesdb: https://github.com/freedomofpress/securedrop-workstation/issues/1013#issuecomment-2088746620)
'sd-fedora-39-dvm,sys-usb':
  - match: list
  - sd-usb-autoattach-add
sd-viewer:
  - sd-mime-handling:
      ln -s /home/user/.local/share/applications/mimeapps.list /opt/sdw/mimeapps.list.{{ vm_name }}
      ln -s /home/user/.mailcap /opt/sdw/mailcap.default
sd-devices-dvm:
  - sd-mime-handling:
      ln -s /opt/sdw/mimeapps.list.{{ vm_name }} /home/user/.local/share/applications/mimeapps.list
      ln -s /home/user/.mailcap /opt/sdw/mailcap.default
sd-proxy:
  - sd-proxy-files:
      cp sd-proxy.yaml
  - sd-mime-handling:
      ln -s /home/user/.local/share/applications/mimeapps.list /opt/sdw/mimeapps.list.default
      ln -s /home/user/.mailcap /opt/sdw/mailcap.default
```
Secret provisioning is the one thing we can't avoid doing at provision time; everything else can go into templates/packages and be provisioned on boot. So for now my proposal would be:

Make disposable + provision via systemd + qubes services:
- sd-proxy
- sd-devices-dvm
- sd-viewer

Provision via systemd + qubes services:
- sd-app

Impact: 4 fewer qubes that need provisioning, with minor code changes.
> Make disposable + provision via systemd + qubes services:
> - sd-proxy

Once https://github.com/freedomofpress/securedrop-workstation/pull/1035 lands, the proxy is fully ready to be disposable! (I'm not sure why it has mime handling enabled; nothing in that VM should be opening other files...)
Wasn't there mime handling config added in sd-proxy specifically to avoid it opening files?
> Wasn't there mime handling config added in sd-proxy specifically to avoid it opening files?

Sidebar: ISTR Marek mentioning a better way to deny this kind of functionality than trying to compete with all the places mime handling could be introduced, and than having to specify every filetype, which has been a source of errors for us in the past. `qubes-core-agent-linux` does contain some mimetype overriding: it looks like they ship both `mime-override` and `xdg-override` (look in `/usr/share/qubes/` for the respective directories), so it would be cool if we could do something similar.
But in any case, for the purposes of this PR, I think we could either use the systemd approach that we're planning for other VMs, or just create the symbolic link to the "default" mime handling (which I think is just used for the proxy?) in the deb postinst and then override it in the other VMs.
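If we went the postinst route, a rough sketch (assuming `/home/user` exists at package-install time in the template, which would need checking):

```sh
# hypothetical deb postinst fragment: default to the "default"
# mime-handling profile; other qubes would override this symlink at boot
ln -sf /opt/sdw/mimeapps.list.default /home/user/.local/share/applications/mimeapps.list
```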
I have moved the mime-handling conversation to its own separate issue to keep this one focused on how to approach systemd provisioning in general. I hope that's OK. (I should have created that issue anyway, as I did for the logging one.)
> So for now my proposal would be:
>
> Make disposable + provision via systemd + qubes services:
> - sd-proxy
> - sd-devices-dvm
> - sd-viewer
>
> Provision via systemd + qubes services:
> - sd-app
Duh, I was forgetting that sd-devices and sd-viewer were already disposable. So only sd-proxy can become disposable.
`ConditionPathExists`-based systemd service that applies further configuration from QubesDB.
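A minimal sketch of what "applies further configuration from QubesDB" might look like inside that service (`qubesdb-read` is the stock in-VM tool; the key and destination paths are hypothetical):

```sh
#!/bin/sh
# Sketch: read a dom0-provided value from QubesDB at boot and persist it
fpr="$(qubesdb-read /vm-config/submission-key-fpr)" || exit 1
printf '%s\n' "$fpr" > /home/user/.securedrop/submission_key_fpr
```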
Description
@zenmonkeykstop asked this morning whether #1001 is sufficient for all VM-level configuration, not just keys and values. I think we'll still want to use systemd units with `ConditionHost` conditions to enable individual services based on the hostname configured by Salt (and enforced by dom0 tests).

How will this impact SecureDrop/SecureDrop Workstation users?
No user implications.
How would this affect the SecureDrop Workstation threat model?
Along with #1001, this assumes we are comfortable with runtime (boot-time) configuration of VMs' roles and services, except for secrets.
Tasks: