freedomofpress / securedrop-client

a Qt-based GUI for SecureDrop journalists 📰🗞️
GNU Affero General Public License v3.0

client fails to send replies in qubes when using run.sh #483

Closed sssoleileraaa closed 5 years ago

sssoleileraaa commented 5 years ago

Description

When you run the client via the dev script (run.sh) in the dev environment on Qubes, the client fails to encrypt and send replies. This happens whether or not you point --sdc-home at the ~/.securedrop-client directory, which contains config.json.

STR

  1. Create the virtual environment and pip install the packages in dev-requirements.txt (commands sketched below these steps)
  2. Execute ./run.sh --sdc-home ~/.securedrop-client
  3. Send a reply
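
Roughly, the steps above correspond to the following (a sketch; it assumes Python 3 and that the securedrop-client checkout is the current directory):

```
# Rough reproduction commands; a sketch, assuming Python 3 and that the
# securedrop-client checkout is the current directory.
python3 -m venv .venv
source .venv/bin/activate
pip install -r dev-requirements.txt

# Launch the client via the dev script, pointing it at the home directory
# that already contains config.json.
./run.sh --sdc-home ~/.securedrop-client
```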

Expected

Replies turn blue in the client, showing that they have been encrypted and sent.

Actual

Replies turn red, and the error log says the wrong journalist key is being used.

redshiftzero commented 5 years ago

Are you making any modifications at all to run.sh? Are you removing --no-proxy from the client invocation? Asking because run.sh won't use split-gpg or the Qubes RPC proxy (i.e. we have a lot of non-Qubes-specific logic in run.sh, and should probably either write another script to invoke the client on Qubes or just have people run python -m securedrop_client directly after creating the database).

sssoleileraaa commented 5 years ago

Are you making any modifications at all to run.sh? Are you removing --no-proxy from the client invocation?

Yes, I have --no-proxy removed from run.sh when running the client in Qubes.

run.sh won't use split-gpg

I see, so we won't be able to use the key stored in gpg on sd-gpg then.

either write another script to invoke the client on Qubes or just have people run python -m securedrop_client directly after creating the database

I'd like to try running python -m securedrop_client first to see if that works, but I prefer the separate script idea, a run-in-qubes.sh perhaps, so that we can remove --no-proxy and set the client up to use split-gpg. (Note: To create the db, one could run the run.sh script twice to get around https://github.com/freedomofpress/securedrop-client/issues/482)

What's nice is that so far the only thing that breaks on Qubes when running the client from the dev script is encrypting replies.

sssoleileraaa commented 5 years ago

It might make sense to set up sd-dev to mimic sd-svs at the same time we create a script for running the client in Qubes. It would be great if we could have a target like make dev that also copies config.json to sd-dev. I've been waiting until you got back to discuss this, @redshiftzero, so we can go over it with @conorsch et al. tomorrow during standup.

redshiftzero commented 5 years ago

Yeah I like that idea of having a separate run-in-qubes.sh (pretty minimal so not hard to maintain). I think the expectation from the developer should be (this is what we'd document in README.md):

  1. make all has been run. This means that they have most of the prerequisites: a ~/.securedrop_client directory in sd-svs, the submission key in sd-gpg, an RPC proxy pointing to the onion URL of the Journalist Interface, etc.
  2. We note that they can manually add a NetVM to sd-svs when needed, in order to be able to git clone and push branches, etc. (one doesn't strictly have to do this, i.e. one could qvm-copy one's git tree into a network-connected VM, but for development it's kind of annoying to have to do that, plus one then needs to create a virtualenv there).
  3. They create the virtualenv and activate it (dependencies are also installed by the securedrop-client package, but there may be additional dependencies required by the branch we are testing in our dev env, so we should not rely upon this).
  4. They run run-in-qubes.sh, which runs alembic upgrade head and then invokes the client (without --no-proxy); a rough sketch follows this list.
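
A rough sketch of what run-in-qubes.sh could look like (not the final script; it assumes the repo root as the working directory, an activated virtualenv, and a ~/.securedrop_client already provisioned as above):

```
#!/usr/bin/env bash
# Hypothetical run-in-qubes.sh; a sketch only, not the agreed-upon script.
# Assumes: run from the repo root in sd-svs, virtualenv already activated,
# ~/.securedrop_client provisioned by the packaged client, and the
# submission key reachable via split-gpg in sd-gpg.
set -euo pipefail

SDC_HOME="${SDC_HOME:-$HOME/.securedrop_client}"

# Bring the local database up to date.
alembic upgrade head

# Invoke the client WITHOUT --no-proxy so it uses the Qubes RPC proxy
# and split-gpg rather than the non-Qubes dev shortcuts.
exec python -m securedrop_client --sdc-home "$SDC_HOME" "$@"
```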

I think this is probably easier than adding an additional VM, since that would complicate our Salt provisioning; however, I'm open to discussing further (adding an sd-dev VM means step 2 isn't needed, but otherwise things are the same from the developer's perspective).

sssoleileraaa commented 5 years ago

Whether or not we use sd-dev or sd-svs, we need this run-in-qubes.sh script to solve the problem of replies failing to be encrypted on Qubes.

They run run-in-qubes.sh which runs alembic upgrade head and then invokes the client (without --no-proxy).

I like it! Eventually we may want to modify run.sh so we can tell it whether to use the dev journalist key or the one in our home dir, but I like that creating a new script helps prevent us from messing up our environment for non-Qubes development.

sssoleileraaa commented 5 years ago

one could qvm-copy one's git tree into a network-connected VM, but for development it's kind of annoying to have to do that, plus one then needs to create a virtualenv there

This process was strange to me when I tried it last week. I added a FileCopy policy so I could check out the code in sd-dev and move it to sd-svs, but then, as you say, you need to install virtualenv in order to get the dev dependencies. You could fetch the dev dependencies from within sd-dev and then copy them over, but I believe there are permission issues with that. I guess we could try to use pre-installed dependencies on sd-svs... maybe? (The policy tweak I mean is sketched below.)
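
For reference, the policy addition I mean is a single dom0 line along these lines (Qubes 4.0-style policy; the exact VM names and surrounding rules will vary per install):

```
# In dom0: /etc/qubes-rpc/policy/qubes.Filecopy (Qubes 4.0-style policy).
# Rules are matched top to bottom, so this allow line must appear before
# any broader ask/deny rule.
sd-dev  sd-svs  allow
```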

sssoleileraaa commented 5 years ago

I think this is probably easier than adding an additional VM, since that would complicate our Salt provisioning

I realize I don't know much about Salt provisioning, so I probably underestimate the work it would take to create and maintain the dev VM, but I was imagining we would use the same template as sd-svs for sd-dev. Then one would run make dev (after make all), which would create or update sd-dev and copy the journalist key into a ~/.securedrop-client folder on sd-dev.
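
Something like this dom0 helper (which a make dev target could call) is what I have in mind; it is entirely hypothetical, and the VM names, file names, and Salt target are guesses:

```
#!/usr/bin/env bash
# Hypothetical dom0 helper that a `make dev` target could call; a sketch
# only, with VM names, file names, and the Salt target all assumed.
set -e

# Create or update sd-dev using the same Salt states/template as sd-svs.
sudo qubesctl --targets sd-dev state.highstate

# Copy the dev config and journalist key from the dom0 checkout into sd-dev.
qvm-run --pass-io sd-dev 'mkdir -p ~/.securedrop-client'
qvm-run --pass-io sd-dev 'cat > ~/.securedrop-client/config.json' < config.json
qvm-run --pass-io sd-dev 'cat > ~/.securedrop-client/sd-journalist.sec' < sd-journalist.sec
```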

Hopefully me calling it sd-dev isn't creating confusion since I think many of us call the vm we create in order to get the salt provisioning code into qubes sd-dev. I'm actually purposefully using that name because it would be nice to just have one long-standing vm that we have scripts to maintain things like vm policies, etc. but that we don't re-create each time we make all.

I think some of the benefits of not using sd-svs for client development would be:

  1. It makes it slightly easier for developers to get started in Qubes if we set up the dev VM for them.

  2. It leaves sd-svs alone, so we know to use it only for testing the Debian package. I see us using sd-svs for testing builds and reproducing production bugs.

  3. make test won't fail because of the network permission check on sd-svs (I still need to test this and confirm it).

  4. It's easier to remember that all our code is cloned and checked out in sd-dev (that is, if we can use the same VM as the one we create for cloning securedrop-workstation).

sssoleileraaa commented 5 years ago

I'd like to try running python -m securedrop_client first to see if that works, but I prefer the separate script idea, a run-in-qubes.sh

Using python -m securedrop_client works!

And this removes having to remember to edit run.sh to drop --no-proxy. I will close #475 but leave this open so we can decide whether we'd like to move forward with creating a run-in-qubes script. Without it, I believe one would have to remember to first run the packaged securedrop-client (to create the home dirs and the svs.sqlite db in ~/.securedrop_client) and then run the unpackaged code via python -m securedrop_client.
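
Roughly, I mean this sequence (a sketch; the packaged launcher name is an assumption):

```
# Manual sequence without a run-in-qubes script; a sketch, and the packaged
# launcher name is an assumption.
# 1. Run the packaged client once so it creates the home dirs and the
#    svs.sqlite db under ~/.securedrop_client, then close it.
securedrop-client

# 2. From the repo checkout, with the dev virtualenv active, run the
#    unpackaged code against that same home directory.
python -m securedrop_client --sdc-home ~/.securedrop_client
```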

conorsch commented 5 years ago

Yeah I like that idea of having a separate run-in-qubes.sh (pretty minimal so not hard to maintain).

As mentioned in https://github.com/freedomofpress/securedrop-workstation/issues/288#issuecomment-514343296, we can support Qubes integration in the local client via run.sh with a few tweaks. It's a nit, but we needn't break it out into a separate script, either: we can simply infer that Qubes is the host OS based on, e.g., the presence of ^QUBES_ env vars (a rough sketch below). Will submit a POC for @creviera's review, to evaluate whether the changes are genuinely beneficial to client development.
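
A rough sketch of what that inference inside run.sh could look like (illustrative only; the actual POC may use a different check):

```
# Sketch of Qubes auto-detection in run.sh; illustrative only, the actual
# proof of concept may differ.
SDC_HOME="${SDC_HOME:-$HOME/.securedrop_client}"
CLIENT_FLAGS="--no-proxy"
if env | grep -q '^QUBES_'; then
    # Looks like a Qubes VM: drop --no-proxy so the client uses the
    # Qubes RPC proxy and split-gpg.
    CLIENT_FLAGS=""
fi

alembic upgrade head
exec python -m securedrop_client --sdc-home "$SDC_HOME" $CLIENT_FLAGS "$@"
```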