Terraform module for standing up a SecureDrop staging environment at Equinix Metal (fka Packet).
**WARNING:** Using this module and keeping the resulting server running will incur costs (see below).
Define a `terraform.tfvars` like:

```
# REQUIRED:
auth_token = "your Equinix Metal API token here"
metro      = "two-letter metro code here"  # https://metal.equinix.com/developers/api/metros/
project    = "name of your configured Equinix Metal project here"

# OPTIONAL:
plan = "if you want something other than c3.small.x86"  # https://metal.equinix.com/developers/api/plans/
```
After you've run `terraform init && terraform apply`, you should see your server's IP address in the output. After `cloud-init` has completed, you can start a session like so:
```
$ terraform init
[...]
$ terraform apply
[...]
metal_device.sd-staging: Still creating... [2m0s elapsed]
metal_device.sd-staging: Still creating... [2m10s elapsed]
metal_device.sd-staging: Still creating... [2m20s elapsed]
metal_device.sd-staging: Creation complete after 2m22s [id=04baac1e-f733-4a97-8d5e-470aa6d6d483]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

ip_address = "<your IP>"

$ ssh -L 5900:localhost:5902 root@<your IP>
[...]
root@sd-staging:~# virsh list
 Id   Name                                State
---------------------------------------------------
 1    libvirt-staging-focal_app-staging   running
 2    libvirt-staging-focal_mon-staging   running
 4    tails                               running

root@sd-staging:~# virsh domdisplay tails
127.0.0.1:2
```
If you used the SSH invocation above, your Tails domain will be available via Spice at `localhost:5900`, with the Spice password `tails`. You can use a Spice client like `vinagre` (connect using the SPICE protocol).
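For example, with `remote-viewer` from the virt-viewer package (an alternative client; any Spice client will do):

```
$ remote-viewer spice://localhost:5900
```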
The `app-staging` applications (the Source and Journalist Interfaces) will be reachable via the same instructions used to connect to any other SecureDrop staging environment.
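For instance, assuming the usual staging arrangement in which the Source and Journalist Interfaces listen on ports 8080 and 8081 of the `app-staging` VM, you could forward them through the Equinix Metal host (look up the VM's address with `virsh domifaddr` first):

```
root@sd-staging:~# virsh domifaddr libvirt-staging-focal_app-staging
[...]
$ ssh -L 8080:<app-staging IP>:8080 -L 8081:<app-staging IP>:8081 root@<your IP>
```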
You can also use this setup as the basis for a "production VM" installation of SecureDrop: a production installation with all of the system hardening active, but virtualized rather than running on hardware. To do so, pick up the instructions from the section "Install from an Admin Workstation VM". First, provision the production VMs alongside the existing staging VMs:
```
$ ssh -L 5900:localhost:5902 root@<your IP>
[...]
root@sd-staging:~# cd securedrop
root@sd-staging:~/securedrop# source .venv/bin/activate
(.venv) root@sd-staging:~/securedrop# molecule create -s libvirt-prod-focal
```
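Once that completes, you can confirm that the `app-prod` and `mon-prod` domains are running alongside the staging ones:

```
root@sd-staging:~# virsh list --all
```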
Then follow the rest of the instructions from the Tails domain, over the Spice connection described above. You'll probably find it convenient to fetch Vagrant's base-box private key for SSH from within the Tails domain, e.g.:
```
amnesia@amnesia:~$ wget -O .ssh/id_rsa https://raw.githubusercontent.com/hashicorp/vagrant/main/keys/vagrant
amnesia@amnesia:~$ chmod 600 .ssh/id_rsa
```
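With that key in place, you can sanity-check SSH connectivity to a production VM from the Tails domain before running the admin tooling (get the address from `virsh domifaddr`, as in the `sdconfig` notes below):

```
amnesia@amnesia:~$ ssh -i .ssh/id_rsa vagrant@<app-prod IP>
```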
**NOTE:** You must configure Tails persistence before `securedrop-admin setup`, even if you don't actually require your `securedrop` clone to persist across reboots of the Tails domain (for example, during one-off testing). Without persistence configured, the `setup` action will fill up the Tails RAM disk (with the recommended 2 GB of RAM), and the domain is likely to lock up.
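A quick way to confirm that persistence is actually unlocked before running `setup` (assumption: the mount point below is where current Tails releases mount persistent storage):

```
amnesia@amnesia:~$ findmnt /live/persistence/TailsData_unlocked
```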
For `securedrop-admin sdconfig`, you'll need to be ready with the following values:
```
amnesia@amnesia:~$ ./securedrop-admin sdconfig
Username for SSH access to the servers: vagrant
[...]
Local IPv4 address for the Application Server: # from "virsh domifaddr libvirt-prod-focal_app-prod"
Local IPv4 address for the Monitor Server: # from "virsh domifaddr libvirt-prod-focal_mon-prod"
Hostname for Application Server: app-prod
Hostname for Monitor Server: mon-prod
[...]
```
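The `virsh domifaddr` queries referenced above are run on the Equinix Metal host, not in Tails, e.g.:

```
root@sd-staging:~# virsh domifaddr libvirt-prod-focal_app-prod
root@sd-staging:~# virsh domifaddr libvirt-prod-focal_mon-prod
```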
To reset the staging or production VM scenarios, you'll need to do a bit of cleanup, e.g.:
```
root@sd-staging:~# cd securedrop
root@sd-staging:~/securedrop# source .venv/bin/activate
(.venv) root@sd-staging:~/securedrop# molecule destroy -s libvirt-prod-focal
(.venv) root@sd-staging:~/securedrop# virsh undefine libvirt-prod-focal_app-prod
(.venv) root@sd-staging:~/securedrop# virsh undefine libvirt-prod-focal_mon-prod
(.venv) root@sd-staging:~/securedrop# virsh vol-delete --pool default libvirt-prod-focal_app-prod.img
(.venv) root@sd-staging:~/securedrop# virsh vol-delete --pool default libvirt-prod-focal_mon-prod.img
```
Then you can redo:
```
(.venv) root@sd-staging:~/securedrop# molecule create -s libvirt-prod-focal
```
You can use `journalctl [-f]` to check on the progress of `cloud-init`.
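For example (the `status` subcommand assumes a reasonably recent cloud-init):

```
root@sd-staging:~# journalctl -f
root@sd-staging:~# cloud-init status
```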
By default, each instance of this module provisions a ~~`t1.small.x86`~~ `c3.small.x86` (alas) server at ~~$0.07~~ $0.50 per hour.
A running instance therefore costs:
| Period  | Cost   |
|---------|--------|
| Hourly  | $0.50  |
| Daily   | $12.00 |
| Monthly | ~$365  |
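When you're done, remember to tear the server down to stop the charges:

```
$ terraform destroy
```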