Droplet specs:
- Ubuntu 22 (LTS), Docker 24 (upgraded from 23)
- 1 AMD vCPU, 1GB RAM, 25GB NVMe storage
A new droplet only has a root user account by default, so at least one non-root user account has to be created. There are a couple of Linux groups worth adding the user to as well, in order to access things like Docker commands and web server logs without sudo (in addition to adding the user to the sudoers group).
$ adduser brian
$ usermod -aG sudo brian      # for unlimited power
$ usermod -aG docker brian    # for docker access
$ usermod -aG www-data brian  # for httpd access

SSH public key(s) for the new user go in /home/brian/authorized_keys.
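For completeness, here's a minimal sketch of getting a public key into place for the new user. This assumes the key was added to root's account when the droplet was created and that sshd is using its usual ~/.ssh/authorized_keys location (adjust the path if the droplet is configured differently):

```
# run as root: copy root's authorized_keys to the new user, fix ownership and permissions
mkdir -p /home/brian/.ssh
cp /root/.ssh/authorized_keys /home/brian/.ssh/authorized_keys
chown -R brian:brian /home/brian/.ssh
chmod 700 /home/brian/.ssh
chmod 600 /home/brian/.ssh/authorized_keys
```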
New droplets come with UFW pre-configured to allow SSH access, and in the case of a droplet created from the Docker image, Docker API port access as well.
There's a relatively new feature available with Digital Ocean that makes the UFW firewall unnecessary: Cloud Firewall. These instructions describe both setups:

- UFW setup (default): 22 (SSH), 2375, 2376 (Docker)
- Cloud Firewall setup: 22 (SSH), 80, 443 (HTTP/S) inbound; ICMP, all TCP, all UDP outbound

Once the Cloud Firewall is configured and attached to the droplet, all that's left is to disable UFW:

$ sudo ufw disable
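A quick sanity check afterwards confirms UFW is out of the picture:

```
$ sudo ufw status
Status: inactive
```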
The web server will be run as a Docker container, and it will require a couple of directories and files set up on the host machine to be mapped into the container. Here are the host machine directories and their mappings:

- /srv/httpd: top level folder for httpd server related stuff, including the compose.yaml used to configure the httpd container
- /srv/httpd/html: web root for static content -> /usr/local/apache2/htdocs
  - /srv/httpd/html/.well-known is used by Let's Encrypt / certbot to perform its domain validation request
- /srv/httpd/conf: main httpd config file -> /usr/local/apache2/conf
- /srv/httpd/logs: host directory for httpd container logs -> /usr/local/apache2/logs
- Let's Encrypt certificate mappings, but we'll get back to that in the certs section

We'll need a couple of adjustments to file and folder permissions so that host machine user accounts can access and work with the files and folders that are mapped into the web server container.
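Before the permissions work, the host-side layout itself is just one command (a quick sketch, using the paths above):

```
$ sudo mkdir -p /srv/httpd/html/.well-known /srv/httpd/conf /srv/httpd/logs
```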
The httpd image uses www-data as the user and group by default, with UID/GID 33, and since the droplet already had this user and group set up with the correct UID/GID, the httpd.conf we're using left that setup in place.
In order to set up fine-grained permissions, install the ACL tools for Ubuntu:
$ sudo apt install acl
Then set up ACLs for various folders to allow access to the www-data group, which the non-root user accounts that need to administer the server should be included in:
$ sudo setfacl -m g:www-data:rwx /srv/httpd
$ sudo setfacl -m default:g:www-data:rwx /srv/httpd/html
$ sudo setfacl -m g:www-data:rwx /srv/httpd/logs
$ sudo setfacl -m g:www-data:rx /srv/httpd/conf
Note that the default setting is applied to the ACL for /srv/httpd/html so that files copied into it from a GitHub Action will have those permissions associated with them, rather than having to re-run the setfacl command after running rsync to copy files into the folder.
Then, for more convenient editing of the httpd config without sudo:

$ sudo setfacl -m u:brian:rwx /srv/httpd/conf
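To sanity-check the results, getfacl shows both the access entries and the default entries created above:

```
$ getfacl /srv/httpd       # should list group:www-data:rwx
$ getfacl /srv/httpd/html  # should also list default:group:www-data:rwx
```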
Since this droplet is starting as a staging VM for lando.systems, I used a different domain name until we're ready to cut over. Since it doesn't really matter what the domain name is for these instructions, I just use domain.tld as a placeholder. These were the steps used to get the domain ready for use on this droplet:
- At the registrar, point the domain's name servers to ns[1,2,3].digitalocean.com
- In the Digital Ocean control panel, go to networking -> domains and set up the DNS records:
  - NS records: ns[1,2,3].digitalocean.com (which should be set up by default)
  - A record: point to the droplet IP (or use @ for the domain name root)
  - A record(s) for any subdomains: foo.domain.tld, none in this case
  - CNAME record(s) for any wildcard aliases to existing A records (for sub-subdomains I think?)
- Verify the DNS setup:
  - $ dig domain.tld -> should show an A record pointing to the droplet's public IP
  - $ dig NS domain.tld -> should show the NS records pointing to the Digital Ocean name servers

httpd container
Setting up the httpd container via the Docker Compose compose.yaml file is straightforward, with the only nuance being that we don't have the Let's Encrypt certificates or private key yet. So until we've run certbot the first time, the fullchain.pem and privkey.pem mappings are commented out in compose.yaml, and we also comment out any SSL vhost config in httpd.conf.
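For reference, here's a minimal sketch of what such a compose.yaml can look like given the mappings listed earlier. It's not a copy of the actual file: the image tag, restart policy, and the container-side paths for the cert files are assumptions, and the cert lines stay commented out until certbot has run:

```yaml
services:
  httpd:
    image: httpd:2.4
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /srv/httpd/html:/usr/local/apache2/htdocs
      - /srv/httpd/conf:/usr/local/apache2/conf
      - /srv/httpd/logs:/usr/local/apache2/logs
      # re-enable these after the first certbot run (see the certs section)
      #- /etc/letsencrypt/live/domain.tld/fullchain.pem:/usr/local/apache2/conf/certs/fullchain.pem:ro
      #- /etc/letsencrypt/live/domain.tld/privkey.pem:/usr/local/apache2/conf/certs/privkey.pem:ro
```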
It took a bit of trial and error to dial in the httpd.conf, but using daniel-ferradal-marquez/single-file-httpd.conf as a reference ended up working well. Daniel's readme in that repository explains the motivation extremely well and provides some direct links to helpful docs. It's a great resource!
While httpd normally splits much of its config out into modules under /usr/local/apache2/conf.d for different functional pieces (proxy config, SSL vhost, etc...), having all the httpd config for this relatively simple server in a single file works out much nicer. I took the liberty of adding a lot of comments to the httpd.conf to better explain what all the weird directives and config values do. It was tedious but educational.
Strangely enough, the default httpd 2.4 Docker image didn't include a mime.types file, so I had to chase that down and include it from the host machine as well: mime.types
Since we know that certbot will expect non-SSL access to /.well-known for the domain validation request, the non-SSL port 80 vhost is configured to redirect all requests to the SSL port 443 vhost, except for requests to /.well-known and its subdirectories.
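That redirect-everything-except-/.well-known behavior can be expressed with mod_rewrite; a minimal sketch (not necessarily the exact directives used in the repo's httpd.conf, with domain.tld still just a placeholder):

```apache
# requires mod_rewrite to be loaded
<VirtualHost *:80>
    ServerName domain.tld

    # leave /.well-known/ reachable over plain HTTP for the certbot challenge,
    # redirect everything else to the SSL vhost
    RewriteEngine On
    RewriteCond %{REQUEST_URI} !^/\.well-known/
    RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
</VirtualHost>
```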
With all that in place, spinning up the httpd container is trivial:
$ pushd /srv/httpd
$ docker compose up -d
Prerequisites:
Certbot uses plugins to simplify the setup for different web servers and OSes. We can't use the --apache plugin out of the box here because it delegates to the apache2ctl program, which isn't accessible outside of the httpd container without extra config that's probably not worth the hassle.
I ended up using --webroot because the only other option, --standalone, works by having certbot spin up its own temporary web server with port 80 and the /.well-known directory accessible for the verification challenge. Using that setup would require httpd to be spun down during renewal cycles and then brought back up afterwards so that httpd doesn't interfere with certbot's temporary web server. That would be a lousy setup since we want renewal to be completely automated. The --webroot setup allows us to just point at the document root for httpd (in our case /srv/httpd/html), where we've already set up a /.well-known folder with the correct access permissions for the www-data group so that httpd can read and write to it.
Here are the remaining steps. Install certbot:

$ sudo snap install --classic certbot
$ sudo ln -s /snap/bin/certbot /usr/bin/certbot

Then run certbot's first-time cert generation script:

$ sudo certbot certonly --webroot -w /srv/httpd/html -d domain.tld
This should result in the following output if it runs successfully:
$ sudo certbot certonly --webroot -w /srv/httpd/html -d domain.tld
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Requesting a certificate for domain.tld
Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/domain.tld/fullchain.pem
Key is saved at: /etc/letsencrypt/live/domain.tld/privkey.pem
This certificate expires on 2024-02-19.
These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate in the background.
From here we can verify that the certbot auto-renewal is scheduled:
$ systemctl list-timers
> Tue 2023-11-21 19:56:00 UTC 15h left Tue 2023-11-21 02:46:01 UTC 1h 23min ago snap.certbot.renew.timer snap.certbot.renew.service
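Also worth noting before moving on: once the certs exist, the fullchain.pem / privkey.pem volume mappings and the SSL vhost config that were commented out earlier need to be re-enabled and the container recreated. A sketch of that flow:

```
$ cd /srv/httpd
# uncomment the cert mappings in compose.yaml and the SSL vhost in httpd.conf, then:
$ docker compose up -d --force-recreate
```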
The only remaining step is to get the website's static content into the /srv/httpd/html directory on the host droplet. This isn't done yet, but it should just be a matter of installing git and cloning the repository either directly in the document root or maybe via symlink. I'll update this issue with more details once I have that piece done too.

Here's the approach I'm going to go with:

- Set up a deploy user and SSH key pair, with the public key added to authorized_keys on the VM
- Add a .github/workflows/deploy.yml file to run automated deployments on the VM

Example GitHub Actions deploy script:
name: Deploy to Digital Ocean VM

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Set up SSH
        uses: webfactory/ssh-agent@v0.5.3
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: SSH and Deploy
        run: |
          ssh -o StrictHostKeyChecking=no user@your-vm-ip-address << 'EOF'
          cd /srv/httpd/website
          git pull origin main
          rsync -avz --chown=www-data:www-data --chmod=ug=rwx,o=rx --exclude '.git*' --delete /srv/httpd/website/public/ /srv/httpd/html/
          sudo setfacl -m g:www-data:rwx /srv/httpd/html
          EOF
A couple notes about this script:

- rsync for efficient file copying (skipping unchanged files, etc...)
  - -avz: archive mode, verbose, compress data during transfer
  - --chown: change file/folder ownership so the web server can serve it
  - --exclude: don't copy the .git folder and contents... TODO: only copy /public rather than excluding .git and others
  - --delete: deletes files from the target directory that weren't part of the copy... TODO: may want to leave this out too in case we add stuff manually that shouldn't be deleted
- setfacl so that httpd can access the content in the document root
Unknowns:

- umask or ACL settings for the runner?

Create the user account on the VM and add it to the web server group:
$ sudo adduser github
$ sudo usermod -aG www-data github
Create a key pair for the github user, set file permissions, and add the public key to authorized_keys:
$ sudo mkdir /home/github/.ssh
$ sudo -u github ssh-keygen -t ed25519 -C "GitHub Actions"
$ sudo cat /home/github/.ssh/id_ed25519.pub | sudo tee /home/github/.ssh/authorized_keys > /dev/null
$ sudo chmod 700 /home/github/.ssh
$ sudo chmod 600 /home/github/.ssh/id_ed25519
$ sudo chmod 644 /home/github/.ssh/id_ed25519.pub
$ sudo chmod 600 /home/github/.ssh/authorized_keys
Add the public key as a GitHub deploy key.
Add the private key as a GitHub repository secret; in this case I named it SSH_PRIVATE_KEY.
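Before wiring up the workflow, it's worth a quick manual check that key-based login works for the new user (the IP and key path below are placeholders; run it from any machine that has the private key):

```
$ ssh -i /path/to/github-deploy-key github@your-vm-ip-address 'whoami && groups'
# expect: github, with a groups list that includes www-data
```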
Updating the ACL on /srv/httpd/html to be the default ACL for new files and folders should make it so the github user doesn't need to run any sudo commands. Since the github user is in the www-data group it should be able to copy files into /srv/httpd/html just fine.
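A quick way to test that assumption (throwaway filename, just for the check): if the touch below fails, the directory's own access ACL probably still needs the group entry in addition to the default ACL, i.e. setfacl -m g:www-data:rwx /srv/httpd/html like the deploy script runs:

```
$ sudo -u github touch /srv/httpd/html/acl-test
$ getfacl /srv/httpd/html/acl-test   # should show a group:www-data entry inherited from the default ACL
$ sudo -u github rm /srv/httpd/html/acl-test
```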
I'll get the rest of this set up and tested out, then add more comments if there are other items that need adjustment.
It took a couple more iterations to wrap this bit up (file permissions, repo secrets, deploy script, etc...), but as of 1ec1326 the deploy script works. I'll add details about the required changes later.
Manual config for running an rsync command in the GitHub deploy action never quite worked correctly. Switching it to use the up9cloud/action-rsync plugin seems to have resolved the issues.
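For reference, a sketch of what that step might look like with the action. The input names here are from memory and should be double-checked against the action's README before copying, and the HOST, USER, SOURCE, and TARGET values are placeholders:

```yaml
- name: Deploy static content
  uses: up9cloud/action-rsync@v1
  env:
    HOST: your-vm-ip-address
    USER: github
    KEY: ${{ secrets.SSH_PRIVATE_KEY }}
    SOURCE: ./public/
    TARGET: /srv/httpd/html/
    ARGS: -avz --delete --exclude '.git*'
```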
The easiest way to get the site tested and finalized is just to spin up a new VM on Digital Ocean and set it up with Apache, a temporary domain, Let's Encrypt, etc. Once it's operational and we've verified everything works, just point the existing domain to the new VM and spin down the old one (after archiving, of course).