lando-systems / website

Website: https://lando.systems

Set up new Digital Ocean VM #9

Open bploeckelman opened 11 months ago

bploeckelman commented 11 months ago

The easiest way to get the site tested and finalized is to spin up a new VM on Digital Ocean and set it up with Apache, a temporary domain, Let's Encrypt, etc. Once it's operational and we've verified everything works, we can point the existing domain at the new VM and spin down the old one (after archiving it, of course).

bploeckelman commented 11 months ago

VM Setup

Create a new droplet on Digital Ocean

Set up a non-root user account and add it to groups

A new droplet only has a root user account by default, so at least one non-root user account has to be created. It's worth adding the user to a couple of Linux groups as well, in order to access things like Docker commands and web server logs without sudo (and to the sudo group for administrative access).
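A minimal sketch of that setup, assuming Ubuntu; username is a placeholder, and the exact group list is an assumption based on the access described above:

$ sudo adduser username
$ sudo usermod -aG sudo,docker,www-data username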

Set up Cloud Firewall rather than UFW

New droplets come with UFW pre-configured to allow SSH access; droplets created from the Docker image also allow Docker API port access.

There's a relatively new feature available with Digital Ocean that makes the host-level UFW firewall unnecessary: Cloud Firewall. Digital Ocean's own instructions describe how to set one up.

UFW setup (default):

Cloud Firewall setup:

Once the Cloud Firewall is configured and attached to the droplet, all that's left is to disable UFW:

$ sudo ufw disable

Set up directory structure for the Apache httpd web server

The web server will run as a Docker container, which requires a couple of directories and files to be set up on the host machine and mapped into the container. Here are the host machine directories and their mappings:
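The original mapping list isn't captured above; this is a plausible sketch inferred from the rest of this issue (/srv/httpd/html is referenced later, the conf and logs paths are assumptions):

$ sudo mkdir -p /srv/httpd/html   # document root, mapped to /usr/local/apache2/htdocs
$ sudo mkdir -p /srv/httpd/conf   # httpd.conf and mime.types, mapped into /usr/local/apache2/conf
$ sudo mkdir -p /srv/httpd/logs   # mapped to /usr/local/apache2/logs so logs are readable on the host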

Set up file and folder permissions

We'll need a couple of adjustments to file and folder permissions so that host machine user accounts can access and work with the files and folders that are mapped into the web server container.

The httpd image uses www-data as its user and group by default, with UID/GID 33. Since the droplet already had this user and group set up with the correct UID/GID, the httpd.conf we're using leaves that setup in place.

In order to set up fine-grained permissions, install the ACL tools for Ubuntu:
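The install command isn't captured above; on Ubuntu the ACL tools come from the acl package:

$ sudo apt install acl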

Then set up ACLs for various folders to allow access to the www-data group, which the non-root user accounts that need to administer the server should be included in:
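A minimal sketch consistent with the note below (the exact commands weren't captured, and recursion is an assumption): grant the www-data group read/write/execute on the document root, and set the same ACL as the default so new files inherit it:

$ sudo setfacl -R -m g:www-data:rwx /srv/httpd/html
$ sudo setfacl -R -d -m g:www-data:rwx /srv/httpd/html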

Note that the default flag is applied to the ACL for /srv/httpd/html so that files copied into it from a GitHub Action inherit those permissions, rather than requiring the setfacl command to be re-run after each rsync into the folder.

Then, for more convenient editing of httpd config without sudo:
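Something along these lines (a hedged guess at the original command; the conf path is an assumption):

$ sudo setfacl -m g:www-data:rw /srv/httpd/conf/httpd.conf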

Set up domain name

Since this droplet is starting as a staging VM for lando.systems, I used a different domain name until we're ready to cut over. It doesn't really matter what the domain name is for these instructions, so I just use domain.tld as a placeholder.

These were the steps used to get the domain ready for use on this droplet.
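The individual steps aren't captured above; in broad strokes it means pointing the domain's nameservers at Digital Ocean and adding an A record for the droplet's IP. For example, with doctl (a sketch only; the control panel works just as well, and the IP is a placeholder):

$ doctl compute domain create domain.tld --ip-address 203.0.113.10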

Set up and start the httpd container

Setting up the httpd container via the Docker Compose compose.yaml file is straightforward; the only nuance is that we don't have the Let's Encrypt certificate or private key yet. Until we've run certbot for the first time, the fullchain.pem and privkey.pem mappings are commented out in compose.yaml, along with any SSL vhost config in httpd.conf.
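A minimal sketch of what that compose.yaml might look like; the image tag, ports, and host paths other than /srv/httpd/html are assumptions:

services:
  httpd:
    image: httpd:2.4
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /srv/httpd/html:/usr/local/apache2/htdocs
      - /srv/httpd/conf/httpd.conf:/usr/local/apache2/conf/httpd.conf
      - /srv/httpd/conf/mime.types:/usr/local/apache2/conf/mime.types
      # commented out until certbot has run for the first time:
      # - /etc/letsencrypt/live/domain.tld/fullchain.pem:/usr/local/apache2/conf/fullchain.pem
      # - /etc/letsencrypt/live/domain.tld/privkey.pem:/usr/local/apache2/conf/privkey.pem
    restart: unless-stopped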

It took a bit of trial and error to dial in the httpd.conf, but using daniel-ferradal-marquez/single-file-httpd.conf as a reference ended up working well. Daniel's readme in that repository explains the motivation extremely well and provides some direct links to helpful docs. It's a great resource!

While httpd normally splits much of its config out into separate files under /usr/local/apache2/conf/extra for different functional pieces (proxy config, SSL vhost, etc.), having all the httpd config for this relatively simple server in a single file works out much more nicely. I took the liberty of adding a lot of comments to the httpd.conf to better explain what all the weird directives and config values do. It was tedious but educational.

Strangely enough, the default httpd 2.4 Docker image didn't include a mime.types file, so I had to chase that down and include it from the host machine as well: mime.types

Since we know that certbot will expect non-SSL access to /.well-known for the domain validation request, the non-SSL port 80 vhost is configured to redirect all requests to the SSL port 443 vhost, except for requests to /.well-known and subdirectories.
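The relevant httpd.conf excerpt might look roughly like this (a sketch, not the actual config; it assumes mod_rewrite is loaded earlier in the file):

<VirtualHost *:80>
    ServerName domain.tld
    # let certbot's HTTP-01 challenge through, send everything else to HTTPS
    RewriteEngine On
    RewriteCond %{REQUEST_URI} !^/\.well-known/
    RewriteRule ^/?(.*)$ https://domain.tld/$1 [R=301,L]
</VirtualHost>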

With all that in place, spinning up the httpd container is trivial:
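The command isn't captured above, but with the compose file in place it should just be:

$ docker compose up -d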

Certbot uses plugins to simplify the setup for different web servers and OSes. We can't use the --apache plugin out of the box here because it delegates to the apache2ctl program, which isn't accessible outside of the httpd container without extra config that's probably not worth the hassle.

I ended up using --webroot because the only other option, --standalone, works by having certbot spin up its own temporary web server with port 80 and the /.well-known directory accessible for the verification challenge. Using that setup would require httpd to be spun down during renewal cycles and then back up after renewal so that httpd doesn't interfere with certbot's temporary web server. This would be a lousy setup since we want the renewal to be completely automated. The --webroot setup allows us to just point at the document root for httpd (in our case /srv/httpd/html) where we've already set up a /.well-known folder with the correct access permissions for the www-data group so that httpd can read and write to it.

Here are the remaining steps:
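The list itself isn't captured above, but based on the snap-based renew timer verified below, it presumably amounted to installing certbot via snap:

$ sudo snap install --classic certbot
$ sudo ln -s /snap/bin/certbot /usr/bin/certbot

then running certbot with the webroot plugin (shown below), and finally re-enabling the SSL mappings in compose.yaml and the SSL vhost in httpd.conf before recreating the container.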

This should result in the following output if it runs successfully:

$ sudo certbot certonly --webroot -w /srv/httpd/html -d domain.tld

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Requesting a certificate for domain.tld

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/domain.tld/fullchain.pem
Key is saved at:         /etc/letsencrypt/live/domain.tld/privkey.pem
This certificate expires on 2024-02-19.
These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate in the background.

From here we can verify that the certbot auto-renewal is scheduled:

$ systemctl list-timers
NEXT                         LEFT      LAST                         PASSED        UNIT                      ACTIVATES
Tue 2023-11-21 19:56:00 UTC  15h left  Tue 2023-11-21 02:46:01 UTC  1h 23min ago  snap.certbot.renew.timer  snap.certbot.renew.service

Set up website content

The only remaining step is to get the website's static content into the /srv/httpd/html directory on the host droplet.

This isn't done yet, but it should just be a matter of installing git and cloning the repository, either directly into the document root or via a symlink.

I'll update this issue with more details once I have that piece done too.

bploeckelman commented 11 months ago

Set up website content

Here's the approach I'm going to go with:

Example GitHub Actions deploy script:

name: Deploy to Digital Ocean VM

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout code
      uses: actions/checkout@v2

    - name: Set up SSH
      uses: webfactory/ssh-agent@v0.5.3
      with:
        ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

    - name: SSH and Deploy
      run: |
        ssh -o StrictHostKeyChecking=no user@your-vm-ip-address << 'EOF'
          cd /srv/httpd/website
          git pull origin main
          rsync -avz --chown=www-data:www-data --chmod=ug=rwx,o=rx --exclude '.git*' --delete /srv/httpd/website/public/ /srv/httpd/html/
          sudo setfacl -m g:www-data:rwx /srv/httpd/html
        EOF

A couple notes about this script:

Unknowns:

bploeckelman commented 11 months ago

Setup for GitHub Actions user

Create the user account on the VM and add it to the web server group
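A minimal sketch (the --disabled-password flag is an assumption, since the account will use key-based SSH auth set up next; the github username matches later references):

$ sudo adduser --disabled-password github
$ sudo usermod -aG www-data github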

Create a key pair for the github user, set file permissions, and add public key to authorized_keys:
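A sketch of those steps, run as the github user (the ed25519 key type is an assumption):

$ mkdir -p ~/.ssh && chmod 700 ~/.ssh
$ ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
$ cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/id_ed25519 ~/.ssh/authorized_keys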

Add the public key as a GitHub deploy key

Add the private key as a GitHub repository secret; in this case I named it SSH_PRIVATE_KEY.

Additional notes

Making the ACL on /srv/httpd/html the default for new files and folders should mean the github user doesn't need to run any sudo commands. Since the github user is in the www-data group, it should be able to copy files into /srv/httpd/html just fine.

I'll get the rest of this set up and tested out, then add more comments if there's other items that need adjustment.

Time passes...

It took a couple more iterations to wrap this bit up (file permissions, repo secrets, deploy script, etc.), but as of 1ec1326 the deploy script works. I'll add details about the required changes later.

bploeckelman commented 10 months ago

Manual config for running an rsync command in the GitHub deploy action never quite worked correctly. Switching it to use the up9cloud/action-rsync plugin seems to have resolved the issues.
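For reference, a hedged sketch of what the replacement workflow step looks like; the env names follow up9cloud/action-rsync's documented inputs, but treat them as assumptions to verify against that action's readme:

    - name: Deploy site via rsync
      uses: up9cloud/action-rsync@master
      env:
        HOST: your-vm-ip-address
        USER: github
        KEY: ${{ secrets.SSH_PRIVATE_KEY }}
        SOURCE: ./public/
        TARGET: /srv/httpd/html/
        ARGS: -avz --delete --exclude=.git*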