mvysny / shepherd

Build & run apps automatically

Shepherd

Builds given git repos periodically and automatically deploys them to a Kubernetes cluster. Serves as a homebrew "replacement" for Heroku, to publish your own pet projects. Built with off-the-shelf tools: Kubernetes and Jenkins.

See the previous Vaadin Shepherd.

Adding Your Project To Shepherd

Tip: The shepherd-cli is a far easier way to add your projects. The approach described here also works, but it is more low-level: it requires you to write Kubernetes yaml config files and fiddle with Jenkins, and it is more error-prone. shepherd-cli calls this project anyway, but its project config file is far simpler.

Shepherd expects the following from your project:

  1. It must have a Dockerfile at the root of its git repo.
  2. The Docker image must build via the docker build --no-cache -t test/xyz:latest . command; the image must run via the docker run --rm -ti -p8080:8080 test/xyz command.
  3. You can now register the project to Shepherd. Continue to the "Adding a project" chapter below.

Generally, all you need is to place an appropriate Dockerfile in the root of your project's git repository. See the following projects for examples:

  1. Gradle+Embedded Jetty packaged as zip: vaadin-boot-example-gradle, vaadin14-boot-example-gradle, karibu-helloworld-application, beverage-buddy-vok, vok-security-demo
  2. Maven+Embedded Jetty packaged as zip: vaadin-boot-example-maven
  3. Maven+Spring Boot packaged as executable jar: Liukuri, my-hilla-app.

Maven+WAR

For a Maven+WAR project, please use the following Dockerfile:

# 1. Build the image with: docker build --no-cache -t test/xyz:latest .
# 2. Run the image with: docker run --rm -ti -p8080:8080 test/xyz

# The "Build" stage. Copies the entire project into the container, into the /app/ folder, and builds it.
FROM maven:3.9.1-eclipse-temurin-17 AS BUILD
COPY . /app/
WORKDIR /app/
RUN mvn -C clean test package -Pproduction
# At this point, the app WAR file is at /app/target/*.war
RUN mv /app/target/*.war /app/target/ROOT.war

# The "Run" stage. Start with a clean image, and copy over just the app itself, omitting Maven, node_modules and any intermediate build files.
FROM tomcat:10-jre17
COPY --from=BUILD /app/target/ROOT.war /usr/local/tomcat/webapps/
EXPOSE 8080

If your app fails to start, you can get the container logs by running:

$ docker exec -ti CONTAINER_ID /bin/bash
$ cat /usr/local/tomcat/logs/localhost.*

Vaadin Addons

Vaadin addons are set up in a bit of an anti-pattern way:

The downside is that there's no support for production builds, and running via mvn jetty:run requires Maven, the Maven repo and node_modules to be packaged in the Docker image, increasing its size.

The solution is to:

See #16 for more details; example project can be found at parttio/breeze-theme.

For addons that run via test-scoped Spring Boot, see the Dockerfile of the parttio/parity-theme example project.

Shepherd Internals

This documents how to get things running quickly on a cloud VM.

Shepherd needs/uses the following components:

Installing Shepherd

Get a VM with 8-16 GB of RAM running x86-64 Ubuntu; use the latest Ubuntu LTS. ssh into the machine as root & update it. Once you're in, we'll install and configure microk8s and Jenkins.

First, install a bunch of useful utility stuff, then enter byobu:

$ apt update && apt -V dist-upgrade
$ apt install byobu snapd curl vim fish
$ byobu
$ sudo update-alternatives --config editor     # select vim.basic

Then, set up the firewall, to shield ourselves during the follow-up installation steps. For example, Jenkins by default listens on all interfaces - we don't want that:

$ ufw allow ssh
$ ufw enable
$ ufw status

Jenkins

First, install Java since Jenkins depends on it:

$ apt install openjdk-11-jre

Then, install LTS Jenkins on Linux via apt. That way, Jenkins integrates with systemd and will start automatically when the machine is rebooted.

Check the logs to see that everything is okay: journalctl -u jenkins, journalctl -u jenkins -f. Now, edit the Jenkins config file via systemctl edit jenkins and add the following:

[Service]
Environment="JENKINS_LISTEN_ADDRESS=127.0.0.1"
Environment="JAVA_OPTS=-Djava.awt.headless=true -Xmx512m"

Restart Jenkins via systemctl restart jenkins.

ssh to the machine via ssh -L localhost:8080:localhost:8080 -L localhost:10443:localhost:10443 root@xyz, then access Jenkins via localhost:8080, then Configure Jenkins:

Docker

Then, install Docker and give the Jenkins user permission to run it:

$ apt install docker.io
$ usermod -aG docker jenkins
$ reboot

Microk8s

Install microk8s:

$ snap install microk8s --classic
$ microk8s disable ha-cluster --force
$ microk8s status

Disabling ha-cluster removes support for high availability & clustering but lowers the CPU usage significantly: #1577

Set up the firewall:

$ ufw allow in on cni0
$ ufw allow out on cni0
$ ufw default allow routed
$ ufw allow http
$ ufw allow https
$ ufw status
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere                  
Anywhere on vxlan.calico   ALLOW       Anywhere                  
Anywhere on cali+          ALLOW       Anywhere                  
Anywhere on cni0           ALLOW       Anywhere                  
80/tcp                     ALLOW       Anywhere                  
443                        ALLOW       Anywhere                  
22/tcp (v6)                ALLOW       Anywhere (v6)             
Anywhere (v6) on vxlan.calico ALLOW       Anywhere (v6)             
Anywhere (v6) on cali+     ALLOW       Anywhere (v6)             
Anywhere (v6) on cni0      ALLOW       Anywhere (v6)             
80/tcp (v6)                ALLOW       Anywhere (v6)             
443 (v6)                   ALLOW       Anywhere (v6)             

Anywhere                   ALLOW OUT   Anywhere on vxlan.calico  
Anywhere                   ALLOW OUT   Anywhere on cali+         
Anywhere                   ALLOW OUT   Anywhere on cni0          
Anywhere (v6)              ALLOW OUT   Anywhere (v6) on vxlan.calico
Anywhere (v6)              ALLOW OUT   Anywhere (v6) on cali+    

Install additional addons into microk8s and set up user access:

$ microk8s enable dashboard
$ microk8s enable dns
$ microk8s enable registry
$ microk8s enable ingress:default-ssl-certificate=v-herd-eu-welcome-page/v-herd-eu-ingress-tls
$ microk8s enable cert-manager
$ usermod -aG microk8s jenkins

Add the alias mkctl="microk8s kubectl" to ~/.config/fish/config.fish.

Verify that microk8s is running:

$ microk8s dashboard-proxy

(More commands & info at Playing with Kubernetes.)

Shepherd

To install Shepherd scripts, run:

$ cd /opt && sudo git clone https://github.com/mvysny/shepherd

Everything is now configured. To update Shepherd scripts, simply run

$ cd /opt/shepherd && sudo git pull --rebase

Enabling HTTPS/SSL

Follow the Certbot/Let's Encrypt tutorial at https://microk8s.io/docs/addon-cert-manager. The tutorial doesn't explain much, but it definitely works. An explanation is at Let's Encrypt HTTPS/SSL for Microk8s:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: lets-encrypt
spec:
  acme:
    email: my-email
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            class: public

We need to share one secret for v-herd.eu across multiple namespaces (multiple apps mapping via ingress to https://v-herd.eu/app1). All solutions are listed at Cert Manager: syncing secrets across namespaces.

We'll solve this by reconfiguring the nginx default certificate. First, we'll create a simple static webpage, which makes cert-manager obtain the certificate from Let's Encrypt and store the secret in v-herd-eu-welcome-page/v-herd-eu-ingress-tls: welcome-page.yaml:

$ mkctl apply -f welcome-page.yaml

After a while, https should start working; test it out at https://v-herd.eu.

We already registered the --default-ssl-certificate=v-herd-eu-welcome-page/v-herd-eu-ingress-tls option in the nginx-controller deployment when we enabled ingress above. You can verify that the configuration took effect by taking a look at the nginx-ingress-microk8s-controller DaemonSet in the microk8s Dashboard.

To configure the welcome page shown when browsing to https://v-herd.eu, go to the v-herd-eu-welcome-page/static-site-vol volume folder. The folder should be at /var/snap/microk8s/common/default-storage/v-herd-eu-welcome-page-static-site-pvc-*, see the microk8s storage docs for details. Example of the index.html can be found at #12.

After Installation

If left unchecked, Docker build images will consume all disk space. Add the following daily cron job to purge the images:

$ vim /etc/cron.daily/docker-prune
#!/bin/bash
set -e -o pipefail
docker system prune -f
$ chmod a+x /etc/cron.daily/docker-prune

Exposing Jenkins via https

First, change the Jenkins password to something stronger. Then, set this password in /etc/shepherd/java/config.json so that shepherd-cli can still manage the setup. Then, reconfigure the Jenkins context root to /jenkins as described at Jenkins behind reverse proxy.

We'll set up nginx to terminate https and proxy traffic to Jenkins. First, install nginx via sudo apt install nginx-full. Then, set up certificate retrieval as described at Let's Encrypt+Microk8s+nginx, chapter "nginx".

Edit /etc/nginx/sites-available/default and make it look like this:

server {
    listen 8443 ssl default_server;
    listen [::]:8443 ssl default_server;
    ssl_certificate /etc/nginx/secret/tls.crt;
    ssl_certificate_key /etc/nginx/secret/tls.key;

    server_name _;

    location /jenkins/ {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host v-herd.eu;
        proxy_set_header X-Forwarded-Port 8443;
        proxy_pass http://localhost:8080;
        # proxy_cookie_domain localhost $host;  # not necessary?? Maybe Jenkins produces correct cookies thanks to X-Forwarded or other settings
    }
}

Then, sudo systemctl reload nginx. Jenkins is now accessible at https://v-herd.eu:8443/jenkins.

Using Shepherd

This documents the most common steps to perform after Shepherd is installed.

Adding a project

First, decide on the project id, e.g. vaadin-boot-example-gradle. The project ID will go into the k8s namespace; a namespace must be a valid DNS name, which means that the project ID must:

Now call shepherd-new vaadin-boot-example-gradle 256Mi to create the project's k8s resource config yaml file (named /etc/shepherd/k8s/PROJECT_ID.yaml). See the chapter below for tips on the k8s yaml contents: mem/cpu, env variables, database, Vaadin monitoring, persistent storage, ...
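
Kubernetes enforces RFC 1123 DNS-label rules on namespace names: lowercase alphanumerics and '-', starting and ending with an alphanumeric, at most 63 characters. A quick local sanity check for a candidate project ID might look like this (a sketch only, not Shepherd's actual validation):

```shell
# Check a candidate project ID against RFC 1123 DNS-label rules:
# lowercase alphanumerics and '-', starts/ends alphanumeric, max 63 chars.
is_valid_project_id() {
    printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'
}

is_valid_project_id "vaadin-boot-example-gradle" && echo "valid"
is_valid_project_id "Has_Uppercase" || echo "invalid"
```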

Now, create a Jenkins job:

The shepherd-build builder will copy the resource yaml, modify the image hash, then run mkctl apply -f.
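
The substitution step can be sketched roughly as follows. This is an illustration only: the render_yaml helper is hypothetical, and the real logic lives in the shepherd scripts.

```shell
# Hypothetical sketch: substitute the <<IMAGE_AND_HASH>> placeholder in the
# project's resource yaml with the freshly built image reference.
render_yaml() {
    # $1 = path to /etc/shepherd/k8s/PROJECT_ID.yaml, $2 = image reference
    sed "s|<<IMAGE_AND_HASH>>|$2|g" "$1"
}

# Demo on a tiny fragment; the real flow would pipe the result to kubectl:
printf 'image: <<IMAGE_AND_HASH>>\n' > /tmp/render-demo.yaml
render_yaml /tmp/render-demo.yaml 'localhost:32000/test/xyz@sha256:abcd'
# prints: image: localhost:32000/test/xyz@sha256:abcd
```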

Optionally, add the following env variables to the shepherd-build:

Important: Make sure to use the "Server license key" and NOT the "Offline development license key": the offline dev key depends on the machine ID, which may change easily when building in Docker.

k8s resource file contents tips

To expose a project on additional DNS domain, add:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-custom-dns-vaadin3-fake
  namespace: shepherd-PROJECT_ID    # use the right app namespace! 
  annotations:
    cert-manager.io/cluster-issuer: lets-encrypt
spec:
  tls:
    - hosts:
      - vaadin3.fake
      secretName: vaadin3-fake-ingress-tls
  rules:
    - host: "vaadin3.fake"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service
                port:
                  number: 8080

Environment variables in Kubernetes:

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: main
          image: <<IMAGE_AND_HASH>>
          env:
          - name: VAADIN_OFFLINE_KEY
            value: "[contents of offlineKey file here]"

Adding a persistent PostgreSQL database

Please read Vaadin app with persistent PostgreSQL in Kubernetes for information on this setup. In short, add the following yaml to the app's Kubernetes config yaml file:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: shepherd-TODO
spec:
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 512Mi } }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql-deployment
  namespace: shepherd-TODO
spec:
  selector:
    matchLabels:
      app: postgres-pod
  template:
    metadata:
      labels:
        app: postgres-pod
    spec:
      volumes:
        - name: postgres-vol
          persistentVolumeClaim:
            claimName: postgres-pvc
      containers:
        - name: postgresql
          image: postgres:15.2
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              value: mysecretpassword
          resources:
            requests:
              memory: "2Mi"
              cpu: 0
            limits:
              memory: "128Mi"
              cpu: "500m"
          volumeMounts:
            - name: postgres-vol
              mountPath: /var/lib/postgresql/data
---
apiVersion: v1
kind: Service
metadata:
  name: postgres  # this will also be the DNS name of the VM running this service.
  namespace: shepherd-TODO
spec:
  selector:
    app: postgres-pod
  ports:
    - port: 5432
  1. Don't forget to change the shepherd-TODO namespace to the appropriate namespace of your app.
  2. Configure your app to connect to the jdbc:postgresql://postgres:5432/postgres URL, with the postgres username and mysecretpassword password.
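
For example, a Spring Boot app would pick up the database via the following datasource settings in application.properties (standard Spring Boot property names; adjust for your framework):

```
spring.datasource.url=jdbc:postgresql://postgres:5432/postgres
spring.datasource.username=postgres
spring.datasource.password=mysecretpassword
```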

Spring Security

Spring Security introduces a servlet filter which uses HTTP 302 to redirect the user to the login page. Unfortunately, that redirect is not rewritten by ingress by default, causing the app to redirect to https://v-herd.eu/login instead of https://v-herd.eu/yourapp/login. The fix is easy: just add the following rewrite rules to the Ingress metadata/annotations/ list of the yaml config file:

See #18 for more details.

More tips

TODO: Vaadin monitoring, ...

Manual changes to the project kubernetes yaml config file

The app Kubernetes config yaml file is located at /etc/shepherd/k8s/PROJECT_ID.yaml. You can freely edit the file, but the changes will only be applied automatically after the app is built in Jenkins. To apply the changes sooner, run the ./shepherd-apply PROJECT_ID script manually from bash.

Removing a project

Shepherd Administration

ssh to the machine with proper port forwarding:

$ ssh -L localhost:8080:localhost:8080 -L localhost:10443:localhost:10443 root@xyz
$ byobu
$ microk8s dashboard-proxy

Browse:

Shepherd API

Work in progress - will add more.

To list all projects, simply list the contents of the /etc/shepherd/k8s/ folder. There will be a bunch of yaml files corresponding to individual projects. The yaml naming is PROJECT_ID.yaml, so you can obtain the project ID from the yaml file name.

In other words:
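
A sketch of that, using a hypothetical list_projects helper:

```shell
# Hypothetical helper: print one project ID per yaml file in the config dir.
list_projects() {
    for f in "$1"/*.yaml; do
        [ -e "$f" ] || continue   # skip if the glob matched no files
        basename "$f" .yaml
    done
}

# Demo on a throwaway directory standing in for /etc/shepherd/k8s/:
dir=$(mktemp -d)
touch "$dir/my-hilla-app.yaml" "$dir/vaadin-boot-example-gradle.yaml"
list_projects "$dir"
# prints: my-hilla-app
#         vaadin-boot-example-gradle
```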

Misc

Troubleshooting

If you browse to the app and get an nginx 404:

If you browse to the app, it does nothing, and then you get an nginx 504:

If microk8s uses lots of CPU

More troubleshooting tips:

If you get No ED25519 host key is known for xyz.com and you have requested strict checking. in Jenkins:

When Build Fails

Configuration

Every project has its k8s resource configuration file in /etc/shepherd/k8s/:

Tips for CI
