Builds given git repos periodically and automatically deploys them to a Kubernetes cluster. Serves as a homebrew "replacement" for Heroku, to publish your own pet projects. Built with off-the-shelf tools: Kubernetes and Jenkins.
See the previous Vaadin Shepherd.
Tip: The shepherd-cli is a far easier way to add your projects. This way also works, but is more low-level, requires you to write kubernetes yaml config files and fiddle with Jenkins, and is more error-prone. shepherd-cli calls this project anyway, but its project config file is far simpler.
Shepherd expects the following from your project:

- it has a `Dockerfile` at the root of its git repo;
- the image can be built via the `docker build --no-cache -t test/xyz:latest .` command;
- the image can be run via the `docker run --rm -ti -p8080:8080 test/xyz` command.

Generally, all you need is to place an appropriate `Dockerfile` at the root of your project's git repository.
See the following projects for examples:
For a Maven+war project, please use the following `Dockerfile`:
```dockerfile
# 1. Build the image with: docker build --no-cache -t test/xyz:latest .
# 2. Run the image with: docker run --rm -ti -p8080:8080 test/xyz

# The "Build" stage. Copies the entire project into the container, into the /app/ folder, and builds it.
FROM maven:3.9.1-eclipse-temurin-17 AS BUILD
COPY . /app/
WORKDIR /app/
RUN mvn -C clean test package -Pproduction
# At this point, we have the app WAR file at /app/target/*.war
RUN mv /app/target/*.war /app/target/ROOT.war

# The "Run" stage. Start with a clean image, and copy over just the app itself, omitting gradle, npm and any intermediate build files.
FROM tomcat:10-jre17
COPY --from=BUILD /app/target/ROOT.war /usr/local/tomcat/webapps/
EXPOSE 8080
```
If your app fails to start, you can get the container logs by running:
$ docker exec -ti CONTAINER_ID /bin/bash
$ cat /usr/local/tomcat/logs/localhost.*
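The official tomcat images also log to stdout, so the logs can often be checked without entering the container at all (a sketch; `docker logs` is standard Docker, and `CONTAINER_ID` comes from `docker ps`):

```shell
# Follow the container's stdout/stderr stream; Ctrl+C to stop.
docker logs -f CONTAINER_ID
```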
Vaadin addons are set up in a bit of an anti-pattern way:

- the demo app lives in the `src/test/` folder;
- it runs via `mvn jetty:run`.

The downside is that there's no support for production mode, and running via `mvn jetty:run` requires Maven + the Maven repo + `node_modules` to be packaged in the docker image, increasing its size.

The solution is to add a `Main.java` in `src/test/java/` which runs the app in Vaadin Boot. See #16 for more details; an example project can be found at parttio/breeze-theme.
For addons that run via test-scoped Spring Boot, see the Dockerfile
of the parttio/parity-theme example project.
This documents how to get things running quickly in a cloud VM.
Shepherd needs/uses the following components:

- microk8s, which runs the apps in a Kubernetes cluster;
- Jenkins, which periodically builds the git repos;
- the `shepherd-build` script, which builds a new docker image, uploads it to the microk8s registry and restarts the pods.

Get a VM with 8-16 GB of RAM and Ubuntu x86-64; use the latest Ubuntu LTS. ssh into the machine as root & update. Once you're in, we'll install and configure microk8s and Jenkins.
First, install a bunch of useful utility stuff, then enter byobu:
$ apt update && apt -V dist-upgrade
$ apt install byobu snapd curl vim fish
$ byobu
$ sudo update-alternatives --config editor # select vim.basic
Then, set up the firewall to shield ourselves during the follow-up installation steps. For example, Jenkins by default listens on all interfaces - we don't want that:
$ ufw allow ssh
$ ufw enable
$ ufw status
First, install Java since Jenkins depends on it:
$ apt install openjdk-11-jre
Then, install LTS Jenkins on Linux via apt. That way, Jenkins integrates with systemd and will start automatically when the machine is rebooted.
Check the log to see that everything is okay: `journalctl -u jenkins`, or follow it with `journalctl -u jenkins -f`.
Now, edit the Jenkins config file via `systemctl edit jenkins` and add the following:
[Service]
Environment="JENKINS_LISTEN_ADDRESS=127.0.0.1"
Environment="JAVA_OPTS=-Djava.awt.headless=true -Xmx512m"
Restart Jenkins via `systemctl restart jenkins`.
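To verify on the server that the override took effect (a sketch using standard systemd and iproute2 tools):

```shell
# Print the Environment= entries systemd passes to the Jenkins service:
systemctl show jenkins -p Environment
# Jenkins should now be bound to 127.0.0.1:8080, not 0.0.0.0:8080:
ss -tlnp | grep ':8080'
```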
ssh to the machine via `ssh -L localhost:8080:localhost:8080 -L localhost:10443:localhost:10443 root@xyz`, then access Jenkins via localhost:8080 and configure it:

- create the `admin` user, with the password `admin`. This is okay since we'll need ssh port-forwarding to access Jenkins anyway.

Then, install docker and add permissions for the Jenkins user to run it:
$ apt install docker.io
$ usermod -aG docker jenkins
$ reboot
Install microk8s:
$ snap install microk8s --classic
$ microk8s disable ha-cluster --force
$ microk8s status
Disabling `ha-cluster` removes support for high availability & clustering, but lowers the CPU usage significantly: #1577
Setup firewall:
$ ufw allow in on cni0
$ ufw allow out on cni0
$ ufw default allow routed
$ ufw allow http
$ ufw allow https
$ ufw status
Status: active
To Action From
-- ------ ----
22/tcp ALLOW Anywhere
Anywhere on vxlan.calico ALLOW Anywhere
Anywhere on cali+ ALLOW Anywhere
Anywhere on cni0 ALLOW Anywhere
80/tcp ALLOW Anywhere
443 ALLOW Anywhere
22/tcp (v6) ALLOW Anywhere (v6)
Anywhere (v6) on vxlan.calico ALLOW Anywhere (v6)
Anywhere (v6) on cali+ ALLOW Anywhere (v6)
Anywhere (v6) on cni0 ALLOW Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)
443 (v6) ALLOW Anywhere (v6)
Anywhere ALLOW OUT Anywhere on vxlan.calico
Anywhere ALLOW OUT Anywhere on cali+
Anywhere ALLOW OUT Anywhere on cni0
Anywhere (v6) ALLOW OUT Anywhere (v6) on vxlan.calico
Anywhere (v6) ALLOW OUT Anywhere (v6) on cali+
Install more stuff to microk8s and setup user access:
$ microk8s enable dashboard
$ microk8s enable dns
$ microk8s enable registry
$ microk8s enable ingress:default-ssl-certificate=v-herd-eu-welcome-page/v-herd-eu-ingress-tls
$ microk8s enable cert-manager
$ usermod -aG microk8s jenkins
Add the alias `mkctl="microk8s kubectl"` to `~/.config/fish/config.fish`.
Verify that microk8s is running:
$ microk8s dashboard-proxy
(More commands & info at Playing with Kubernetes.)
To install Shepherd scripts, run:
$ cd /opt && sudo git clone https://github.com/mvysny/shepherd
Everything is now configured. To update Shepherd scripts, simply run
$ cd /opt/shepherd && sudo git pull --rebase
Follow the Certbot/Let's Encrypt tutorial: https://microk8s.io/docs/addon-cert-manager. The tutorial doesn't explain much, but it definitely works. An explanation can be found at Let's Encrypt HTTPS/SSL for Microk8s:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: lets-encrypt
spec:
  acme:
    email: my-email
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: public
```
We need to share one secret for `v-herd.eu` across multiple namespaces (multiple apps mapping via ingress to https://v-herd.eu/app1 etc.). All solutions are listed at Cert Manager: syncing secrets across namespaces. We'll solve this by reconfiguring the nginx default certificate.

First we'll create a simple static webpage which makes cert-manager obtain the certificate from Let's Encrypt and store the secret to `v-herd-eu-welcome-page/v-herd-eu-ingress-tls`: welcome-page.yaml:
$ mkctl apply -f welcome-page.yaml
After a while, https should start working; test it out: https://v-herd.eu.
We already registered the `--default-ssl-certificate=v-herd-eu-welcome-page/v-herd-eu-ingress-tls` option in the `nginx-controller` deployment when we enabled `ingress` above. You can verify that the configuration took effect by taking a look at the `nginx-ingress-microk8s-controller` DaemonSet in the microk8s Dashboard.
To configure the welcome page shown when browsing to https://v-herd.eu, go to the `v-herd-eu-welcome-page/static-site-vol` volume folder. The folder should be at `/var/snap/microk8s/common/default-storage/v-herd-eu-welcome-page-static-site-pvc-*`; see the microk8s storage docs for details. An example of the `index.html` can be found at #12.
If unchecked, docker build images will consume all disk space. Add the following cron daily job to purge the images:
$ vim /etc/cron.daily/docker-prune
#!/bin/bash
set -e -o pipefail
docker system prune -f
$ chmod a+x /etc/cron.daily/docker-prune
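You can ask run-parts (which Debian/Ubuntu cron uses to execute these scripts) to list what would run, without executing anything:

```shell
# --test lists the eligible scripts in /etc/cron.daily; docker-prune should appear.
run-parts --test /etc/cron.daily
```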
First, change the Jenkins password to something stronger. Then, set this password in `/etc/shepherd/java/config.json` so that `shepherd-cli` can still manage the setup. Then reconfigure the Jenkins context root to `/jenkins` as described at Jenkins behind reverse proxy.
We'll set up nginx to unwrap https and redirect traffic to Jenkins. First, install nginx via `sudo apt install nginx-full`. Then, set up certificate retrieval as described at Let's Encrypt+Microk8s+nginx, chapter "nginx".
Edit /etc/nginx/sites-available/default
and make it look like this:
```nginx
server {
    listen 8443 ssl default_server;
    listen [::]:8443 ssl default_server;
    ssl_certificate /etc/nginx/secret/tls.crt;
    ssl_certificate_key /etc/nginx/secret/tls.key;
    server_name _;
    location /jenkins/ {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host v-herd.eu;
        proxy_set_header X-Forwarded-Port 8443;
        proxy_pass http://localhost:8080;
        # proxy_cookie_domain localhost $host; # not necessary?? Maybe Jenkins produces correct cookies thanks to X-Forwarded or other settings
    }
}
```
Then, `sudo systemctl reload nginx`. Jenkins is now accessible at https://v-herd.eu:8443/jenkins.
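A quick smoke-test from your dev machine (a sketch; it assumes DNS for v-herd.eu points at this VM, and note that the ufw rules above only open 80 and 443, so port 8443 must also be allowed, e.g. via `ufw allow 8443/tcp`):

```shell
# Expect an HTTP status line (e.g. 200, or a redirect to the Jenkins login page).
curl -sSI https://v-herd.eu:8443/jenkins/ | head -n 1
```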
Documents the most common steps after Shepherd is installed.
First, decide on the project id, e.g. `vaadin-boot-example-gradle`. The project ID will go into the k8s namespace; a namespace must be a valid DNS name, which means that the project ID must:

- contain only lowercase alphanumeric characters or `-`;
- start and end with an alphanumeric character;
- be at most 54 characters long (63 minus the `shepherd-` prefix).

Now call `shepherd-new vaadin-boot-example-gradle 256Mi` to create the project's k8s resource config yaml file (named `/etc/shepherd/k8s/PROJECT_ID.yaml`).
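The naming rules above can be sketched as a small shell check (the `valid_project_id` helper is ours for illustration, not part of Shepherd; the 54-character limit assumes the `shepherd-` prefix counts against the 63-character DNS label limit):

```shell
# Returns 0 when the proposed project ID is a valid DNS label
# (lowercase alphanumerics or '-', alphanumeric at both ends, <= 54 chars).
valid_project_id() {
  local id="$1"
  [ "${#id}" -le 54 ] || return 1   # 63 chars minus the "shepherd-" prefix
  printf '%s\n' "$id" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}

valid_project_id vaadin-boot-example-gradle && echo valid
```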
See the chapter below for tips on the k8s yaml contents: mem/cpu, env variables, database, Vaadin monitoring, persistent storage, ...
Now, create a Jenkins job:

- build schedule: `H/5 * * * *`
- build step (shell): `export BUILD_MEMORY=1500m && /opt/shepherd/shepherd-build vaadin-boot-example-gradle`

The `shepherd-build` builder will copy the resource yaml, modify the image hash, then `mkctl apply -f` it.
Optionally, add the following env variables to the `shepherd-build` call:

- `BUILD_MEMORY`: (optional) how much memory the build image will get. Defaults to `1024m`; for Gradle use `1500m`.
- `BUILD_ARGS`: (optional) for example, add `export BUILD_ARGS='--build-arg offlinekey=foo'` to Jenkins. Then add `ARG offlinekey` and `ENV VAADIN_OFFLINE_KEY=$offlinekey` to your `Dockerfile`, to pass in the Vaadin offline key to perform the production build with.
- `DOCKERFILE`: (optional) alternative name of the Dockerfile.

Important: make sure to use the "Server license key" and NOT the "Offline development license key": the offline dev key depends on the machine ID, which may change easily when building in Docker.
Tips for the k8s resource yaml contents:

- the app runs in the `shepherd-PROJECT_ID` namespace;
- to change the app's memory/cpu limits, edit the `spec.template.spec.containers[0].resources.limits` part.

To expose a project on an additional DNS domain, add:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-custom-dns-vaadin3-fake
  namespace: shepherd-PROJECT_ID  # use the right app namespace!
  annotations:
    cert-manager.io/cluster-issuer: lets-encrypt
spec:
  tls:
  - hosts:
    - vaadin3.fake
    secretName: vaadin3-fake-ingress-tls
  rules:
  - host: "vaadin3.fake"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service
            port:
              number: 8080
```
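After applying the yaml, you can watch cert-manager obtain the certificate (a sketch; the `Certificate` resource is created by cert-manager's ingress-shim and is named after the `secretName` above, and `mkctl` is the alias defined earlier):

```shell
mkctl -n shepherd-PROJECT_ID get certificate
mkctl -n shepherd-PROJECT_ID describe certificate vaadin3-fake-ingress-tls
```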
Environment variables in Kubernetes:
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: main
        image: <<IMAGE_AND_HASH>>
        env:
        - name: VAADIN_OFFLINE_KEY
          value: "[contents of offlineKey file here]"
```
Please read the Vaadin app with persistent PostgreSQL in Kubernetes for information on this setup. In short, add the following yaml to the app's kubernetes config yaml file:
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: shepherd-TODO
spec:
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 512Mi } }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql-deployment
  namespace: shepherd-TODO
spec:
  selector:
    matchLabels:
      app: postgres-pod
  template:
    metadata:
      labels:
        app: postgres-pod
    spec:
      volumes:
      - name: postgres-vol
        persistentVolumeClaim:
          claimName: postgres-pvc
      containers:
      - name: postgresql
        image: postgres:15.2
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_PASSWORD
          value: mysecretpassword
        resources:
          requests:
            memory: "2Mi"
            cpu: 0
          limits:
            memory: "128Mi"
            cpu: "500m"
        volumeMounts:
        - name: postgres-vol
          mountPath: /var/lib/postgresql/data
---
apiVersion: v1
kind: Service
metadata:
  name: postgres  # this will also be the DNS name of the VM running this service.
  namespace: shepherd-TODO
spec:
  selector:
    app: postgres-pod
  ports:
  - port: 5432
```
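Once applied, a quick way to check that the database accepts connections (a sketch; `mkctl` is the alias defined above, the deployment name matches the yaml, and `shepherd-TODO` must be replaced with your app's namespace):

```shell
# Run psql inside the postgres pod; should print a one-row result set.
mkctl -n shepherd-TODO exec deploy/postgresql-deployment -- \
  psql -U postgres -c 'SELECT 1'
```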
Don't forget to change the `shepherd-TODO` namespace to the appropriate namespace of your app. The app can then access the database via the `jdbc:postgresql://postgres:5432/postgres` URL, with the `postgres` username and the `mysecretpassword` password.

Spring Security introduces a servlet filter which uses HTTP 302 to redirect the user to the login page.
Unfortunately that redirect is not rewritten by ingress by default, causing the app to redirect to https://v-herd.eu/login instead of https://v-herd.eu/yourapp/login. The fix is easy - just add the following rewrite rules to the Ingress `metadata/annotations/` list of the yaml config file:

- `nginx.ingress.kubernetes.io/proxy-redirect-from`: `https://v-herd.eu/`
- `nginx.ingress.kubernetes.io/proxy-redirect-to`: `https://v-herd.eu$1`
See #18 for more details.
TODO: Vaadin monitoring, ...
The app Kubernetes config yaml file is located at `/etc/shepherd/k8s/PROJECT_ID.yaml`. You can freely edit the file, but the changes will only be applied automatically after the app is built in Jenkins. To apply the changes faster, run the `./shepherd-apply PROJECT_ID` script manually from bash.

To remove the app's resources from Kubernetes, run:

$ mkctl delete -f /etc/shepherd/k8s/PROJECT_ID.yaml
ssh to the machine with proper port forwarding:
$ ssh -L localhost:8080:localhost:8080 -L localhost:10443:localhost:10443 root@xyz
$ byobu
$ microk8s dashboard-proxy
Browse the microk8s dashboard via https://localhost:10443 (the port forwarded above).
Work in progress - will add more.
To list all projects, simply list the contents of the `/etc/shepherd/k8s/` folder. There will be a bunch of yaml files corresponding to individual projects. The yaml naming is `PROJECT_ID.yaml`, so you can obtain the project ID from the yaml file name.

In other words, a project `foo`:

- will have a file named `/etc/shepherd/k8s/foo.yaml`;
- will run at `https://v-herd.eu/foo`.
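That convention can be sketched as a one-liner (the `list_projects` helper is ours, for illustration):

```shell
# Derive project IDs by stripping the .yaml suffix from the file names.
list_projects() {
  for f in "$1"/*.yaml; do
    [ -e "$f" ] || continue   # skip the literal glob when the dir is empty
    basename "$f" .yaml
  done
}

list_projects /etc/shepherd/k8s
```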
If you browse to the app and get nginx 404:

- check whether the ingress `Endpoints` shows `127.0.0.1`;
- if not, remove the `kubernetes.io/ingress.class: nginx` annotation from your ingress yaml, then remove the ingress rule via `mkctl delete -f` and add it back. The endpoint should change to `127.0.0.1` in a couple of seconds.

If you browse to the app, it does nothing, and then you get nginx 504:
- check `ufw`: disable it temporarily and see whether it helps;
- then `ufw enable` and browse the app again - this usually helps;
- if not, try `ufw disable && ufw reset`, then re-add all the rules, then `ufw enable`.

If microk8s uses lots of CPU:

- make sure you have disabled `ha-cluster`: #8

More troubleshooting tips:
If you get `No ED25519 host key is known for xyz.com and you have requested strict checking.` in Jenkins:

- verify via the `ssh-keygen -l -F xyz.com` command whether the host key is already on your dev machine;
- ssh to `v-herd.eu`, then run `sudo su jenkins`, then `ssh xyz.com`. ssh will print the key fingerprint; if it matches, press y. The ssh command may fail, but the key is now stored in `/var/lib/jenkins/.ssh/known_hosts`.

If you get `java.text.ParseException: Invalid JWT serialization: Missing dot delimiter(s)`: the `VAADIN_OFFLINE_KEY` env variable may be empty. Make sure that you have `ARG offlinekey` in your `Dockerfile` and that you're passing the contents of the key correctly to `shepherd-build` using `BUILD_ARGS` as described above.

`Unexpected exit value: 137` means that the build needs more memory. Increase the value of the `BUILD_MEMORY` env variable passed to `shepherd-build`.

Configuration: every project has its k8s resource configuration file in `/etc/shepherd/k8s/`. Images are named `localhost:32000/shepherd/project_id` and are built via:

$ docker build --no-cache -t localhost:32000/shepherd/project_id -m 1500m --cpu-period 100000 --cpu-quota 200000 --build-arg key=value .