Deeployer's goal is to allow you to describe an environment (be it a dev or a prod environment) in a simple JSON file.
You write a single "deeployer.json" file and you can deploy easily to a Kubernetes cluster, or to a single server with docker-compose.
Deeployer's goal is not to be 100% flexible (you have Kubernetes for that), but rather to ease the deployment process for developers who do not necessarily master the intricacies of Kubernetes.
It aims to automate a number of common tasks, such as backup setup and reverse proxy declaration.
The Deeployer config file contains the list of containers that make up your environment:
deeployer.json
{
  "$schema": "https://raw.githubusercontent.com/thecodingmachine/deeployer/master/deeployer.schema.json",
  "version": "1.0",
  "containers": {
    "mysql": {
      "image": "mysql:8.0",
      "ports": [3306],
      "env": {
        "MYSQL_ROOT_PASSWORD": "secret"
      }
    },
    "phpmyadmin": {
      "image": "phpmyadmin/phpmyadmin:5.0",
      "host": {
        "url": "phpmyadmin.myapp.localhost",
        "containerPort": 80
      },
      "env": {
        "PMA_HOST": "mysql",
        "MYSQL_ROOT_PASSWORD": "secret"
      }
    }
  }
}
TODO: add volumes when ready
Let's have a closer look at this file.
The first line is optional:
"$schema": "https://raw.githubusercontent.com/thecodingmachine/deeployer/master/deeployer.schema.json",
(TODO: migrate the URL to a static website)
It declares the JSON Schema of the file. We highly recommend keeping this line: if you are using an IDE like Visual Studio Code or a JetBrains IDE, you will get auto-completion and validation of the file's structure right in your IDE!
Then, the "containers" section contains the list of containers for your environment. In the example above, we declare 2 containers: "mysql" and "phpmyadmin". Just like in "docker-compose", the name of the container is also an internal DNS record. So from any container of your environment, the "mysql" container is reachable at the "mysql" domain name.
For each container, you need to pass an "image" key (the Docker image to run). You can also pass a "ports" list. Unlike in docker-compose, this is not a list of ports that will be shared with the host. It is simply the list of ports this image opens. This is particularly important if you do deployments in Kubernetes (each port will be turned into a K8S service).

You can pass environment variables using the "env" key:
"env": {
  "MYSQL_ROOT_PASSWORD": "secret"
}
We will see later how to manage those secrets without storing them in plain text. (TODO)
JSON is not the only format supported for the "Deeployer" config file. You can also write the file in Jsonnet.
Jsonnet? This is a very powerful data templating language for JSON.
By convention, you should name your Deeployer file deeployer.libsonnet. (TODO: switch to deeployer.jsonnet)
Here is a sample file:
deeployer.libsonnet
{
  local mySqlPassword = "secret",
  local baseUrl = "myapp.localhost",
  "$schema": "https://raw.githubusercontent.com/thecodingmachine/deeployer/master/deeployer.schema.json",
  "version": "1.0",
  "containers": {
    "mysql": {
      "image": "mysql:8.0",
      "ports": [3306],
      "env": {
        "MYSQL_ROOT_PASSWORD": mySqlPassword
      }
    },
    "phpmyadmin": {
      "image": "phpmyadmin/phpmyadmin:5.0",
      "host": {
        "url": "phpmyadmin." + baseUrl,
        "containerPort": 80
      },
      "env": {
        "PMA_HOST": "mysql",
        "MYSQL_ROOT_PASSWORD": mySqlPassword
      }
    }
  }
}
In the example above, we declare 2 variables and use these variables in the config file. See how the mySqlPassword
variable is used twice? Jsonnet allows us to avoid duplicating configuration code in all containers.
But it gets even better! Let's assume you have a staging and a production environment. Maybe you want PhpMyAdmin on the staging environment (for testing purposes) but not on the production environment. Using Jsonnet, we can do this easily using 2 files:
deeployer.libsonnet
{
  local mySqlPassword = "secret",
  "$schema": "https://raw.githubusercontent.com/thecodingmachine/deeployer/master/deeployer.schema.json",
  "version": "1.0",
  "containers": {
    "mysql": {
      "image": "mysql:8.0",
      "ports": [3306],
      "env": {
        "MYSQL_ROOT_PASSWORD": mySqlPassword
      }
    }
  }
}
deeployer-dev.libsonnet
local prod = import "deeployer.libsonnet";
local baseUrl = "myapp.localhost";

prod + {
  "containers"+: {
    "phpmyadmin": {
      "image": "phpmyadmin/phpmyadmin:5.0",
      "host": {
        "url": "phpmyadmin." + baseUrl,
        "containerPort": 80
      },
      "env": {
        "PMA_HOST": "mysql",
        "MYSQL_ROOT_PASSWORD": prod.containers.mysql.env.MYSQL_ROOT_PASSWORD
      }
    }
  }
}
TODO: test this.
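Conceptually, the `prod + { "containers"+: ... }` expression merges the second object into the first: fields marked with `+:` are combined recursively, while plain fields are overridden. Here is a minimal Python sketch (not Deeployer code, just an illustration of the merge semantics) using simplified versions of the two files above:

```python
import json

def merge(base, override):
    """Recursively merge two dicts, mimicking Jsonnet's `+:` field merge."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(result.get(key), dict) and isinstance(value, dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

# "prod" mirrors deeployer.libsonnet, the override mirrors deeployer-dev.libsonnet.
prod = {
    "version": "1.0",
    "containers": {
        "mysql": {
            "image": "mysql:8.0",
            "ports": [3306],
            "env": {"MYSQL_ROOT_PASSWORD": "secret"},
        }
    },
}

dev = merge(prod, {
    "containers": {
        "phpmyadmin": {
            "image": "phpmyadmin/phpmyadmin:5.0",
            # Reference the prod value instead of duplicating the secret:
            "env": {
                "PMA_HOST": "mysql",
                "MYSQL_ROOT_PASSWORD": prod["containers"]["mysql"]["env"]["MYSQL_ROOT_PASSWORD"],
            },
        }
    },
})

print(json.dumps(sorted(dev["containers"])))  # → ["mysql", "phpmyadmin"]
```

The production config stays untouched; the dev config is the production config plus one extra container.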
When doing continuous deployment, it is common to put environment-dependent variables and secrets in environment variables. Deeployer can access environment variables using the Jsonnet "env" external variable:
deeployer.libsonnet
local env = std.extVar("env");

{
  "version": "1.0",
  "containers": {
    "mysql": {
      "image": "mysql:8.0",
      "ports": [3306],
      "env": {
        "MYSQL_ROOT_PASSWORD": env.MYSQL_PASSWORD
      }
    }
  }
}
The first line puts all environment variables in the env local variable:

local env = std.extVar("env");

Then, you can access any environment variable of the machine running Deeployer using env.ENV_VARIABLE_NAME.
Beware! If the environment variable is not set, Jsonnet will throw an error!
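Conceptually, the "env" external variable is a JSON object built from the process environment (the Docker aliases shown later construct it with `jq -n env`). A rough Python sketch of that payload, and of the hard failure on a missing variable:

```python
import json
import os

# The Docker aliases pass the environment to Deeployer as JSON
# (`JSON_ENV=$(jq -n env)`); this is roughly the payload they build.
os.environ["MYSQL_PASSWORD"] = "secret"  # hypothetical variable, set for the demo
env = json.loads(json.dumps(dict(os.environ)))

# Equivalent of env.MYSQL_PASSWORD in Jsonnet:
print(env["MYSQL_PASSWORD"])  # → secret

# And just like Jsonnet, looking up an unset variable fails loudly:
try:
    env["NOT_SET_ANYWHERE"]
except KeyError:
    print("error: variable not set")
```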
Deeployer offers HTTPS support out of the box using Let's Encrypt.
deeployer.json
{
  "version": "1.0",
  "$schema": "https://raw.githubusercontent.com/thecodingmachine/deeployer/master/deeployer.schema.json",
  "containers": {
    "phpmyadmin": {
      "image": "phpmyadmin/phpmyadmin:5.0",
      "host": {
        "url": "phpmyadmin.myapp.localhost",
        "containerPort": 80,
        "https": "enable"
      },
      "env": {
        "PMA_HOST": "mysql",
        "MYSQL_ROOT_PASSWORD": "secret"
      }
    }
  },
  "config": {
    "https": {
      "mail": "mymail@example.com"
    }
  }
}
In order to automatically get a certificate for your HTTPS website, you need to:
- add "https": "enable" in your host section
- add a "config.https.mail" entry in your deeployer.json file, specifying a mail address. This address will be used to warn you should something go wrong with the certificate (for instance, if the certificate is about to expire)

Please note that if you are using Kubernetes, you will additionally need to install Cert Manager in your cluster. See the relevant Kubernetes documentation below.
Deeployer's goal is to allow you to describe a complete environment in a simple JSON file. It greatly simplifies configuration by making a set of common assumptions. Of course, the JSON config file does not let you express everything you can in a raw Kubernetes environment. This is by design.
However, there are times when you might need a very specific K8S feature. In this case, you can use Jsonnet functions to dynamically alter the generated K8S configuration files.
To do this, you will need to use a deeployer.libsonnet configuration file instead of a deeployer.json configuration file.
You can then use the hidden config.k8sextension field to alter the generated configuration.
In the example below, we are adding 2 annotations to the pod template of the deployment:
{
  "version": "1.0",
  "containers": {
    "phpmyadmin": {
      "image": "phpmyadmin",
      "ports": [80],
      "host": {
        "url": "myhost.com"
      }
    }
  },
  "config": {
    k8sextension(k8sConf)::
      k8sConf + {
        phpmyadmin+: {
          deployment+: {
            spec+: {
              template+: {
                metadata+: {
                  annotations+: {
                    "prometheus.io/port": "8080",
                    "prometheus.io/scrape": "true"
                  }
                }
              }
            }
          }
        }
      }
  }
}
What is going on here? In the config, we describe a k8sextension function. This Jsonnet function is passed a JSON object representing the complete set of Kubernetes resources that will be generated. Using Jsonnet, we extend that object to add annotations to one given container.
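To picture what the function does, here is a Python sketch (the exact resource shape is an assumption based on the example above) of the two-level object it receives and the merge it performs:

```python
# Hypothetical shape of the object passed to k8sextension: first level is
# the container name, second level is the resource type.
k8s_conf = {
    "phpmyadmin": {
        "deployment": {
            "spec": {"template": {"metadata": {"annotations": {}}}}
        },
        "service": {"spec": {"ports": [{"port": 80}]}},
    }
}

# The chain of `+:` operators in the example is, in effect, a deep merge
# that only touches the annotations of the phpmyadmin deployment:
annotations = k8s_conf["phpmyadmin"]["deployment"]["spec"]["template"]["metadata"]["annotations"]
annotations.update({
    "prometheus.io/port": "8080",
    "prometheus.io/scrape": "true",
})

print(annotations["prometheus.io/scrape"])  # → true
```

Every other resource (the service, the other containers) passes through unchanged.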
Good to know: the resources in the JSON object passed to k8sextension are organized on two levels. The first level is the container name (phpmyadmin in the example above); the second level is the resource type (deployment).

You can view the list of Kubernetes resources that will be generated using deeployer-k8s show.
$ deeployer-k8s show
By default, Deeployer will look for a deeployer.libsonnet
or a deeployer.json
file in the current working directory.
You can specify an alternative name in the command:
$ deeployer-k8s show deeployer-dev.jsonnet
The "show" command is only used for debugging. In order to make an actual deployment, use the "apply" command:
$ deeployer-k8s apply --namespace target-namespace
Important: if you are using Deeployer locally, Deeployer will not use your kubectl config by default. You need to pass the Kubernetes configuration as an environment variable.
Finally, you can delete a complete namespace using:
$ deeployer-k8s delete --namespace target-namespace
This is equivalent to using:
$ kubectl delete namespace target-namespace
If a "kubeconfig" file is enough to connect to your environment, you can connect to your cluster by setting the KUBE_CONFIG_FILE environment variable. KUBE_CONFIG_FILE should contain the content of the kubeconfig file.

You can connect to a GKE cluster by setting these environment variables:
GCLOUD_SERVICE_KEY
GCLOUD_PROJECT
GCLOUD_ZONE
GKE_CLUSTER
In order to have HTTPS support in Kubernetes, you need to install Cert Manager in your Kubernetes cluster. Cert Manager is a certificate management tool that acts cluster-wide. Deeployer configures Cert Manager to generate certificates using Let's Encrypt.
You can install Cert Manager using their installation documentation. You do not need to create a "cluster issuer" as Deeployer will come with its own issuer.
You need to install Cert Manager v0.11+.
By default, in Kubernetes, all pods are halted and restarted on each deployment, even if the configuration did not change (Deeployer tries to re-download the latest version of the image). For some services (like a MySQL database or a Redis server), stopping and restarting causes a disruption for no good reason.
You can tell Deeployer to not restart a pod automatically using the "redeploy": "onConfigChange" option.
deeployer.json
{
  "version": "1.0",
  "$schema": "https://raw.githubusercontent.com/thecodingmachine/deeployer/master/deeployer.schema.json",
  "containers": {
    "mysql": {
      "image": "mysql:8.0",
      "ports": [3306],
      "env": {
        "MYSQL_ROOT_PASSWORD": "secret"
      },
      "redeploy": "onConfigChange"
    }
  }
}
With "redeploy": "onConfigChange", your pod will be redeployed only if the configuration has changed.
In Kubernetes, by default, Deeployer will stop and recreate the pod if there is only one pod (if you did not set the replicas property). If you configured replicas to a value greater than 1, Deeployer will use a "RollingUpdate" strategy for this pod.
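One way to picture the "onConfigChange" behaviour is a hash comparison between the previously deployed configuration and the incoming one. This is a sketch of the idea, not Deeployer's actual implementation:

```python
import hashlib
import json

def config_hash(container_conf):
    # Stable hash of a container configuration; keys are sorted so that
    # semantically identical configs always hash identically.
    payload = json.dumps(container_conf, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

deployed = {"image": "mysql:8.0", "ports": [3306], "redeploy": "onConfigChange"}
incoming = dict(deployed)

# With "redeploy": "onConfigChange", a pod is only restarted when the
# configuration hash changes; the default restarts it unconditionally.
print(config_hash(deployed) != config_hash(incoming))  # → False (pod left alone)

incoming["image"] = "mysql:8.1"
print(config_hash(deployed) != config_hash(incoming))  # → True (pod redeployed)
```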
To deploy with deeployer-compose, you need to set up the following alias on your local machine, since early versions of Deeployer are only distributed as an official Docker image:
alias deeployer-compose="docker run --rm -it -e \"JSON_ENV=\$(jq -n env)\" -v $(pwd):/var/app -v /var/run/docker.sock:/var/run/docker.sock thecodingmachine/deeployer:latest deeployer-compose"
In order to use Deeployer locally, you need to install Docker and jq (both are used by the aliases below). Deeployer can be run via Docker. Installation is as easy as adding a few aliases to your ~/.bashrc (if you are using Bash):
~/.bashrc
alias deeployer-k8s='docker run --rm -it -e "JSON_ENV=$(jq -n env)" -v $(pwd):/var/app thecodingmachine/deeployer:latest deeployer-k8s'
alias deeployer-compose='docker run --rm -it -e "JSON_ENV=$(jq -n env)" -v $(pwd):/var/app -v /var/run/docker.sock:/var/run/docker.sock thecodingmachine/deeployer:latest deeployer-compose'
alias deeployer-self-update="docker pull thecodingmachine/deeployer:latest"
Deeployer is under heavy development. Do not forget to update the Docker image regularly:
$ deeployer-self-update
To use Deeployer in GitLab CI, you will need to declare a deployment job in your .gitlab-ci.yml file, as in the following example:
deeploy:
  image: thecodingmachine/deeployer:latest
  stage: deploy
  variables:
    KUBE_CONFIG_FILE: ${KUBE_CONFIG}
  script:
    - deeployer-k8s apply --namespace ${CI_PROJECT_PATH_SLUG}-${CI_COMMIT_REF_SLUG}
    - curl "https://bigbro.thecodingmachine.com/gitlab/call/start-environment?projectId=${CI_PROJECT_ID}&commitSha=${CI_COMMIT_SHA}&ref=${CI_COMMIT_REF_NAME}&name=${CI_PROJECT_PATH_SLUG}-${CI_COMMIT_REF_SLUG}"
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://bigbro.thecodingmachine.com/environment/${CI_PROJECT_PATH_SLUG}-${CI_COMMIT_REF_SLUG}
  when: manual
  only:
    - /^CD-.*$/
The thecodingmachine/deeployer:latest image needs a variable named KUBE_CONFIG_FILE containing the Kubernetes config file that gives you access to the cluster. In this example, we set it through a GitLab CI/CD variable:
variables:
  KUBE_CONFIG_FILE: ${KUBE_CONFIG}
Next, in the script section of the job, we use the deeployer-k8s apply command with the mandatory --namespace option, which in this example is built from CI/CD variables. Since Deeployer is bundled as a Docker image, usage in GitLab CI is very easy (assuming you are using a Docker-based GitLab CI runner, of course):
.gitlab-ci.yml
stages:
  # Your other stages ...
  - deploy
  - cleanup

deploy_branches:
  image: thecodingmachine/deeployer:latest
  stage: deploy
  script:
    - deeployer-k8s apply --namespace ${CI_PROJECT_PATH_SLUG}-${CI_COMMIT_REF_SLUG}
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://${CI_COMMIT_REF_SLUG}.${CI_PROJECT_PATH_SLUG}.test.yourapp.com
    on_stop: cleanup_branches
  only:
    - branches

cleanup_branches:
  stage: cleanup
  image: thecodingmachine/deeployer:latest
  variables:
    GIT_STRATEGY: none
  script:
    - deeployer-k8s delete --namespace ${CI_PROJECT_PATH_SLUG}-${CI_COMMIT_REF_SLUG}
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  only:
    - branches
  except:
    - master
For this to work, you will need to put the content of your Kubernetes configuration file in a KUBE_CONFIG_FILE
environment variable in your project on Gitlab.
If you are connecting to a Google Cloud cluster, instead of passing a KUBE_CONFIG_FILE
, you will need to pass
this set of environment variables:
GCLOUD_SERVICE_KEY
GCLOUD_PROJECT
GCLOUD_ZONE
GKE_CLUSTER
Deeployer comes with a GitHub Action.
deploy_workflow.yaml
name: Deploy Docker image

on:
  - push

jobs:
  deeploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Deploy
        uses: thecodingmachine/deeployer-action@master
        env:
          KUBE_CONFIG_FILE: ${{ secrets.KUBE_CONFIG_FILE }}
        with:
          namespace: target-namespace
You will need to put the content of your Kubernetes configuration file in the KUBE_CONFIG_FILE secret on GitHub.
If you are connecting to a Google Cloud cluster, instead of passing a KUBE_CONFIG_FILE
, you will need to pass
this set of environment variables:
GCLOUD_SERVICE_KEY
GCLOUD_PROJECT
GCLOUD_ZONE
GKE_CLUSTER
If you are using a private registry to store your Docker images, Deeployer needs the credentials to this registry in order to deploy images successfully.
Put the credentials to your images in the "config" section:
deeployer.json
{
  "config": {
    "registryCredentials": {
      "registry.example.com": {
        "user": "my_user",
        "password": "my_password"
      }
    }
  }
}
Please note that the key of the "registryCredentials" object is the URL to your Docker private registry.
These credentials will be automatically passed to Kubernetes, which will create a registry secret from them.
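In Kubernetes, registry credentials are conventionally stored as a Secret of type kubernetes.io/dockerconfigjson. As an illustration, here is a Python sketch of that standard secret format (not necessarily what Deeployer generates byte-for-byte), using the credentials from the example above:

```python
import base64
import json

user, password, registry = "my_user", "my_password", "registry.example.com"

# Standard .dockerconfigjson payload: "auth" is the base64-encoded
# "user:password" pair, keyed by the registry URL.
docker_config = {
    "auths": {
        registry: {
            "username": user,
            "password": password,
            "auth": base64.b64encode(f"{user}:{password}".encode()).decode(),
        }
    }
}

# This JSON document, base64-encoded, becomes the `.dockerconfigjson` key
# of the registry secret.
secret_data = base64.b64encode(json.dumps(docker_config).encode()).decode()

print(base64.b64decode(docker_config["auths"][registry]["auth"]).decode())  # → my_user:my_password
```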
Download and install the Jsonnet Bundler: https://github.com/jsonnet-bundler/jsonnet-bundler/releases
Install the dependencies:
$ jb install
Download and install Tanka: https://github.com/grafana/tanka/releases
Download and install Kubeval: https://kubeval.instrumenta.dev/installation/
Download and install AJV:
$ sudo npm install -g ajv-cli@^5
$ sudo npm install -g ajv-formats@^2.1.1
Download and install Jsonlint:
$ sudo npm install -g jsonlint
Before submitting a PR:
$ cd tests/
$ ./run_tests.sh
$ ./lint.sh