This started as a simple use case to discover docker 🐳 and docker-compose.
I also set up deployments on a local kubernetes ☸️ cluster, and tests are running on CircleCI on each push.
You are a true developer? You don't RTFM? After all, this is why we have docker ... not to bother with all the boring setup/install steps ... 😉
```shell
git clone https://github.com/topheman/docker-experiments.git
cd docker-experiments
docker-compose up -d
```
You are good to go, with a development server running at http://localhost:3000, the front in react, the api in go and everything hot reloading. 🙂
Still, try to take a few minutes to read the doc below ... 😉
You need to have installed:

- docker
- docker-compose
```shell
git clone https://github.com/topheman/docker-experiments.git
```
A Makefile is available that automates all the commands described below. For each section, you'll find the related commands next to the 👉 emoji. Just run `make help` to see the whole list.
```shell
docker-compose up -d
```
This will create (if not already done) and launch a whole development stack, based on docker-compose.yml, docker-compose.override.yml, api/Dockerfile and front/Dockerfile - building the following images:

- `topheman/docker-experiments_front_development`: for react development (based on the nodejs image)
- `topheman/docker-experiments_api_development`: for golang in development mode (using fresh to rebuild and restart the go webserver when you change the sources)

The `services.api.command` entry in docker-compose.override.yml will override the default `CMD` and start a dev server (instead of running the binary compiled in the container at build time).

Go to http://localhost:3000/ to access the frontend, you're good to go - the api is accessible at http://localhost:5000/.
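As an illustration, such an override could look like the following sketch (this is not the repo's exact docker-compose.override.yml - the volume paths are assumptions):

```yaml
# sketch of a docker-compose.override.yml (not the repo's exact file)
version: "3.4"
services:
  api:
    # override the image's default CMD: run `fresh` (live reload for go)
    # instead of the binary compiled at build time
    command: fresh
    volumes:
      - ./api:/go/src/app # hypothetical mount so source changes are picked up
```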
👉 `make dev-start`, `make dev-start-d`, `make dev-stop`, `make dev-ps`, `make dev-logs`, `make dev-logs-front`, `make dev-logs-api`
```shell
docker-compose run --rm -e CI=true front npm run -s test && docker-compose run --rm api go test -run ''
```
👉 `make test`, `make test-front`, `make test-api`
This section is about testing the production images with docker-compose 🐳 (check the deployment section to deploy locally with kubernetes).
Make sure you have built the frontend with `docker-compose run --rm front npm run build`, then:
```shell
docker-compose -f ./docker-compose.yml -f ./docker-compose.prod.yml up --build
```
Note: make sure to use the `--build` flag so that the images are rebuilt if anything changed (in the source code or elsewhere). Thanks to docker image layers, only the changed layers are rebuilt, based on cache (not the whole image).
This will create (if not already done) and launch a whole production stack:
- `topheman/docker-experiments_api_production`: for the golang server (with the app compiled) - containing only the binary of the golang app (that way the image stays small)
- `topheman/docker-experiments_nginx`: which will:
  - serve the static build of the frontend (from `/front/build`)
  - proxy `/api` requests to http://api:5000 (the hostname exposed by the golang api container on the docker subnet)

Access http://localhost and you're good to go.
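The nginx part can be sketched like this (a minimal illustration, not the repo's exact nginx/site.conf):

```nginx
# sketch: serve the frontend build and proxy /api to the golang container
server {
    listen 80;

    # static files of the react production build
    root /front/build;
    index index.html;

    # "api" resolves to the golang container (docker-compose service / k8s service)
    location /api {
        proxy_pass http://api:5000;
    }
}
```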
👉 `make prod-start`, `make prod-start-d`, `make prod-start-no-rebuild`, `make prod-start-d-no-rebuild`, `make prod-stop`, `make prod-ps`, `make prod-logs`, `make prod-logs-front`, `make prod-logs-api`
This section is about deploying the app locally with kubernetes ☸️ (not tested with a cloud provider). To keep things simple, there is no TLS termination management (only port 80 is exposed).
You need a local kubernetes server and client:

- `kubectl` (kubernetes client)

The files describing the deployments are stored in the deployments folder. You will find two files, each containing the deployment and the service.
1) If you haven't built the frontend, run `docker-compose run --rm front npm run build`
2) Build the production images:

```shell
docker build ./api -t topheman/docker-experiments_api_production:1.0.1
docker build . -f Dockerfile.prod -t topheman/docker-experiments_nginx:1.0.1
```
Note: They are tagged `1.0.1`, the same version number as in the deployment files (want to use another version number? Don't forget to update the deployment files). For the moment, I'm not using Helm, which lets you do string interpolation on yml files.
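For reference, a deployment file such as deployments/api.yml can be sketched as below (labels and some field values are assumptions - the deployment name, service name, replica count and image tag match the ones used elsewhere in this doc):

```yaml
# sketch of deployments/api.yml (field values partly assumed)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-experiments-api-deployment
spec:
  replicas: 2 # two replicas of the api server
  selector:
    matchLabels:
      app: docker-experiments-api
  template:
    metadata:
      labels:
        app: docker-experiments-api
    spec:
      containers:
        - name: api
          image: topheman/docker-experiments_api_production:1.0.1 # same tag as the build step
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: api # the hostname the nginx conf proxies to
spec:
  selector:
    app: docker-experiments-api
  ports:
    - port: 5000
```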
3) Create your pods and services

Make sure nothing is up on port 80, then:

```shell
kubectl create -f ./deployments/api.yml -f ./deployments/front.yml
```
You're good to go, check out http://localhost
To stop and delete the pods/services you created:
```shell
kubectl delete -f ./deployments/api.yml -f ./deployments/front.yml
```
They won't stop right away; you can list them and see their status with:

```shell
kubectl get pods,services
```
👉 `make kube-start`, `make kube-start-no-rebuild`, `make kube-stop`, `make kube-ps`
Thanks to docker multi-stage builds, the golang application is built in a golang:alpine image (which contains all the golang tooling, such as the compiler and libs) and produces a small image with only the binary, on top of an alpine image (a small Linux distribution).
The targets for the multi-stage build are specified in the `docker*.yml` config files. The api/Dockerfile will create such a production image by default.
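A multi-stage api/Dockerfile follows this pattern (a sketch under assumptions about paths and stage names, not the repo's exact file):

```dockerfile
# build stage: full golang toolchain (compiler, libs ...)
FROM golang:alpine AS builder
WORKDIR /go/src/app
COPY . .
RUN go build -o /server .

# production stage: only the compiled binary on a bare alpine base
FROM alpine AS production
COPY --from=builder /server /server
EXPOSE 5000
CMD ["/server"]
```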
You can tell the difference in size:

```shell
docker images
```

```
topheman/docker-experiments_api_production    latest  01f1b575fae6  About a minute ago  11.5MB
topheman/docker-experiments_api_development   latest  fff1ef3ec29e  8 minutes ago       426MB
topheman/docker-experiments_front_development latest  4ed3aea602ef  22 hours ago        225MB
```
In development, the api server in golang is available at http://localhost:5000 and proxied onto http://localhost:3000/api (the same port as the front, thanks to the create-react-app proxy).
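The create-react-app proxy is a single entry in front/package.json; inside the compose network it would point at the api service (a sketch - the actual target in the repo may be http://localhost:5000 instead):

```json
{
  "proxy": "http://api:5000"
}
```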
In production mode, we only want the golang server to be available via `/api` (we don't want to expose it on its own port).
To make it work, the service is named `api` in both stacks:

- docker-compose: see docker-compose.yml
- kubernetes: see deployments/api.yml

That way, the nginx conf works with both docker-compose AND kubernetes, proxying to http://api - see nginx/site.conf.
If your app exits with a failure code (greater than 0) inside the container, you'll want it to restart (like you would do with pm2 and node apps).
With docker-compose in production, the `restart: on-failure` directive in the docker-compose.yml file will ensure that. You can check it by clicking on the "exit 1 the api server" button, which will make the golang api exit. You'll see that the uptime is back counting from 0 seconds.
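The relevant part of the compose file is a one-line sketch:

```yaml
services:
  api:
    restart: on-failure # restart the container when it exits with a non-zero code
```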
With the kubernetes deployment, I set up 2 replicas of the api server, so when you retrieve the infos, the hostname might change according to which api pod you're balanced on.
Exiting one pod won't break the app, it will fall back on the remaining replica. If you exit both pods, you'll get an error retrieving infos until one of the pods is brought back up by kubernetes (check their status with `kubectl get pods`).
- `docker-compose run --rm front npm run test`: launch a front container in development mode and run the tests
- `docker-compose -f ./docker-compose.yml run --rm api <cmd>`: launch an api container in production mode and run `<cmd>`
- `docker-compose down`: stop and remove containers, networks, volumes, and images created by `docker-compose up`
Don't want to use docker-compose? (Everything below is already specified in the `docker*.yml` files - only kept here to remember the syntax for the future.)
- `docker build ./api -t topheman/docker-experiments_api_production:1.0.1`: build the api and tag it as `topheman/docker-experiments_api_production:1.0.1`, based on api/Dockerfile
- `docker run -d -p 5000:5000 topheman/docker-experiments_api_production:1.0.1`: run the `topheman/docker-experiments_api_production:1.0.1` image previously created, in daemon mode, exposing the ports
- `docker build ./front -t topheman/docker-experiments_front_development:1.0.1`: build the front and tag it as `topheman/docker-experiments_front_development:1.0.1`, based on front/Dockerfile
- `docker run --rm -p 3000:3000 -v $(pwd)/front:/usr/front -v front-deps:/usr/front/node_modules topheman/docker-experiments_front_development:1.0.1`: run the `topheman/docker-experiments_front_development:1.0.1` image previously created, in attached mode (the container will be removed when it stops, thanks to `--rm`)
- `docker rmi $(docker images -q --filter="dangling=true")`: remove dangling images (layers that no longer have any relationship to a tagged image - tagged as `<none>`)

- `kubectl create -f ./deployments/api.yml -f ./deployments/front.yml`: creates the resources specified in the declaration files
- `kubectl delete -f ./deployments/api.yml -f ./deployments/front.yml`: deletes the resources specified in the declaration files
- `kubectl scale --replicas=3 deployment/docker-experiments-api-deployment`: scales the api up to 3 pods

I had the following error on my first build:
> ERROR: Version in "./docker-compose.yml" is unsupported. You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version ("2.0", "2.1", "3.0", "3.1", "3.2") and place your service definitions under the `services` key, or omit the `version` key and place your service definitions at the root of the file to use version 1. For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/
The reason was that I'm using docker-compose file format v3.4, which doesn't seem to be supported by the version of docker-engine used in the default setup of CircleCI - see the compatibility matrix.
With CircleCI, in machine executor mode, you can change/customize the image your VM will be running (by default: `circleci/classic:latest`) - see the list of available images. I simply changed the image to use:
```diff
 version: 2
 jobs:
   build:
-    machine: true
+    machine:
+      image: circleci/classic:201808-01
```
Check out .circleci/config.yml.
Note: Why use docker-compose file format v3.4? To take advantage of the `target` attribute.
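The `target` attribute (available since compose file format 3.4) selects which stage of a multi-stage Dockerfile to build - a sketch, assuming a stage named `production`:

```yaml
services:
  api:
    build:
      context: ./api
      target: production # stop at this multi-stage build stage
```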
You can not build Docker images from within a Docker container. To build/push docker images, you have two solutions on CircleCI:

- use the machine executor (a full VM, with the docker daemon directly available)
- use the docker executor with the `setup_remote_docker` step
create-react-app ships with a service worker by default, whose implementation is based on sw-precache-webpack-plugin (a Webpack plugin that generates a service worker using sw-precache, which will cache webpack's emitted assets). It means that a `service-worker.js` file will be created at build time, listing your public static assets, which the service worker will cache using a cache-first strategy (on a request for an asset, it will first hit the service worker cache and serve from it, then call the network and update the cache - this makes the app fast and offline-first).
From the create-react-app doc:

> On a production build, and in a browser that supports service workers, the service worker will automatically handle all navigation requests, like for `/todos/42` or `/api`, by serving the cached copy of your `index.html`. This service worker navigation routing can be configured or disabled by ejecting and then modifying the `navigateFallback` and `navigateFallbackWhitelist` options of the `SWPrecachePlugin` configuration.
The next things that will be coming are:

- `/api` …
- … (`npm install` at the root of the project)

This is still in progress.
More bookmarks from my research: