This is the repository where we define the manifests that run in our Kubernetes cluster.
We use Flux as the basis for our GitOps workflow.
IaC and Bootstrap - https://github.com/equinor/sdp-omnia
In essence, we create manifests to be run on the Kubernetes cluster and commit them to this repository; the Flux controller then notices the new commit and applies all the YAML files (conceptually the same as kubectl apply -f FILENAME).
We have integrated Kustomize support with Flux. This means that the /base folder contains common configuration for all our clusters. Any differences between the clusters (mostly DNS config) are expressed as patches to the base files, found in the /dev and /prod folders. Each cluster's Flux operator is subscribed to a specific repo, branch, and kustomize path for an effective GitOps workflow.
Typically the base matches the values used in prod, and the patches in dev are mostly used for overriding ingresses.
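As a minimal sketch of how this fits together (the file and resource names below are hypothetical, not taken from this repository), a dev overlay pulls in base and patches an ingress host:

# dev/kustomization.yaml (hypothetical)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patchesStrategicMerge:
  - demo-ingress-patch.yaml

# dev/demo-ingress-patch.yaml: the strategic merge replaces the whole
# rules list, so the complete rule is repeated with the dev hostname
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
  namespace: apps
spec:
  rules:
    - host: demo.dev.example.com
      http:
        paths:
          - backend:
              serviceName: demo
              servicePort: 80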
Deleting Helm releases is currently not done through Flux.
1) First, remove all references to the chart from the Git repo. Remember to update the kustomization.yaml files.
2) Then either delete the entire namespace of the Helm release you wish to remove, or delete the HelmRelease resource directly: kubectl delete helmrelease xxx -n yyy
By utilizing Kustomize, we are able to use the same manifests in different clusters, with context-aware patches. The biggest benefit of this is that we can use a standard Git branching strategy: push changes to the dev branch, and merge into the prod branch once they are tested OK in dev. For an optimal GitOps workflow! :+1:
This simplistic branching model will likely not work in the same way for larger teams, but for our use case it works well. Usually the flow is:
Sometimes you have changes on the dev branch that are not ready to merge into prod yet, while a team member wants to merge something else in. In these cases, either move your unfinished changes to a separate branch and let the other team member merge dev into prod, or keep your changes on dev and intermix commits with your team member. In the latter case, before making a PR, your team member should move their changes (which have now been tested through the dev branch) to a separate feature branch and merge that feature branch into prod.
The custom-charts folder contains the charts we have created ourselves that are needed for our cluster.
New namespaces are created by adding a file named after the namespace (<namespace>.yaml) and placing it under the namespaces folder.
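For example, creating a namespace called apps would mean adding a file with a plain Namespace resource:

# namespaces/apps.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: apps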
All other k8s manifests are placed in the base folder, organized by namespace and service.
We keep one k8s resource per file and name the file after the type of k8s resource it contains. So an ingress resource for the demo application, in the namespace apps, will be named: ./base/apps/demo/ingress.yaml
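To illustrate the layout (the deployment and service files for demo are hypothetical examples):

base/
  apps/
    demo/
      deployment.yaml
      service.yaml
      ingress.yaml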
Sometimes we need to store secrets in the Git repository to make sure our repository is the primary source of truth. In these cases we use a tool called Sealed Secrets. With this we can store secrets encrypted in the repository and be sure that Flux manages and puts them in the Kubernetes cluster.
Note that some secrets must be in place before deployments such as flux are in place. These are created in the sdp-omnia repo, with secrets stored to Azure Keyvault.
Download kubeseal-linux-amd64 or kubeseal-darwin-amd64 (from the Sealed Secrets releases page), put it in your path, and rename it to kubeseal.
After the first run we need to export the private and public key. The public key is what we all use to encrypt the secrets, and the private key is used by the controller to decrypt the secrets and put them in the cluster. It is important to have a backup of the private key, but NEVER commit it to the repository.
To get the private key (remember to store this file somewhere safe so we can restore in case of emergency)
kubectl get secret -n sealed-secrets sealed-secret-custom-key -o yaml > sealedsecrets.yaml
Use the kubeseal tool to get the public key
kubeseal --controller-namespace sealed-secrets --controller-name sealed-secrets --fetch-cert > sealed-secret.pem
Use kubectl to create the secret locally (with --dry-run) and pipe the output to kubeseal to encrypt it.
# From literal
kubectl create secret generic <SECRETNAME> --namespace=<NAMESPACE> --dry-run=client --from-literal=<KEY>=<VALUE> -o json | kubeseal --format yaml --cert sealed-secret.pem > sealed-secret.yaml
# From file
kubectl create secret generic <SECRETNAME> --namespace=<NAMESPACE> --dry-run=client --from-file=<FILENAME> -o json | kubeseal --cert sealed-secret.pem --format yaml > sealed-secret.yaml
# TLS secret
# Remember to include intermediate certificates if any (goes on the end of the .crt file)
kubectl create secret tls <SECRETNAME> --namespace=<NAMESPACE> --key myTLSCert.key --cert myTLSCert.crt --dry-run=client -o json | kubeseal --format yaml --cert sealed-secret.pem > sealed-secret-tls.yaml
# Validate that the SS controller can decrypt the sealed-secret
cat ./sealed-secret-tls.yaml | kubeseal --controller-namespace sealed-secrets --controller-name sealed-secrets --validate
Make sure to place the secrets in the appropriate namespace folder to keep the repository organised.
To utilize oauth2-proxy to authenticate users before they can access a web application, add these lines to the ingress annotations:
nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
You also need a host-specific ingress for the proxy, which could look like this:
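A minimal sketch, assuming a hypothetical hostname (demo.example.com) and an oauth2-proxy service exposed on port 80 in the same namespace; adjust names and ports to your deployment:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: oauth2-proxy
  namespace: apps
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          # routes https://demo.example.com/oauth2/* to the proxy, so the
          # auth-url and auth-signin annotations above can resolve
          - path: /oauth2
            backend:
              serviceName: oauth2-proxy
              servicePort: 80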
There are also a few more things to note:
Each oauth2-proxy HelmRelease needs a secret containing its keys:
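A sketch of creating that secret with the sealed-secrets workflow from above; the key names (client-id, client-secret, cookie-secret) follow the stable oauth2-proxy chart's existingSecret convention, so verify them against the chart version you deploy. The cookie secret value is generated below.

kubectl create secret generic oauth2-proxy-secret --namespace=<NAMESPACE> --dry-run=client \
  --from-literal=client-id=<CLIENT_ID> \
  --from-literal=client-secret=<CLIENT_SECRET> \
  --from-literal=cookie-secret=<COOKIE_SECRET> \
  -o json | kubeseal --format yaml --cert sealed-secret.pem > sealed-secret.yaml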
The cookie secret can be created like this:
docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))));'
You only need to copy the content inside the b'...' quotes.
Do you know how to do something we do a lot? If so, write it up here and we shall all be the wiser for it.
We assume the ACR has been created and set up in the bootstrapping portion of the Kubernetes cluster. Take a look in the sdp-aks repository for information on how to create and set up a new ACR. To use the ACR we need a service principal, which should have been created by the ARM templates.
Start by creating the secret which stores the docker registry information:
kubectl -n <NAMESPACE> create secret docker-registry <SECRET_NAME> --docker-server=<REGISTRY_URL> --docker-username=<SERVICE_PRINCIPAL_ID> --docker-password=<SERVICE_PRINCIPAL_PASSWORD> --docker-email=gm_sds_rdi@equinor.com
To use this secret for image pulls, reference it in a manifest like so:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: <REGISTRY_URL>/<your-private-image>
  imagePullSecrets:
    - name: <SECRET_NAME>
We use CertManager for creating, validating, and deploying Let's Encrypt certificates. CertManager can shortcut the usual procedure of creating a Certificate resource and then referencing that Certificate's secret in the ingress; instead, you just add a few annotations to the ingress.
In the shortest terms, add these annotations to your ingress
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
If you want a Let's Encrypt testing certificate (to avoid using up the cert quota), you can specify another certificate issuer. For a cluster issuer, add this:
certmanager.k8s.io/cluster-issuer: "letsencrypt-staging"
This works because we have defined a default issuer and protocol when we deployed CertManager.
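Putting it together, a minimal sketch of an ingress using this shortcut (hostname, service, and secret name are hypothetical); CertManager will create and maintain the certificate in the secret named under tls:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
  namespace: apps
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - backend:
              serviceName: demo
              servicePort: 80
  tls:
    # CertManager provisions the certificate into this secret
    - secretName: demo-tls
      hosts:
        - demo.example.com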
We have implemented External DNS in the Kubernetes cluster and connected it to the Azure DNS Zone. This is done with an Azure AD service principal that has rights to change only this DNS Zone.
External DNS updates DNS Zone entries based on annotations on ingress resources. To create and update a DNS entry, use the following annotation:
external-dns.alpha.kubernetes.io/hostname: demo.example.com.
This example manifest uses the techniques from the two previous sections and shows how they work together to automate some tedious tasks. More info can be found in the HelmRelease documentation. In this example we use a Git repository for the Helm chart, but you could use a Helm repository instead.
---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: sdp-demo
  namespace: prod
  annotations:
    flux.weave.works/automated: "true"
spec:
  releaseName: sdp-demo
  chart:
    git: ssh://git@github.com/equinor/sdp-flux.git
    ref: prod
    path: charts/sdp-demo
  values:
    ingress:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: nginx
        kubernetes.io/tls-acme: "true"
        external-dns.alpha.kubernetes.io/hostname: demo.example.com.
      hosts:
        - demo.example.com
      tls:
        - secretName: demo-tls
          hosts:
            - demo.example.com