The cloud engineer's toolbox.
CloudControl is a Docker based configuration environment containing all the tools required and configured to manage modern cloud infrastructures.
The toolbox comes in different "flavours" depending on what cloud you are working in. Currently supported cloud flavours are:
The following features and tools are supported:
The `kc` feature will be deprecated in one of the next major CloudControl versions. As an alternative, configure the `krew` feature and use the cs plugin:
- USE_krew=yes
- KREW_VERSION=<current version of krew>
- KREW_PLUGINS=cs
To make the transition as easy as possible, you can also configure the `run` feature and set an alias to `kubectl cs`:
- USE_run=yes
- RUN_COMMANDS="alias kc='kubectl cs'"
See [the kc website](https://github.com/dodevops/kc) for details.
CloudControl is best used with docker-compose. Check out the `sample` directory in a flavour for a sample compose file and convenience scripts. The container includes a small web server written in Go with a Vue.js client, dubbed "CloudControlCenter", which is used as a status screen. It listens on port 8080 inside the container.
Copy the compose file and configure it to your needs. Check below for configuration options per flavour and feature.
Run `init.sh`. This script basically just runs `docker-compose up -d` and tells you the URL for CloudControlCenter.
Open it and wait for CloudControl to finish initializing.
The initialization process downloads and configures the additional tools and completes with a message when it is done. It runs each time the stack is recreated.
After the initialization process, you can simply run `docker-compose exec cli /usr/local/bin/cloudcontrol run` to jump into the running container and work with the installed features.
If you need to change any of the configuration environment variables, rerun the init script afterwards to apply the changes. Remember that CloudControl needs to reinitialize for this.
There are two ways to configure a feature and the version it should use. The first way is to set the `USE_[feature name]=yes` environment variable and specify the version with `[FEATURE NAME]_VERSION=[version]`.
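For example, the following compose file `environment` entries would enable Helm and Terraform this way (a minimal sketch; the versions are just the ones used in the example below):

```yaml
environment:
  - "USE_helm=yes"
  - "HELM_VERSION=3.5.1"
  - "USE_terraform=yes"
  - "TERRAFORM_VERSION=1.1.9"
```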
If there are multiple features configured, this can get a bit messy. Another approach is to use the `FEATURES` environment variable and list the features and, optionally, the versions like this:

`FEATURES=kubernetes helm:3.5.1 terraform:1.1.9`
This would install version 3.5.1 of Helm and version 1.1.9 of Terraform. (Kubernetes uses the kubectl version provided by the flavour, e.g. via `az aks install-cli`.)
Note: Please check the feature documentation below to see whether a feature supports specifying a version string. All version strings need to be provided in semver format (e.g. 1.2.3); the feature installers will take care of prefixes for download URLs if required.
CloudControl is targeted to run on a local machine. It requires the following features to work:
Some Kubernetes distributions such as Rancher Desktop support this and can be used to run CloudControl.
The `sample` directories of each flavour provide an example Kubernetes configuration based on a deployment and a service. They have only been preliminarily tested on Rancher Desktop. Modify them to your local requirements and then run `kubectl apply -f k8s.yaml` to apply them.
This will create a new namespace for your project with a deployment and a service in it. Check `kubectl get -n [project] pod` to watch the progress until a cli pod has been created.
Use `kubectl get -n [project] svc cli` to see the bound ports for the cli service and use your browser to connect to the CloudControlCenter instance.
After the initialization is done, use `kubectl -n [project] exec -it deployment/cli -- /usr/local/bin/cloudcontrol run` to enter CloudControl.
Warning: This implementation is currently a preview feature and hasn't been tested thoroughly. It depends heavily on the Kubernetes distribution's proper support for host-based volumes and networking. Please refer to the documentation and support channels of your Kubernetes distribution if something isn't working.
Apparently, you're using CloudControl on a system for which no specific image exists. Some cloud providers have not yet provided base images for all architectures (e.g. the Apple ARM-based processors). See the list of flavours above for the available platforms per flavour.
As a workaround, you can use the `platform` parameter for docker-compose or the `--platform` parameter for `docker run` to specify a compatible architecture (e.g. linux/amd64 on Apple ARM-based machines).
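In a compose file, that might look like this minimal sketch (image name taken from the examples below):

```yaml
services:
  cli:
    image: "dodevops/cloudcontrol-azure:latest"
    platform: "linux/amd64"
```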
If you want to display a custom login message when users enter the container, set the environment variable `MOTD` to that message. If you want to display the default login message as well, also set the environment variable `MOTD_DISPLAY_DEFAULT` to `yes`.
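For example (the message text is, of course, just an illustration):

```yaml
environment:
  - "MOTD=Welcome to the project toolbox!"
  - "MOTD_DISPLAY_DEFAULT=yes"
```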
If you'd like to forward traffic into a cluster using `kubectl port-forward`, you can do the following:
Add a `ports` key to the cli service in your docker-compose file to forward a free port on your host to a defined port in your container. The docker-compose files in the sample directories already use port 8081 for this.
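A minimal sketch of such a mapping (any free host port works; 8081 matches the samples):

```yaml
services:
  cli:
    ports:
      - "8081:8081"
```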
Inside CloudControl, check the IP of the container:
```
bash-5.0$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:AC:15:00:02
          inet addr:172.21.0.2  Bcast:172.21.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:53813 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20900 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:75260363 (71.7 MiB)  TX bytes:2691219 (2.5 MiB)
```
Then forward the service port to the container's IP, e.g.:

`kubectl port-forward --address 172.21.0.2 svc/my-service 8081:8080`

On the host, `docker-compose port cli 8081` shows which local port is mapped to the forwarded port.
If you'd like to set up aliases to save some typing, you can use the run feature. Run your container with these environment variables:
- `USE_run=yes`: Set up the run feature
- `RUN_COMMANDS=alias firstalias=command;alias secondalias=command`: Set up some aliases

First, mount your .ssh directory into the container at /home/cloudcontrol/.ssh.
Also, to avoid entering your passphrase every time you use the key, you should mount the ssh-agent socket into the container and set the environment variable `SSH_AUTH_SOCK` to that path. CloudControl will automatically fix the permissions of that file so the CloudControl user can use it.
Here are snippets for your docker-compose file for convenience:
```yaml
    (...)
    volumes:
      - "[Path to .ssh directory]:/home/cloudcontrol/.ssh"
      # for Linux:
      - "${SSH_AUTH_SOCK}:/ssh-agent"
      # for macOS:
      - "/run/host-services/ssh-auth.sock:/ssh-agent"
    environment:
      - "SSH_AUTH_SOCK=/ssh-agent"
```
Because of how CloudControl is designed, it uses a fixed user named "cloudcontrol", so Terraform state lock messages look like this:
```
Error: Error locking state: Error acquiring the state lock: storage: service returned error: StatusCode=409, ErrorCode=LeaseAlreadyPresent, ErrorMessage=There is already a lease present. RequestId:56c21b95-501e-0096-7082-41fa0d000000 Time:2021-05-05T07:41:25.9164547Z, RequestInitiated=Wed, 05 May 2021 07:41:25 GMT, RequestId=56c21b95-501e-0096-7082-41fa0d000000, API Version=2018-03-28, QueryParameterName=, QueryParameterValue= Lock Info: ID: a1cef2cc-fec4-1765-4da8-d068a729ba7e Path: path/terraform.tfstate Operation: OperationTypeApply Who: cloudcontrol@5c47a37f920b Version: 0.12.17 Created: 2021-05-05 07:38:01.188897776 +0000 UTC Info:
```
From that message, it's hard to identify which other CloudControl user may be holding the lock. The system user can't be changed, but it's possible to set a better hostname than the one Docker autogenerates.
See this docker-compose snippet on how to set a better hostname:
```yaml
version: "3"
services:
  cli:
    image: "dodevops/cloudcontrol-azure:latest"
    hostname: "[TODO yourname]"
    volumes:
      (...)
```
If you set the hostname in that snippet to "alice", the state lock message will now look like this:
```
Error: Error locking state: Error acquiring the state lock: storage: service returned error: StatusCode=409, ErrorCode=LeaseAlreadyPresent, ErrorMessage=There is already a lease present. RequestId:56c21b95-501e-0096-7082-41fa0d000000 Time:2021-05-05T07:41:25.9164547Z, RequestInitiated=Wed, 05 May 2021 07:41:25 GMT, RequestId=56c21b95-501e-0096-7082-41fa0d000000, API Version=2018-03-28, QueryParameterName=, QueryParameterValue= Lock Info: ID: a1cef2cc-fec4-1765-4da8-d068a729ba7e Path: path/terraform.tfstate Operation: OperationTypeApply Who: cloudcontrol@alice Version: 0.12.17 Created: 2021-05-05 07:38:01.188897776 +0000 UTC Info:
```
CloudControl uses the official guide to install kubectl on an RPM-based system. However, Google seems to regularly have problems with the key-signing of the repository used, so we added a workaround for this problem. If you add the environment variable `AWS_SKIP_GPG=1` to your docker-compose.yaml, it will ignore an invalid GPG key during the installation. Please note, though, that this affects the security of the system and should not be used permanently.
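In the compose file, this is just one more environment entry:

```yaml
environment:
  - "AWS_SKIP_GPG=1"
```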
Use the `docker logs` command with the failed container to see the complete log output. You can enhance the log by using the "DEBUG_[feature]" options or by adding the environment variable "DEBUG_FLAVOUR" to turn on the debug log for the flavour installation.
If you are really stuck, you can convince the container to keep running by setting "CONTINUE_ON_ERROR=yes" as an environment variable in the docker-compose file. You can then debug inside the running container.
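A debugging setup in the compose file might look like this sketch (the feature name in the DEBUG variable is illustrative, and the `yes` values for the DEBUG options are an assumption; `CONTINUE_ON_ERROR=yes` is taken from above):

```yaml
environment:
  - "DEBUG_FLAVOUR=yes"       # debug log for the flavour installation (value assumed)
  - "DEBUG_terraform=yes"     # debug log for a single feature; name illustrative, value assumed
  - "CONTINUE_ON_ERROR=yes"   # keep the container running after an error for inspection
```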
Can be used to connect to infrastructure in the AWS cloud. Also see the AWS CLI documentation for more configuration options.
If you have activated MFA, set `AWS_MFA_ARN` to the ARN of your MFA device so CloudControl will ask you for your code. To start a new session in the CloudControl context, run `createSession <token>` afterwards.
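A sketch of the corresponding setup (the ARN is a placeholder):

```yaml
environment:
  - "AWS_MFA_ARN=arn:aws:iam::123456789012:mfa/your.user"
```

Inside the container, you would then run `createSession 123456` with the current one-time code from your MFA device.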
Can be used to connect to infrastructure in the Azure cloud. Because we're using a container, a device login will happen, requiring the user to go to a website, enter a code and log in.
The Azure login tokens usually expire after some time. You can run the `azure-relogin` script (located in ~/bin, thus available without path) to re-execute the same login commands as the initialization process does.
Includes workflows and tools to connect to Google Cloud.
Authentication requires the following:
Can be used to connect to infrastructure outside of a specific cloud provider.
Includes workflows and tools to connect to a Tanzu cluster.
The Kubernetes login tokens usually expire after just a few hours. You can run the `k8s-relogin` script (located in ~/bin, thus available without path) to re-execute the same login commands as the initialization process does.
Installs and configures the Fish Shell with the Spacefish theme
Installs AzCopy
Adds specified trusted certificate authorities into the container. Mount the certificates into the `volumes:` section of docker compose like this:

```yaml
    (...)
    volumes:
      - "[Path to certificate directory]:/certificates"
```

The default target is /certificates. If something different than the default is used, the volume target needs to be adapted to the same directory.
Installs the cert-manager Command Line Tool
Installs Direnv
Installs Helm
Installs the JSON parser and processor jq
Installs k9s
The `kc` feature will be deprecated in one of the next major CloudControl versions. As an alternative, configure the `krew` feature and use the cs plugin:
- USE_krew=yes
- KREW_VERSION=<current version of krew>
- KREW_PLUGINS=cs
To make the transition as easy as possible, you can also configure the `run` feature and set an alias to `kubectl cs`:
- USE_run=yes
- RUN_COMMANDS="alias kc='kubectl cs'"
See [the kc website](https://github.com/dodevops/kc) for details.
Installs kc, a quick context switcher for Kubernetes.
Installs Krew
Installs kubectl node-shell
Installs and configures Kubernetes with kubectl to connect to the flavour's Kubernetes clusters. This generates the script `k8s-relogin`, which allows you to recreate the Kubernetes credentials.

(aws flavour) Environment `AWS_K8S_CLUSTERS`: A comma-separated list of EKS clusters to manage inside CloudControl (only for the aws flavour). For each cluster, give the cluster name. If you need to assume an ARN role, add it to the cluster name with an additional `|`. For example: `myekscluster|arn:aws:iam::32487234892:sample/sample`
If you additionally need to assume a role before fetching the EKS credentials, add that role, prefixed with an `@`: `myekscluster|arn:aws:iam::4327849324:sample/sample@arn:aws:iam::specialrole`

(tanzu flavour) This generates the script `k8s-relogin`, which allows you to recreate the Kubernetes credentials.
Installs additional packages into the container
Installs Packer
Runs commands inside the shell when entering the CloudControl container
Installs sops
Installs stern, a multi-pod and container log tailing tool for Kubernetes
Installs the Tanzu CLI
Installs and configures Terraform. Mount your Terraform files into the `volumes:` section of docker compose like this:

```yaml
    (...)
    volumes:
      - "[Path to Terraform files]:/terraform"
```

The default mount target is /terraform. If something different than the default is used, the volume target needs to be adapted to the same directory.

If you used the browser-based login in gcloud, you'll probably need to authenticate using the application-default login with the gcloud CLI by running `gcloud auth application-default login`.
Installs Terragrunt
Configures the container's timezone
Installs the Velero kubernetes backup CLI
Installs Vim
Installs the YAML parser and processor yq
CloudControl supports a decoupled development of features and flavours.
If you're missing a feature, just fork this repository, copy the feature template from features/.template into a new subfolder, check out the comments in the example files, and modify them to your needs.
These files make up a feature:
- `feature.yaml`: A descriptor for your feature with a title, a description and configuration notes
- `install.sh`: A shell script that is run by CloudControlCenter and should install everything you need for your new feature
- `motd.sh`: (optional) If you want to show some information to the users upon login, put it here.

And optional, but recommended: integration tests in a `.goss` folder.
If you need another flavour (aka cloud provider), add a new subdirectory under "flavour" and add a flavour.yaml describing your flavour the same way as a feature. For the rest of the files, please check out existing flavours for details. Please include a sample configuration for your flavour to make it easier for other people to work with it.
The `feature.yaml` is a descriptor file used to automatically create this documentation. It includes a "configuration" key that should be used to inform the user of ways to configure the feature. Usually, this is done using environment variables.
It is recommended to use prefixed variables for the feature. For example, when creating a feature called "myfeature", use environment variables prefixed with "MYFEATURE_" to circumvent the problem of accidentally sharing configuration variables with another feature or a flavour.
Additionally, please add enough information to the configuration array of your feature so the user knows what values to set for the specific environment variables. If a configuration option is required, please state this in the environment declaration using (required) after its name, check in your installation script whether the variable is set, and abort accordingly if it is not.
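A minimal sketch of such a check in `install.sh` (the variable name is, of course, just an example):

```bash
# Abort the installation if a required configuration variable is missing.
if [ -z "${MYFEATURE_IMPORTANT_SETTING}" ]; then
  echo "MYFEATURE_IMPORTANT_SETTING (required) is not set" >&2
  exit 1
fi
```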
If your feature needs a version specification, set "requiresVersion" to true in the feature descriptor. This will enable the use of an environment variable `[FEATURE NAME]_VERSION`. This variable is also filled if the CloudControl user uses the FEATURES-variable approach to enable features.
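Putting the pieces together, a `feature.yaml` for a hypothetical feature "myfeature" might look roughly like this sketch; the exact schema of the configuration entries may differ, so check the template in features/.template for the authoritative format:

```yaml
title: "myfeature"
description: "Installs My Tool"
configuration:
  - "MYFEATURE_IMPORTANT_SETTING (required): Sets an important option for My Tool"
requiresVersion: true
```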
In your install script, you can source the utils library: `. /feature-installer-utils.sh`
Installation scripts usually echo out some kind of progress, execute something and have to check for errors. The command `execHandle` does all this in a one-liner:

`execHandle "Progress message" command`
This will print out "Progress message...", run the command and if it exits with a non-zero status code, it will print the output of the command and exit with status code 1.
Using this makes installer scripts much shorter and easier to maintain.
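As an illustration, an install script using the library could look roughly like this (the tool name and download URL are purely illustrative; `MYTOOL_VERSION` assumes a feature with requiresVersion enabled):

```bash
#!/bin/bash
# Source the helper functions provided by CloudControl
. /feature-installer-utils.sh

# Download and install a fictional tool; execHandle prints the progress message,
# runs the command and aborts with the command output if it fails
execHandle "Downloading mytool" curl -fsSL -o /usr/local/bin/mytool \
  "https://example.com/mytool/${MYTOOL_VERSION}/mytool-linux-amd64"
execHandle "Making mytool executable" chmod +x /usr/local/bin/mytool
```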
To validate that your feature installs correctly on a target flavour, goss is used. goss can test various things you specify in a YAML-formatted file. The CloudControl test runner expects a subdirectory `goss` in your feature, which can hold these files:
`{{ if eq .Env.FLAVOUR "aws" }}(... aws tests){{ end }}`
This subdirectory is mounted as /goss-sup into your container. Additionally, another directory which contains flavour-specific supplemental data (such as access keys) is mounted as /flavour. You can use these two directories in environment variables and tests to build the required environment for your test.
Warning: When doing an integration test of all features, the test runner copies the contents of all supplemental paths into one directory. Make sure to provide unique filenames for supplemental files for your feature, so they're not overwritten by a file from another feature in that stage.
The test runner runs tests from all subdirectories starting with "goss", so you can add multiple directories to test multiple variations of your feature.
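For orientation, a goss test for a feature that installs a single binary might look roughly like this (the file name and command are assumptions based on goss conventions, not taken from an actual feature):

```yaml
# goss/goss.yaml
command:
  "mytool --version":
    exit-status: 0
    stdout:
      - "{{ .Env.MYTOOL_VERSION }}"
```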
If your feature only supports specific flavours, add a `test` key to your `feature.yaml` and, under that, a `flavours` key listing the supported flavours. In this example, the feature "some flavour" will only be tested with the gcloud flavour:
```yaml
title: "some flavour"
test:
  flavours:
    - gcloud
```
Build a flavour container image with the base of the repository as the build context like this:

`build.sh [tag] [flavour]`

To build all flavours with the same tag, use

`build.sh [tag]`
To run the test suite for a specific flavour, you need to create a local directory that holds flavour-specific data (e.g. keys for authentication) and optionally an .env-file with flavour-specific environment variables. This is called a "testbed" directory.
First, you need to compile the test runner:
```bash
docker run --rm -e GOOS=[os, e.g. darwin, linux, windows] -e GOARCH=[architecture, e.g. arm64, amd64] -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.19-alpine go build -o test-features cmd/tests/test-features
```
After that, download the latest goss binary for the target architecture you will test (linux/amd64 or linux/arm64) from the Goss site and put it somewhere local.
Once that is done, run the tests as follows:
```bash
./test-features -f [flavour] -i [image:tag] -t [path to testbed directory] -p [test architecture, e.g. linux/amd64] -g [path to the goss binary]
```
This will run the tests of all features that supply a test suite one by one and, if all succeed, will test all features together for integration testing. Check out `test-features --help` for other options.
If you'd like to test whether a specific configuration fails to install, create a goss subdirectory and only put a file named `.will-fail` into it. When the test runner encounters such a file, it will check that CloudControl fails to complete the initialization. You can add a regular expression pattern to `.will-fail` to test whether the container or command output matches it.
As we're dealing with a lot of moving targets in the features, sometimes a test might not be reliable. For these situations, we support a `.might-fail` file. Just add it as a text file to the test suite subdirectory and put some text into it describing the problem. Failed tests then won't fail the test suite; instead, the description will be shown.
To check why a test failed, use the `-l` parameter to enable debug logging. Additionally, you can use the `-n` parameter to select a specific feature to test and the `-x` parameter to stop testing as soon as one test fails.
When a test fails, the test container will not be removed automatically (unless you specified the -c parameter), so you can inspect the failing container as well.
To rebuild this documentation, first compile the documentation maker:
```bash
docker run --rm -e GOOS=[os, e.g. darwin, linux, windows] -e GOARCH=[architecture, e.g. arm64, amd64] -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.19-alpine go build cmd/doc/mkdoc.go
```
Then run it to rebuild README.md based on README.md.gotmpl:
`./mkdoc`
This repository includes different workflows to test and automate PRs and Pushes. The following workflows are used:
```mermaid
flowchart TD
    A[Every Push] --> B[Update documentation]
    D[Every PR] --> E[Check commits]
    D --> F[Run Testsuite]
    G[Push to Main] --> H[Generate Changelog and Release]
    G --> C
    I[Push to Develop] --> C[Build images]
    click B "https://github.com/dodevops/cloudcontrol/blob/develop/.github/workflows/docs.yml" "Docs workflow"
    click C "https://github.com/dodevops/cloudcontrol/blob/develop/.github/workflows/image.yml" "Image workflow"
    click E "https://github.com/dodevops/cloudcontrol/blob/develop/.github/workflows/check_commits.yml" "Check workflow"
    click F "https://github.com/dodevops/cloudcontrol/blob/develop/.github/workflows/test.yml" "Test workflow"
    click H "https://github.com/dodevops/cloudcontrol/blob/develop/.github/workflows/release.yml" "Release workflow"
```