Several design goals were set in place before this started, and they have evolved somewhat as the work has gone on. The roles should use the k8s module, except where it is impractical, to best support declarative configuration and enable fault tolerance and recovery from failures. Where another module (such as shell) is used instead, effort should be made to make it idempotent and report appropriate status back to Ansible. (Any failures at all due to network conditions, timing problems, etc. should be 100% resolvable by running the exact same playbook again without introducing new errors.)
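In practice, that means a failed run should be recoverable simply by repeating it; for example (using run.sh, which is described below):
./run.sh devsecops   # suppose this fails partway through due to a transient network problem
./run.sh devsecops   # re-running the exact same playbook should converge without introducing new errors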
Right now, run.sh is the primary mechanism for executing the playbooks. It provides the appropriate context and control for chaining playbooks together locally, as well as ensuring that the prerequisites for the roles and playbooks are satisfied on your system. You can use run.sh on unsupported platforms by ensuring the dependencies are met before running it, or you can use it on a supported platform to have it do that work for you.
run.sh was developed on Fedora 30 and 31, and all recent testing of it has been done on Fedora 32. The most important part of satisfying dependencies automatically is that you have dnf available and a Fedora-like package naming convention. This means that it should operate as expected on RHEL 8, but this has not been tested.
The requirements for running the playbooks and roles have been mostly consolidated into runreqs.json, and the hashmap should be relatively self-explanatory. If the binaries listed are in your $PATH, run.sh will not attempt to install them. If they are not, run.sh will attempt to install them using dnf with sudo. This enables running on arbitrary alternative *NIX platforms, as long as the binaries are in your path. The absolute most basic requirements are python and jq; they are not included in runreqs.json, as they will not change based on the workshop content, but they must exist prior to any other dependency setup. run.sh will attempt to install them both, as well as pip in user mode, if they are not available in $PATH.
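As a rough illustration of that check, and assuming the dnf key of runreqs.json is a flat list of binary names (the exact structure may differ), you could verify the binaries yourself with something like:
for bin in $(jq -r '.dnf[]' runreqs.json); do
  command -v "$bin" >/dev/null || echo "missing: $bin"
done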
To run the playbooks yourself, using ansible-playbook and without run.sh, jq is not required, but the other binaries in the dnf key as well as the Python libraries in the pip key of runreqs.json are all required.
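For example, a direct invocation with the default file names might look like the following (the explicit -e flags are shown for clarity; with the default names the vars files may already be picked up by the playbooks):
ansible-playbook playbooks/devsecops.yml -e @vars/common.yml -e @vars/devsecops.yml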
This should work even on a Mac!
run-container.sh has been developed to use the included Dockerfile to run the playbooks and roles inside a RHEL 8 UBI container image. This means you can use run-container.sh to package a new container image on the fly with your changes to the repository, satisfying dependencies, and then map tmp and vars into the container. To enable multiple clusters being run with multiple containers, run-container.sh requires some alternative variables to be set.
usage: run-container.sh [-h|--help] | [-v|--verbose] [(-e |--extra=)VARS] \
(-c |--cluster=)CLUSTER [(-k |--kubeconfig=)FILE] \
[[path/to/]PLAY[.yml]] [PLAY[.yml]]...
You should specify -c CLUSTER or --cluster=CLUSTER to define a container-managed cluster with a friendly name of CLUSTER. In this case, the container images will be tagged as devsecops-CLUSTER:latest and, when executed, vars will be mapped in from vars/CLUSTER/, where they are expected to have their default names of common.yml, devsecops.yml, etc. as needed. In this configuration, if you have a local ~/.kube/config with a cached login (for example, as opentlc-mgr), you should pass the path to that file with -k ~/.kube/config or --kubeconfig=~/.kube/config. run-container.sh will copy that file into the tmp/ directory in the appropriate place for your cluster, and kubeconfig should not be changed from the default of {{ tmp_dir }}/auth/kubeconfig in vars/CLUSTER/common.yml. Because run-container.sh stages the kubeconfig in this way, the cached logins from the playbooks will not back-propagate to your local ~/.kube/config, so follow-on execution of oc or kubectl on your host system will not respect any changes made in the container unless you use the kubeconfig in tmp/.
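If you do want your host-side oc or kubectl to see those logins, point them at the staged kubeconfig; for example (the tmp/ subdirectory below is a placeholder for whatever directory run-container.sh created for your cluster):
export KUBECONFIG="$PWD/tmp/<your-cluster-directory>/auth/kubeconfig"
oc whoami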
For example, let's suppose you wanted to use the friendly name rhpds for a cluster. You create a directory in vars/ named rhpds and copy the necessary examples into it, while renaming them:
mkdir -p vars/rhpds
cp vars/common.example.yml vars/rhpds/common.yml
cp vars/devsecops.example.yml vars/rhpds/devsecops.yml
Then edit vars/rhpds/common.yml so that cluster_name and openshift_base_domain match the email you got from RHPDS. You will additionally have to set oc_cli to the commented-out value of /usr/local/bin/oc for the container workflow if you are not provisioning. You should not change kubeconfig in common.yml for the container workflow.
You can edit the rest of common.yml and devsecops.yml to suit your needs, in their normally documented fashion, for playbook execution inside the container.
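As a quick sanity check of the values you just changed (variable names as referenced above), you can do something like:
grep -E 'cluster_name|openshift_base_domain|oc_cli|kubeconfig' vars/rhpds/common.yml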
If you have not already, log in to your cluster locally before executing the script. To do that, and then build and execute the container image, you can run:
oc login -u opentlc-mgr -p <PASSWORD_FROM_EMAIL> <cluster_name>.<openshift_base_domain> # Make sure you use the values from your email from RHPDS, not these placeholders
oc whoami # You should see `opentlc-mgr` here
./run-container.sh -c rhpds -k ~/.kube/config devsecops # ~/.kube/config is the default location for the kubeconfig for kubectl and oc, replace if yours is different
# ^ ^ ^-- this is the name of the playbook you want to execute. It should be in playbooks/devsecops.yml in this case
# | \---- This is telling run-container.sh to copy the kubeconfig from this location into tmp before running the container image
# \--- this flag lets run-container.sh know what subfolder your vars are in, and what to name the container image
At this point, playbook execution should begin. Before attempting to work on a cluster, you may want to try running the test playbook to ensure you get good feedback from the framework.
You can, of course, use the cluster name command line option to define multiple clusters, each with vars in their own subfolders, and execute any playbook from the project in the container. This means you could maintain vars folders for multiple clusters that you provision on the fly, and provision or destroy them, as well as deploy the devsecops content on them, independently. They will continue to maintain kubeconfigs in their tmp subdirectory, and will all map common.yml, provision.yml, and devsecops.yml dynamically into their vars folder inside the container at runtime. Container images will only be rebuilt when the cache expires or a change has been made, so you can continue to make edits and tweaks on the fly while running this workflow.
Do note that in a podman environment the containers run as your user, without relabeling or remapping, but in a docker environment they run fully privileged. This is more privilege than containers normally get in either environment, and it is there to ensure that the repository files are mappable and editable by the container process as it executes.
Additionally, if you would like to work on just one cluster using the container workflow, you can do any portion of the following to skip having to specify these variables or be prompted for them:
export DEVSECOPS_CLUSTER=rhpds # The cluster name for vars directory and container image name
export AWS_ACCESS_KEY_ID=<YOUR ACTUAL AWS_ACCESS_KEY_ID> # Your actual AWS_ACCESS_KEY_ID, which you would otherwise be prompted for if provisioning/destroying a cluster
export AWS_SECRET_ACCESS_KEY=<YOUR ACTUAL AWS_SECRET_ACCESS_KEY> # Your actual AWS_SECRET_ACCESS_KEY, which you would otherwise be prompted for if provisioning/destroying a cluster
./run-container.sh provision devsecops
# ^---------^----these are just playbook names, like you would normally pass to `run.sh`
For easiest operation, you should create a file at the project root named .aws with the following content:
export AWS_ACCESS_KEY_ID=<your actual access key ID>
export AWS_SECRET_ACCESS_KEY=<your actual access key secret>
It is in .gitignore, so you won't be committing secrets if you make changes.
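If you also want those variables exported in your current shell, you can source the file yourself; a minimal sketch, assuming you created .aws as described above:
. ./.aws
./run-container.sh provision devsecops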
Open a terminal and change into the project directory. Source prep.sh:
cd openshift-devsecops # or wherever you put the project root
. prep.sh
Copy the vars examples and edit them to match your desired environment:
cp vars/common.example.yml vars/common.yml
vi vars/common.yml # Change the appropriate variables
cp vars/provision.example.yml vars/provision.yml
vi vars/provision.yml # Change the appropriate variables
Execute run.sh with the names of the playbooks you would like run, in order:
./run.sh provision
Wait a while. Currently, in my experience, it takes about 35-45 minutes to deploy a cluster.
Open a terminal and change into the project directory. Copy the vars examples and edit them to match your desired environment.
cd openshift-devsecops # or wherever you put the project root
cp vars/common.example.yml vars/common.yml
vi vars/common.yml # Change the appropriate variables
cp vars/devsecops.example.yml vars/devsecops.yml
vi vars/devsecops.yml # Change the appropriate variables
Execute run.sh with the name of the devsecops playbook:
./run.sh devsecops
Wait a while. Currently, in my experience, it takes about 30 minutes to deploy everything by default.
Do all of the above steps for both parts at once.
cd openshift-devsecops # or wherever you put the project root
. prep.sh
cp vars/common.example.yml vars/common.yml
vi vars/common.yml # Change the appropriate variables
cp vars/provision.example.yml vars/provision.yml
vi vars/provision.yml # Change the appropriate variables
cp vars/devsecops.example.yml vars/devsecops.yml
vi vars/devsecops.yml # Change the appropriate variables
./run.sh provision devsecops
Wait a while. Currently, in my experience, it takes about an hour to deploy a cluster and everything by default.
Access the cluster via CLI or web console. If this repo deployed your cluster, the oc client is downloaded into tmp, in a directory named after the cluster, and prep.sh can put that into your path. The web console should be available at https://console.apps.{{ cluster_name }}.{{ openshift_base_domain }}. If you have recently deployed a cluster, you can update kubeconfig paths and $PATH for running binaries with the following:
cd openshift-devsecops # or wherever you put the project root
. prep.sh
prep.sh is aware of multiple clusters and will let you add to PATH and KUBECONFIG on a per-cluster basis in multiple terminals if you would like.
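For example, after sourcing it you can sanity-check what it set for the selected cluster:
. prep.sh
echo "$KUBECONFIG"   # should point at the kubeconfig under tmp/ for that cluster
which oc             # should resolve to the oc client downloaded into tmp/
oc whoami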
If you deployed the cluster with this repo, when you are ready to tear the cluster down, run the following commands from the project root:
cd openshift-devsecops # or wherever you put the project root
. prep.sh
./run.sh destroy
If you are using multiple clusters or otherwise non-default vars file locations, you can specify a common.yml path (e.g. with -e @vars/my_common.yml) to destroy a specific cluster.
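For example, to destroy the hypothetical first cluster from the multi-cluster workflow shown later:
./run.sh destroy -e @vars/common_cluster1.yml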
There are three major playbooks implemented currently:
playbooks/provision.yml
playbooks/devsecops.yml
playbooks/destroy.yml
Additionally, there are three important vars files currently:
vars/provision.yml
vars/devsecops.yml
vars/common.yml
There are a significant number of in-flux roles that are part of building the cluster and workshop content. You should explore individual roles on your own, or look at how the playbooks use them, to understand their operation. The intent for the final release of this repo is that the roles will be capable of being developed and maintained independently, and they may be split into separate repositories, with role dependencies, git submodules, or some combination of the two used to install them from GitHub or another SCM.
This playbook will, given access to AWS keys for an administrator account on which Route53 is managing DNS, provision an OpenShift 4.x cluster using the latest installer for the specified major.minor release. Future plans for this playbook:
This playbook will deploy all of the services to be used in the workshop. First it adjusts the cluster to be ready to accept workshop content, including making the web console available at console.apps.{{ cluster_name }}.{{ openshift_base_domain }}, because console-openshift-console.apps was deemed to be just a bit much. As a rule, it uses Operators for the provisioning and management of all services. Where an appropriate Operator was available in the default catalog sources, those were used; where one doesn't exist, they were sourced from Red Hat GPTE published content. Also as a rule, it tries to stand up only one of each service and provision users on each service. The roles have all been designed such that they attempt to deploy sane defaults in the absence of custom variables, but there should be enough configuration available through templated variables that the roles are valuable outside the scope of this workshop.
The services provided are currently in rapid flux and you should simply look through the listing to see what's applied. For roles to be implemented or changed in the future, please refer to GitHub Issues as these are the tracking mechanism I'm using to keep myself on track.
This playbook will, provided a common.yml, identify whether openshift-install was run from this host and confirm that you would like to remove the cluster. It will then completely tear the cluster down and remove everything from the temporary directory for that cluster.
There are example files that may be copied and changed for the variable files. Where deemed necessary, the variables are appropriately commented to explain where you should derive their values from, and what they will do for you.
You do not have to name the files exactly as shown, as long as you include each vars file with -e on the run.sh or ansible-playbook command line. This means you can name the files differently and deploy multiple clusters at once. A hypothetical multi-cluster deployment workflow could look like this:
cd openshift-devsecops # or wherever you put the project root
. prep.sh
# Deploy cluster 1
cp vars/common.example.yml vars/common_cluster1.yml
vi vars/common_cluster1.yml # Change the appropriate variables
cp vars/provision.example.yml vars/provision_cluster1.yml
vi vars/provision_cluster1.yml # Change the appropriate variables
cp vars/devsecops.example.yml vars/devsecops_cluster1.yml
vi vars/devsecops_cluster1.yml # Change the appropriate variables
./run.sh provision devsecops -e @vars/common_cluster1.yml -e @vars/provision_cluster1.yml -e @vars/devsecops_cluster1.yml
# Deploy cluster 2
cp vars/common.example.yml vars/common_cluster2.yml
vi vars/common_cluster2.yml # Change the appropriate variables
cp vars/provision.example.yml vars/provision_cluster2.yml
vi vars/provision_cluster2.yml # Change the appropriate variables
cp vars/devsecops.example.yml vars/devsecops_cluster2.yml
vi vars/devsecops_cluster2.yml # Change the appropriate variables
./run.sh provision devsecops -e @vars/common_cluster2.yml -e @vars/provision_cluster2.yml -e @vars/devsecops_cluster2.yml
These variables include things that are important for both an RHPDS-deployed cluster and a cluster deployed from this project. They either define where the cluster is for connection, or they define how to deploy and later connect to the cluster. For clusters created with this project, they also indicate how to destroy the cluster.
The primary function of these variables is to provide information necessary to the provision.yml playbook for deployment of the cluster. Future plans for this file align with the future plans for the playbook, intended to enable more infrastructure platforms.
This mostly contains switches to enable or disable workshop services and infrastructure. It's also used right now to control from which GitHub project the various GPTE-built operators are sourced.
I welcome pull requests and issues. I want this to become a valuable tool for Red Hatters at all levels to explore or use for their work, and to be a valuable resource for our partners. If there's something that you think I should do that I'm not, or something that's not working the way you think it was intended, please either let me know or fix it, if you're able. I would love to have help, and as long as we're communicating well via GitHub Issues about the direction that something should go, I won't turn away that help. Please, follow the overall design goals if making a pull request.