The purpose of this repo is to demonstrate several OpenShift and upstream Kubernetes concepts through reference examples that can be used and expanded on.
Concepts demonstrated:
Optional:
If you have an OpenShift cluster up, the argocd CLI installed, and are authenticated to the oc CLI, just run the installation script below. The script itself contains more commented information on the steps and commands if you prefer to run through this demo manually.
./runme.sh
If you would like to run the demo without any ArgoCD components:
./no_argocd_runme.sh
This runs the demo from the static files in the directories instead of deploying applications from ArgoCD.
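Under the hood this amounts to applying the static manifests directly with oc. A sketch of the idea (see no_argocd_runme.sh for the exact files and order; the path below is one example from this repo):

# Apply the static Kafka cluster custom resources shipped in this repo
oc apply -f strimzi-operator/deploy/crs/ -n myproject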
This script will:
Apache Kafka is a highly scalable and performant distributed event streaming platform, great for storing, reading, and analyzing streaming data. Originally created at LinkedIn, the project was open sourced to the Apache Foundation in 2011. Kafka enables companies to move from traditional batch processing to more real-time streaming use cases.
The diagram above is a common example of many fast-data (streaming) solutions today. With Kafka as a core component of your architecture, multiple raw data sources can pipe data into Kafka, where it can be analyzed in real time by tools such as Apache Spark and persisted or consumed by other microservices.
An Operator is a method of packaging, deploying, and managing a Kubernetes application. A Kubernetes application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl tooling. With Operators, the Kubernetes community gains a standardized way to build, deploy, operate, upgrade, and troubleshoot Kubernetes applications.
The full list of Operators can be found on operatorhub.io, the home for the Kubernetes community to share Operators.
Today we will be using the strimzi.io Kafka Operator. Strimzi makes it easy to run Apache Kafka on OpenShift or Kubernetes.
Strimzi provides three operators:
Cluster Operator: Responsible for deploying and managing Apache Kafka clusters within an OpenShift or Kubernetes cluster.
Topic Operator: Responsible for managing Kafka topics within a Kafka cluster running within an OpenShift or Kubernetes cluster.
User Operator: Responsible for managing Kafka users within a Kafka cluster running within an OpenShift or Kubernetes cluster.
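For reference, below is a minimal sketch of the kind of custom resource the Cluster Operator reconciles, wrapped in a heredoc so it can be applied directly. The cluster name my-cluster, the ephemeral storage, and the apiVersion are assumptions; the API group/version and available fields vary by Strimzi release, and this demo's actual cluster definition lives in strimzi-operator/deploy/crs/.

cat <<'EOF' | oc apply -n myproject -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster          # hypothetical name; this demo's CR is in strimzi-operator/deploy/crs/
spec:
  kafka:
    replicas: 3             # number of Kafka brokers
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral       # demo-grade storage; use persistent-claim for real deployments
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}       # enables the Topic Operator
    userOperator: {}        # enables the User Operator
EOF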
A Kubernetes Operator based on the Operator SDK for creating and managing Grafana instances.
The Operator is available on Operator Hub.
It can deploy and manage a Grafana instance on Kubernetes and OpenShift. The following features are supported:
Why Grafana?
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
Why Argo CD?
Red Hat CodeReady Workspaces is a developer workspace server and cloud IDE. A workspace is defined as the project code files and all of the dependencies necessary to edit, build, run, and debug them. Each workspace has its own private IDE hosted within it. The IDE is accessible through a browser, which downloads the IDE as a single-page web application.
Red Hat CodeReady Workspaces provides:
Why CodeReady Workspaces?
By default, the demo will deploy an example IoT Temperature Sensors Demo using ArgoCD. This demo deploys a consumer-facing portal that collects temperature data from simulated IoT devices and processes it.
This demo creates two topics. The first, named iot-temperature, is used by the device simulator for sending temperature values and by the stream application for reading and processing those values. The second is the iot-temperature-max topic, where the stream application puts the max temperature value processed in the specified time window; this value is then displayed in real time on the consumer-facing dashboard, both in the gauge charts and in the log of incoming messages.
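Since the Topic Operator surfaces these topics as KafkaTopic custom resources, you can inspect them directly. A quick check, assuming the demo's myproject namespace:

# Show the full KafkaTopic definitions managed by the Topic Operator
oc get kafkatopic iot-temperature -n myproject -o yaml
oc get kafkatopic iot-temperature-max -n myproject -o yaml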
Check out the source repository for this IoT demo for further detail.
As part of the provided Grafana dashboards, you can also view more Kafka-specific metrics for the IoT demo by filtering by the iot-temperature or iot-temperature-max topics.
Here you can see metrics such as:
By default, the demo will deploy an example Strimzi load-testing demo using ArgoCD. This demo creates two topics, my-topic1 and my-topic2, and deploys CronJobs and Jobs that produce to these topics. You can leverage GitHub and ArgoCD to increase producer load, as well as watch the logs of messages from the consumers.
By default, the CronJobs will have the following characteristics:
By default, the Jobs will have the following characteristics:
You can visualize the dynamic job creation through the Pods/Jobs tab in the Openshift Console as well as through the Grafana Dashboards provided.
Navigate to the logs of a consumer to view incoming messages
oc logs -n myproject kafka-consumer1
oc logs -n myproject kafka-consumer2
A single Kafka topic can also handle many producers sending many different messages to it. To demonstrate this, you can look at job1.yaml and job2.yaml:
$ cat job1.yaml
<...>
kafka-producer-perf-test --topic my-topic1 --num-records 2500000 --record-size 5
$ cat job2.yaml
<...>
kafka-producer-perf-test --topic my-topic1 --num-records 2500000 --record-size 10
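If you want to generate extra load by hand rather than through the Jobs, a one-off producer pod along these lines should work. This is a sketch: the Strimzi image tag and the kafka-cluster-kafka-bootstrap service name are assumptions, so match them to your cluster:

# Run an ad-hoc perf-test producer against my-topic1, then clean up the pod
oc run kafka-producer-perf -n myproject -ti --rm=true --restart=Never \
  --image=quay.io/strimzi/kafka:latest-kafka-3.7.0 -- \
  bin/kafka-producer-perf-test.sh --topic my-topic1 \
  --num-records 100000 --record-size 10 --throughput 1000 \
  --producer-props bootstrap.servers=kafka-cluster-kafka-bootstrap:9092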
Navigate back to the logs of kafka-consumer1 and you should see two streams of different record sizes being consumed on my-topic1: the 5-character payloads from job1 and the 10-character payloads from job2. An example output is below:
$ oc logs kafka-consumer1 -n myproject
SSXVN
SSXVN
SSXVN
SSXVN
SSXVN
SSXVNJHPDQ
SSXVNJHPDQ
SSXVNJHPDQ
SSXVNJHPDQ
Navigate to the OpenShift UI and walk through all of the orchestration of pods, jobs, monitoring, resource consumption, etc.
If you are using OpenShift 4, you can also see additional cluster-level metrics for pods, for example for our Kafka broker kafka-cluster-0.
As part of the provided Grafana dashboards, you can also view more Kafka-specific metrics for the strimzi-loadtest demo by filtering by the my-topic1 or my-topic2 topics.
Navigate back to the Grafana UI to see Kafka/ZooKeeper-specific metrics collected by Prometheus and how the Jobs that we deployed in our demo can be visualized in real time. Select and filter the topic in order to see specific metrics for the strimzi-loadtest demo.
Here you can see metrics such as:
By default, this demo will deploy OpenShift CodeReady Workspaces as well as a pre-configured workspace containing all of the repositories from this demo to work on. You can also connect the IDE to your own GitHub account, so that if you make any changes to the repos you can push changes and open pull requests. This is a very powerful feature because it removes the need to develop on a local machine first. It opens up many opportunities for efficiency and productivity gains, because development happens and is tested on the same platform it runs on. It also adds a layer of security for large organizations that want to protect their IP.
The first step is to register a new user: fill in the form with any information you desire and log in as the user you create.
Once you create your user, you should see a workspace automatically being created for you. This is a feature of CodeReady Workspaces that helps make a workspace portable by using a devfile. In our case we are using the /f?url= API in order to create a workspace from a publicly accessible standalone devfile, which lives in a public GitHub repository.
If you take a look at the runme.sh script, you can see how this workspace is instantiated:
CHE_HOST=$(oc get routes -n ${CODEREADY_NAMESPACE} | grep codeready-codeready | awk '{ print $2 }')
open http://${CHE_HOST}/f?url=${CODEREADY_DEVFILE_URL}
The CodeReady workspace provided has a full-featured, CLI-integrated IDE that you can use to continue the demonstration. First it is important to log in; this can be done with the oc login command, which can be found in the link at the top right of the main OpenShift Dashboard. The oc login command will look similar to the one below:
oc login --token=vekO8irE5sCkFKdHfMPW4eDcD40200S7t9aCopEGQfw --server=https://api.strimzi-demo-cluster.redhat.com:6443
Once logged in, you can install/uninstall/re-run all components of this demo as if you were using your own local machine. For example, you can remove and re-create the iot-demo app:
$ oc delete -f argocd/iot-demo.yaml
application.argoproj.io "iot-demo" deleted
$ oc create -f argocd/iot-demo.yaml
application.argoproj.io/iot-demo created
Check back in your ArgoCD UI or the OpenShift UI to see deployment changes related to the iot-demo app.
See the CodeReady Workspaces 2.0 End User Guide for more official documentation on what you can do with CodeReady workspaces
Log in to the ArgoCD console and navigate to the iot-demo application:
username: admin
password: secret
Here you should see the existing ArgoCD applications: the IoT demo application as well as the strimzi-loadtest application.
If you select an application, you should see its topology and more details.
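If you prefer the CLI, the same information is available through the argocd client (assuming you are logged in with the credentials above):

# List all applications, then show the sync/health details of iot-demo
argocd app list
argocd app get iot-demo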
By default, the repo is set up to deploy the demo app based on the repos below.
If you want to demonstrate git push driving continuous delivery, fork this repository and redirect the application to your own personal GitHub. An example of doing so with the iot-demo app is below, but you can fork any of the repositories above if you want to demonstrate CD with that component. Only one fork is needed to effectively show continuous delivery in action.
First uninstall the existing iot-demo app deployment:
oc delete -f argocd/iot-demo.yaml -n myproject
Next, add your repo using the argoCD CLI
argocd repo add <GITHUB_REPO_URL_HERE>
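Note that argocd repo add requires the CLI to be logged in. A sketch, assuming the Argo CD server route is named argocd-server in the argocd namespace (confirm with oc get routes -n argocd):

# Look up the Argo CD server hostname, then log in with the admin credentials above
ARGOCD_HOST=$(oc get route argocd-server -n argocd -o jsonpath='{.spec.host}')
argocd login ${ARGOCD_HOST} --username admin --password secret --insecure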
Then set the repoURL variable in the argocd/iot-demo.yaml manifest to point at your own GitHub URL before re-deploying the demo application:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: iot-demo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: <YOUR_GITHUB_REPO_URL_HERE>
Redeploy the modified application to ArgoCD:
oc create -f argocd/iot-demo.yaml -n myproject
Now you can make corresponding changes in the IoT GitHub repo, such as increasing the replicas in device-app.yml from 30 to 50:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: device-app
  labels:
    app: iot-demo
spec:
  replicas: 30
<...>
Push your changes to GitHub and within minutes you should automatically see the desired changes reflected in your deployments.
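The push itself is plain git. A minimal sketch, assuming you are at the root of your fork and its default branch is master:

# Commit the replica change and push it so ArgoCD picks it up on its next sync
git add device-app.yml
git commit -m "Scale device-app from 30 to 50 replicas"
git push origin master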
$ oc get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
device-app 50/50 50 50 12m
You will also see the number of devices reflected in the IoT consumer app dashboard
List all Kafka topics:
oc get kafkatopic
To scale your Kafka cluster up, add a broker by using the command below and changing replicas: 3 to replicas: 4 under the Kafka broker spec.
Note: Command below only if you are deploying without argoCD. If using argoCD, use git as your source.
oc edit -f strimzi-operator/deploy/crs/kafka-cluster-3broker.yaml -n myproject
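If you are comfortable patching in place instead of editing interactively, a merge patch achieves the same thing. This assumes the Kafka custom resource is named kafka-cluster; confirm with oc get kafka -n myproject:

# Bump the broker count from 3 to 4; the Cluster Operator reconciles the change
oc patch kafka kafka-cluster -n myproject --type merge \
  -p '{"spec":{"kafka":{"replicas":4}}}'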
Check out the Strimzi documentation for additional detail.
Should you need to log in to Grafana, use the credentials root/secret.
List Grafana dashboards
oc get grafanadashboards
List Grafana datasources
oc get grafanadatasources
Check out the integr8ly documentation for additional detail.
./uninstall.sh
Note: If the uninstall hangs, just re-run the script