Quarkus Primaza Application - POC

Table of Contents

Introduction

Modern runtime applications designed to be cloud native must also be able to connect to backend systems (SQL, NoSQL, broker, etc.), just as applications running on physical or virtual machines do.

To access such a system, different resources must be configured (e.g. a Datasource) and different parameters declared (e.g. JDBC URL, username, password, database name). For Spring Boot or Quarkus runtimes, for example, this is done in the application.properties configuration file, which can configure the connection to a SQL database.

To avoid hard-coding such parameters, a Kubernetes Secret is most often used to pass the needed information as a list of key/value pairs:

```yaml
type: postgresql
provider: bitnami
database: fruits_database
uri: postgresql:5432
username: healthy
password: healthy
```
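For instance, such a secret could be declared as follows (the secret name and all values here are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: fruits-database-secret
type: Opaque
stringData:              # plain-text values; Kubernetes base64-encodes them on creation
  type: postgresql
  provider: bitnami
  database: fruits_database
  uri: postgresql:5432
  username: healthy
  password: healthy
```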

This approach is subject to several problems:

Primaza initiative

The Primaza name comes from the Greek word πρυμάτσα, which refers to a stern line used to tie boats to the dock.

Primaza aims to support the discoverability, life cycle management and connection of services running in Kubernetes (or outside of it) with runtime applications.

Primaza introduces new concepts supporting such a vision:

How it works

To bind a service to a runtime, a Claim CR must be created. This claim contains the name of the service, its version and, optionally, some additional parameters such as a role or target environment.
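As an illustration, such a Claim CR might look like the sketch below. The `apiVersion`, `kind` and field names here are assumptions derived from the description above, not the actual CRD schema:

```yaml
# Hypothetical Claim CR sketch -- field names are illustrative
apiVersion: primaza.io/v1alpha1
kind: Claim
metadata:
  name: fruits-claim
spec:
  serviceName: postgresql   # name of the service to claim
  version: "14"             # requested service version
  # optional parameters
  role: owner
  environment: dev
```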

When the controller detects such a new Claim CR, it populates a request and calls the Primaza Claim REST endpoint.

Based on the information received, Primaza checks whether a match exists between the claim and a registered service, and discovers it on the target cluster (dev, test, etc.).

To determine whether a Kubernetes service is available, Primaza uses the service endpoint definition, more specifically the protocol:port of the service to be watched.
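The kind of reachability check this implies can be sketched as a simple TCP probe; the function below is purely illustrative, not Primaza's actual implementation:

```shell
# Illustrative TCP reachability probe against a service's host:port.
check_service() {
  local host=$1 port=$2
  # bash's /dev/tcp pseudo-device attempts a TCP connection;
  # the function returns 0 only if the port accepts the connection
  timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}
```

For example, `check_service postgresql 5432` would succeed only when the PostgreSQL service endpoint is reachable from the caller.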

Next, the credentials associated with the service are retrieved locally or from a secret store, and a secret is created containing the following information:

```text
type     // Service type: postgresql, mysql
provider // Service provider: bitnami, etc.
uri      // Kubernetes DNS name and port
username // User to be used to access the service
password // Password to be used to access the service
database // (optional) Database name to connect to a SQL database
```

The secret is created in the namespace of the application, and a volume is added to the application to mount the secret following the workload projection convention.
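A minimal sketch of what this projection could look like in the application's Deployment, assuming a binding named `fruits-db` (names are illustrative; under the workload projection convention each secret key becomes a file below `$SERVICE_BINDING_ROOT/<binding-name>`):

```yaml
# Illustrative Deployment fragment projecting the binding secret into the workload
spec:
  containers:
    - name: fruits
      env:
        - name: SERVICE_BINDING_ROOT   # root directory the runtime scans for bindings
          value: /bindings
      volumeMounts:
        - name: fruits-binding
          mountPath: /bindings/fruits-db
          readOnly: true
  volumes:
    - name: fruits-binding
      secret:
        secretName: fruits-db-secret   # secret created by Primaza
```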

How to play/demo

To use Primaza, a couple of things must first be performed:

Running the application locally

You can run the Quarkus Primaza application in dev mode using the command:

```shell
cd app
./mvnw compile quarkus:dev
```

The command launches the runtime at the following address: http://localhost:8080, and also starts different containers: a database (H2) and a Vault secret engine, provided your Docker or Podman daemon is running locally!

You can discover the Quarkus Dev Services and injected configuration by pressing the `c` key within your terminal.

Next, follow the instructions of the Demo time section :-)

Using Primaza on a k8s cluster

In order to use Primaza on Kubernetes, you first need to set up a cluster (kind, minikube, etc.) and to install an ingress controller. You can use the following script to install a Kubernetes cluster locally using kind:

```shell
curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/kind.sh" | bash -s install
```

Remark: To see all the options proposed by the script, use the command `curl -s -L "https://raw.githubusercontent.com/snowdrop/k8s-infra/main/kind/kind.sh" | bash -s -h`

Once the cluster is up and running, install Vault using the script `./scripts/vault.sh`. We recommend using this script as it performs the different steps needed after the Vault installation, such as:

Note: If the creation of the Vault pod takes more than 60s because the container image must be downloaded, the process will stop. In this case, remove the Helm chart with `./scripts/vault.sh remove` and repeat the operation.

Tip: Pay attention to the messages displayed in the terminal, as they tell you how to get the root token and where it is stored, where to access the keys, etc.!

We can now install Crossplane and its Helm provider:

```shell
./scripts/crossplane.sh
```

Tip: Script usage is available using the `-h` parameter.

Create the `primaza` namespace:

```shell
kubectl create namespace primaza
```

Set the VAULT_URL variable to let Primaza access the storage engine using the Kubernetes DNS service name (`<service>.<namespace>:<port>`):

```shell
export VAULT_URL=http://vault-internal.vault:8200
```

Next, deploy Primaza and its PostgreSQL DB using the following Helm chart:

```shell
helm install \
  --devel \
  --repo https://halkyonio.github.io/helm-charts \
  primaza-app \
  primaza-app \
  -n primaza \
  --set app.image=<CONTAINER_REGISTRY>/<ORG>/primaza-app:latest \
  --set app.host=primaza.${VM_IP}.nip.io \
  --set app.envs.vault.url=${VAULT_URL}
```

Tip: When the pod has started, you can access Primaza using its ingress host URL: `http://primaza.<VM_IP>.nip.io`

If you prefer to install everything all in one go, use our bash scripts on a kind k8s cluster:

```shell
VM_IP=<VM_IP>
export VAULT_URL=http://vault-internal.vault:8200
export PRIMAZA_IMAGE_NAME=kind-registry:5000/local/primaza-app
$(pwd)/scripts/vault.sh
$(pwd)/scripts/crossplane.sh
$(pwd)/scripts/primaza.sh build
$(pwd)/scripts/primaza.sh localdeploy
```

Note: If you prefer to use the Helm chart pushed to the Halkyon repository, don't use the `build` and `localdeploy` parameters.

And now, you can demo it ;-)

Demo time

To play with Primaza, you can use the following scenario:

Everything is in place to claim a Service using the following commands:

Use cases

This section describes different use cases that you can play manually on top of a k8s cluster where:

### Service discovered

```shell
helm uninstall postgresql -n db
kubectl delete pvc -l app.kubernetes.io/name=$RELEASE_NAME -n db
```

```shell
helm install $RELEASE_NAME bitnami/postgresql \
  --version $VERSION \
  --set auth.username=$DB_USERNAME \
  --set auth.password=$DB_PASSWORD \
  --set auth.database=$DB_DATABASE \
  --create-namespace \
  -n db
```


- Open the primaza `applications` screen and next to the line of the `fruits` application, click on the claim button
- Create a new claim
- Wait a few moments till the status is `bound` and open the ingress URL

### Service deployed using Crossplane

- Open the primaza `services catalog` screen and click on the `installable` checkbox of the service `postgresql`
- If not yet done, specify the Helm repo `https://charts.bitnami.com/bitnami`, chart name `postgresql` and version `11.9.13` to be deployed
- Next, open the primaza `applications` screen and next to the line of the `fruits` application, click on the claim button
- Create a new claim
- Wait a few moments till the status is `bound` and open the ingress URL