tmforum-oda / oda-canvas


Auto deployment of Canvas on k8s cluster and Run the ctk tests #236

Open ajayaggarwal03 opened 6 months ago

ajayaggarwal03 commented 6 months ago

The goal is to achieve automatic deployment of the Canvas on a k8s cluster and to run the CTK tests against the modified Canvas.

Below are the points that should be achieved:

  1. As soon as there is a pull request against the oda-canvas repo (e.g. adding a new feature, modifying an operator, or adding BDD tests), the Canvas should be deployed on a k8s cluster in a namespace.
  2. Once the Canvas is installed, the BDD tests should be triggered against the installed Canvas.
  3. The Cucumber report should be uploaded to the Cucumber portal.
  4. Badges should be added to the oda-canvas repo as well, so they help in approving the pull request.

Please add to or modify the list above if any other step needs to be included.
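For illustration, steps 1-3 could map onto a CI job script along the following lines. This is only a rough sketch: the cluster is assumed to already exist (or be created by the workflow), the chart location is a placeholder, and the report publishing assumes the BDD tests are driven by cucumber-js.

#!/bin/sh
set -e

# 1. deploy the Canvas from the pull-request branch into a test namespace
#    (<path-to-canvas-chart> is a placeholder for the chart in this repo)
helm upgrade --install canvas <path-to-canvas-chart> -n canvas --create-namespace --wait

# 2. trigger the BDD tests against the freshly installed Canvas
cd feature-definition-and-test-kit
npm install
npm start

# 3. the Cucumber report upload to the portal would hook in here, e.g. by
#    enabling report publishing in the cucumber-js configuration with a
#    CUCUMBER_PUBLISH_TOKEN provided as a repository secret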

ferenc-hechler commented 5 months ago

You can run the Canvas tests inside the Kubernetes cluster where the Canvas is deployed. We do it similarly with the code-server image deployed in the cluster.

Here is a Dockerfile that can be placed in the "feature-definition-and-test-kit/" folder:

FROM node:22.2-alpine

ARG KCVERSION=v1.27.11
ARG KCSHA256=7ae327978a1edb43700070c86f5fd77215792c6b58a7ea70192647e0da848e29

ARG HELMVERSION=v3.14.2
ARG HELMSHA256=0885a501d586c1e949e9b113bf3fb3290b0bbf74db9444a1d8c2723a143006a5

# ----- INSTALL packages "curl"
RUN apk update \
    && apk add --no-cache curl

# ----- INSTALL "kubectl" ----- (see https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/ )
RUN curl -Lo /usr/local/bin/kubectl https://dl.k8s.io/release/$KCVERSION/bin/linux/amd64/kubectl \
    && echo "$KCSHA256  /usr/local/bin/kubectl" | sha256sum -c \
    && chmod a+x /usr/local/bin/kubectl

# ----- INSTALL "helm" ----- (see https://github.com/helm/helm/releases )
RUN cd /tmp \
    && curl -Lo helm.tar.gz https://get.helm.sh/helm-${HELMVERSION}-linux-amd64.tar.gz \
    && echo "$HELMSHA256  helm.tar.gz" | sha256sum -c \
    && tar xvzf helm.tar.gz \
    && cp linux-amd64/helm /usr/local/bin/ \
    && chmod a+x /usr/local/bin/helm \
    && rm -rf /tmp/linux-amd64 \
    && rm /tmp/* \
    && cd /

# ----- COPY sources -----
WORKDIR /feature-definition-and-test-kit
COPY . .

# ----- INSTALL "npm packages" -----
RUN cd /feature-definition-and-test-kit \
    && cd identity-manager-utils-keycloak \
    && npm install \
    && cd ../package-manager-utils-helm \
    && npm install \
    && cd ../resource-inventory-utils-kubernetes \
    && npm install \
    && cd .. \
    && npm install

ENV KEYCLOAK_USER=admin 
ENV KEYCLOAK_PASSWORD=adpass 
ENV KEYCLOAK_BASE_URL=http://canvas-keycloak.canvas.svc.cluster.local:8083/auth/ 
ENV KEYCLOAK_REALM=myrealm

CMD npm start

We have created a Docker image from our fork and uploaded it to the Docker registry:

mtr.devops.telekom.de/magenta_canvas/public:canvas-ctk
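Building and pushing such an image boils down to something like the following (illustrative commands, assuming the Dockerfile above is placed in the feature-definition-and-test-kit/ folder):

# build the test-kit image from the folder containing the Dockerfile above
docker build -t mtr.devops.telekom.de/magenta_canvas/public:canvas-ctk feature-definition-and-test-kit/
# push it so the cluster can pull it
docker push mtr.devops.telekom.de/magenta_canvas/public:canvas-ctk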

To run the tests, the following script can be used:

run-incluster-canvas-tests.sh

#!/bin/sh

# create own namespace for running the tests
kubectl create ns canvas-tests --dry-run=client -oyaml | kubectl apply -f -

# create serviceaccount which will be granted cluster-admin permissions
kubectl create serviceaccount -n canvas-tests sa-canvas-tests --dry-run=client -oyaml | kubectl apply -f -
kubectl create clusterrolebinding canvas-tests-cluster-admin-rb --clusterrole=cluster-admin --serviceaccount=canvas-tests:sa-canvas-tests --dry-run=client -oyaml | kubectl apply -f -

# remove old testrun
kubectl delete pod --ignore-not-found=true -n canvas-tests canvas-tests

# run tests; option -it waits until the pod has finished
kubectl run -it -n canvas-tests canvas-tests --image=mtr.devops.telekom.de/magenta_canvas/public:canvas-ctk --overrides='{"spec":{"serviceAccount":"sa-canvas-tests"}}' --restart=Never

# check how many tests passed
kubectl logs -n canvas-tests canvas-tests | grep "passed"

We have tested this in our cluster ihc-dt and all tests passed, except for the Undefined ones:

[screenshots of the test results]

If all tests were successful (no Undefined), I think there would be no error message after the kubectl run. So checking that all tests passed could be achieved by checking the return code, or simply by failing the shell script with set -e.
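A minimal sketch of that hardening, reusing the namespace, pod and image names from the script above; the grep patterns are assumptions about the cucumber output format:

#!/bin/sh
set -e

# ... namespace / serviceaccount / cleanup steps as in run-incluster-canvas-tests.sh ...

# run the tests; with set -e the script aborts if kubectl itself fails
kubectl run -it -n canvas-tests canvas-tests \
    --image=mtr.devops.telekom.de/magenta_canvas/public:canvas-ctk \
    --overrides='{"spec":{"serviceAccount":"sa-canvas-tests"}}' \
    --restart=Never

# the attached run may still exit 0 for Undefined steps, so inspect the logs too
LOGS=$(kubectl logs -n canvas-tests canvas-tests)
# print the summary line; with set -e this also fails the script if nothing passed
echo "$LOGS" | grep "passed"
if echo "$LOGS" | grep -qiE "failed|undefined"; then
    echo "canvas tests reported failed or undefined steps" >&2
    exit 1
fi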

ajayaggarwal03 commented 3 months ago

Hi @brian-burton ,

Currently we are able to achieve the goal of running the Canvas tests through GitHub Actions by using a kind cluster, and we have two ways to trigger the tests against the Canvas:

a. Run the tests from the GitHub runner against the kind cluster (rough sketch below).
b. Run the tests from inside a pod after creating a Docker image. (We would need a public container registry in this case, so the image can be built and then pulled inside the pod.)
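To make the difference concrete, option (a) boils down to something like this; the cluster name and chart path are placeholders, not the final setup:

#!/bin/sh
set -e

# the kind cluster and the test run both live on the GitHub runner,
# so no test image or container registry is needed
kind create cluster --name canvas-ci
helm upgrade --install canvas <path-to-canvas-chart> -n canvas --create-namespace --wait
(cd feature-definition-and-test-kit && npm install && npm start)

# option (b) would instead build the test-kit image (see the Dockerfile in the
# earlier comment), push it to a registry reachable from the cluster, and run
# it as a pod, e.g. with run-incluster-canvas-tests.sh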

Please suggest which would be the best practice to proceed with.

Thanks, Ajay Aggarwal

brian-burton commented 3 months ago

Hi @ajayaggarwal03,

As per our call, I think we should create a new issue for option 2, to enable enterprises to test the Canvas within a defined security perimeter inside a cluster; but for our purposes in the Innovation Hub, and for anyone who forks the repo and runs the tests, option 1 is fine. As the runner and the kind environment all run inside GitHub, that is the security perimeter for our testing, so I don't perceive any significant risk that would require the extra complexity of the container option here.

anshulkumar-tmf commented 2 weeks ago

As per the planning meeting, the "Required for Launch" label is to be removed.