
Doc Bot Hacking Sprint

Title: Doc Bot
Goal: Deploy the RHOAI LLM tutorial and learn the inner workings of the system.
Output: A deployed RAG-based chatbot that you can feed documents to, or at least its backend/API.
Timing: 2 to 3h
Notes: We'll start with the RHOAI Insurance Claim RHPDS demo.

Starting point of this hacking sprint

Chat with your documentation lab from AI on OpenShift.

Objective

This repository exemplifies the RAG pattern as a simple chatbot that can answer questions related to specific dossiers.

If you follow the installation instructions you should be able to deploy a full RAG system that chunks and digests PDF documents related to dossiers, stores them as vectors in Milvus, and answers questions about those documents (by dossier id) using an LLM plus the context retrieved by running the question through the vector database.

Components

Installation

Preparation

If you need to delete a previous Red Hat OpenShift AI installation, go here

Install the following operators prior to installing the OpenShift AI operator (for KServe single-stack serving these are typically the Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh operators):

Installing the Red Hat OpenShift AI operator

If you want to use stable versions, use a stable channel (stable-2.10 in the example below).

cat << EOF | oc create -f -
---
apiVersion: v1
kind: Namespace
metadata:
  name: redhat-ods-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator 
spec:
  name: rhods-operator
  channel: stable-2.10
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
EOF

If, on the other hand, you want to use the latest versions, use the fast channel.

cat << EOF | oc create -f -
---
apiVersion: v1
kind: Namespace
metadata:
  name: redhat-ods-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhods-operator
  namespace: redhat-ods-operator 
spec:
  name: rhods-operator
  channel: fast
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
  startingCSV: rhods-operator.2.12.0
EOF

The previous step creates a manual subscription, which generates an InstallPlan that has to be approved.
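You can inspect the generated plans with:

oc get installplans -n redhat-ods-operator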

CAUTION: There should be only one install plan! If there is more than one, the script below approves the first and deletes the rest.

# List all install plans in namespace redhat-ods-operator, approve the first one
# and delete the rest
INSTALL_PLAN_TO_PATCH=""
for installplan in $(oc get installplans -n redhat-ods-operator -o name); do
  # The first plan found gets approved; remember it and move on
  if [ -z "$INSTALL_PLAN_TO_PATCH" ]; then
    INSTALL_PLAN_TO_PATCH=$installplan
    oc patch $installplan -n redhat-ods-operator --type merge --patch '{"spec":{"approved":true}}'
    continue
  fi
  # Any additional plan is a duplicate; delete it
  echo "Deleting $installplan"
  oc delete $installplan -n redhat-ods-operator
done

Check that the plan was approved:

echo "Install plan is approved? $(oc get installplans -n redhat-ods-operator -o jsonpath='{.items[].spec.approved}')"

Create the Data Science Cluster

The Data Science Cluster object, or DSC, instructs the RHOAI operator on how to install its components. In this case we will leave the default configuration for all the components except for KServe, because its default configuration uses a self-signed certificate and we don't want that for our deployment.

There are three ways you can configure the certificate for KServe: keep the default self-signed one (which we want to avoid here), reuse the OpenShift default ingress certificate, or provide your own:

Using the OpenShift routes certificate

Use this command to create the DSC and use the same certificate your OpenShift cluster is using.

cat << EOF | oc create -f -
---
kind: DataScienceCluster
apiVersion: datasciencecluster.opendatahub.io/v1
metadata:
  name: default-dsc
  labels:
    app.kubernetes.io/name: datasciencecluster
    app.kubernetes.io/instance: default-dsc
    app.kubernetes.io/part-of: rhods-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: rhods-operator
spec:
  components:
    codeflare:
      managementState: Managed
    dashboard:
      managementState: Managed
    datasciencepipelines:
      managementState: Managed
    kserve:
      managementState: Managed
      serving:
        ingressGateway:
          certificate:
            type: OpenshiftDefaultIngress
        managementState: Managed
        name: knative-serving
    modelmeshserving:
      managementState: Managed
    kueue:
      managementState: Managed
    ray:
      managementState: Managed
    workbenches:
      managementState: Managed
EOF
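After the DSC is created this way, the operator should materialize a serving certificate secret in istio-system; a quick sanity check (the exact secret name can vary by RHOAI version):

oc get secrets -n istio-system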

Using another certificate

In this case, and for the sake of simplicity, we are going to use the same certificate OpenShift is using, but manually.

It may sound silly to use the same certificate you get with the previous step, but it exemplifies the procedure for using whichever certificate you may need.

Instructions extracted from: https://ai-on-openshift.io/odh-rhoai/single-stack-serving-certificate/

Get the name of the secret used for routing tasks in your OpenShift cluster:

export INGRESS_SECRET_NAME=$(oc get ingresscontroller default -n openshift-ingress-operator -o json | jq -r .spec.defaultCertificate.name)

oc get secret ${INGRESS_SECRET_NAME} -n openshift-ingress -o yaml > rhods-internal-primary-cert-bundle-secret.yaml

Clean rhods-internal-primary-cert-bundle-secret.yaml: change the name to rhods-internal-primary-cert-bundle-secret and the type to kubernetes.io/tls. It should look like this:

kind: Secret
apiVersion: v1
metadata:
  name: rhods-internal-primary-cert-bundle-secret
data:
  tls.crt: >-
    LS0tLS1CRUd...
  tls.key: >-
    LS0tLS1CRUd...
type: kubernetes.io/tls
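Alternatively, here is a sketch of how to generate that cleaned secret non-interactively with jq (assuming jq is installed; the output is JSON, which is valid YAML, so the file name still works with the next steps):

oc get secret ${INGRESS_SECRET_NAME} -n openshift-ingress -o json \
  | jq '{kind: "Secret", apiVersion: "v1", metadata: {name: "rhods-internal-primary-cert-bundle-secret"}, type: "kubernetes.io/tls", data: {"tls.crt": .data["tls.crt"], "tls.key": .data["tls.key"]}}' \
  > rhods-internal-primary-cert-bundle-secret.yaml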

Create the secret in istio-system:

oc apply -n istio-system -f rhods-internal-primary-cert-bundle-secret.yaml

Check the secret is in place:

oc get secret rhods-internal-primary-cert-bundle-secret -n istio-system

Finally create the DSC object using the secret you just created.

cat << EOF | oc create -f -
---
kind: DataScienceCluster
apiVersion: datasciencecluster.opendatahub.io/v1
metadata:
  name: default-dsc
  labels:
    app.kubernetes.io/name: datasciencecluster
    app.kubernetes.io/instance: default-dsc
    app.kubernetes.io/part-of: rhods-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: rhods-operator
spec:
  components:
    codeflare:
      managementState: Managed
    dashboard:
      managementState: Managed
    datasciencepipelines:
      managementState: Managed
    kserve:
      managementState: Managed
      serving:
        ingressGateway:
          certificate:
            # Instructions from: https://ai-on-openshift.io/odh-rhoai/single-stack-serving-certificate/
            # You have to copy the secret and create it in the istio-system namespace
            secretName: rhods-internal-primary-cert-bundle-secret
            type: Provided
        managementState: Managed
        name: knative-serving
    modelmeshserving:
      managementState: Managed
    kueue:
      managementState: Managed
    ray:
      managementState: Managed
    workbenches:
      managementState: Managed
EOF

Check if the default-dsc cluster is available:

oc get dsc default-dsc -o jsonpath='{.status.conditions[?(@.type=="Available")].status}' && echo
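If you prefer to block until it is ready, a minimal polling sketch:

until [ "$(oc get dsc default-dsc -o jsonpath='{.status.conditions[?(@.type=="Available")].status}')" = "True" ]; do
  echo "Waiting for default-dsc to become available..."
  sleep 10
done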

Doc Bot Application deployment

Fork this repository and clone it

Why fork? Because this demonstration tries to stay (to a certain extent) close to a real situation where repositories are private or password protected.

Clone your repo and change dir to the clone.

git clone <REPO_URL>
cd doc-bot # adjust if your fork uses a different name

Set some basic environment variables

Create a .env file in bootstrap:

These variables will be used by some scripts later.

GIT_BASE and REPO_URL should point to your git base URL and your forked repo.

GIT_BASE="https://github.com"
REPO_URL="https://github.com/alpha-hack-program/doc-bot.git"
MILVUS_NAMESPACE="milvus"
DATA_SCIENCE_PROJECT_NAMESPACE="doc-bot"
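As a sketch, this is how those values can be loaded and checked from a shell (the bootstrap scripts may load them differently):

set -a          # export every variable sourced below
source .env
set +a
echo "Deploying ${REPO_URL} into ${DATA_SCIENCE_PROJECT_NAMESPACE} (Milvus in ${MILVUS_NAMESPACE})"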

Create a PAT for your forked repo

You need this ONLY if your repo is private.

You will have to provide a username and a PAT to the create-secrets.sh script located in bootstrap.

Create a PAT and save it for later.

Hugging Face prework

This demonstration automatically downloads the LLM model files from Hugging Face (Mistral 7B by default). To do so, you have to sign up, generate a token, and create a secret that will be used by a Job.

In order to simplify the creation of the secret, create a .hf-creds file in bootstrap:

HF_USERNAME=<USERNAME>
HF_TOKEN=hf_...
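For reference, a sketch of the kind of secret hf-creds.sh presumably creates from that file (the secret name and target namespace are assumptions; the actual script may differ):

# Load the Hugging Face credentials and create a generic secret for the download Job
source .hf-creds
oc create secret generic hf-creds \
  -n doc-bot \
  --from-literal=HF_USERNAME="${HF_USERNAME}" \
  --from-literal=HF_TOKEN="${HF_TOKEN}"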

Deploy Milvus and the Doc Bot Application using ArgoCD

Change dir to bootstrap:

cd bootstrap

Run the deploy.sh script from bootstrap folder:

./deploy.sh
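Once deploy.sh finishes, you can list the ArgoCD applications it created (assuming OpenShift GitOps in its default openshift-gitops namespace):

oc get applications.argoproj.io -n openshift-gitops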

Create the git credentials secret

Run create-secrets.sh and be ready to input the username and PAT when asked for:

./create-secrets.sh
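For context, ArgoCD picks up repository credentials from a labeled secret; a sketch of what create-secrets.sh likely produces (the secret name and the GIT_USERNAME/GIT_PAT variables are hypothetical; the real script may differ):

# A repository secret in the ArgoCD convention, labeled so ArgoCD picks it up
oc create secret generic repo-doc-bot -n openshift-gitops \
  --from-literal=url="${REPO_URL}" \
  --from-literal=username="${GIT_USERNAME}" \
  --from-literal=password="${GIT_PAT}"
oc label secret repo-doc-bot -n openshift-gitops argocd.argoproj.io/secret-type=repository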

Create the Hugging Face secret

From bootstrap, run this script; it uses the .hf-creds file you should have created earlier:

./hf-creds.sh