Reginald

Repository for REG Hack Week 23.

The Reginald project consists of:

├── azure
│   └── scripts to setup Reginald infrastructure on Azure
├── data
│   └── directory to store llama-index data indexes and other public Turing data
├── docker
│   └── scripts for building Docker images for both the Reginald app and the Slack-bot only app
├── notebooks
│   ├── data processing notebooks
│   └── development notebooks for llama-index Reginald models
└── reginald
    ├── models: scripts for setting up query and chat engines
    ├── slack_bot: scripts for setting up the Slack bot
    └── scripts for setting up the end-to-end Slack bot with a query engine

Slack bot

This is a simple Slack bot, written in Python, that listens for direct messages and @mentions in any channel it is in and responds with a message and an emoji. The bot uses web sockets for communication. How the bot responds to messages is determined by the response engine that is set up - see the models README for details of the available models. The main models we use are the llama-index models, such as llama-index-llama-cpp, llama-index-hf and llama-index-ollama.

Prerequisites

This project uses Poetry for dependency management. Make sure you have Poetry installed on your machine.
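If Poetry is not yet installed, one common way to install it (see the Poetry documentation for alternatives) is via the official installer:

curl -sSL https://install.python-poetry.org | python3 -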

Install the project dependencies:

poetry install --all-extras

Without installing extras, you will have the packages required to run the full Reginald model on your machine. If you only want to run a subset of the available packages, you can install selected extras instead, as sketched below.
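For example, Poetry can install named extras only; the extra name below is an assumption, so check pyproject.toml for the extras this project actually defines:

poetry install --extras api_bot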

Install the pre-commit hooks:

pre-commit install
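Optionally, run the hooks once against all files to check the setup:

pre-commit run --all-files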

Obtaining Slack tokens

To set up the Slack bot, you must set Slack bot environment variables. To obtain them from Slack, follow the steps below:

  1. Set up the bot in Slack: Socket Mode Client.

  2. To connect to Slack, the bot requires an app token and a bot token. Put these into a .env file:

    echo "SLACK_BOT_TOKEN='your-bot-user-oauth-access-token'" >> .env
    echo "SLACK_APP_TOKEN='your-app-level-token'" >> .env
  3. Activate the virtual environment:

    poetry shell
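With the virtual environment active, the reginald CLI used throughout this README should be available:

reginald --help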

GitHub access tokens

We are currently using llama-hub GitHub readers for creating our data indexes and pulling from relevant repos for issues and files. As a prerequisite, you will need to generate a "classic" personal access token with the repo and read:org scopes - see here for instructions for creating and obtaining your personal access token.

Once you have done this, simply add the token to your .env file:

echo "GITHUB_TOKEN='your-github-personal-access-token'" >> .env

Running Reginald locally (without Slack)

It is possible to run the Reginald model locally and interact with it completely through the command line via the reginald chat CLI - note that this is a wrapper around the reginald.run.run_chat_interact function. To see CLI arguments:

reginald chat --help

For example, using the llama-index-llama-cpp model running Llama-2-7b-Chat (quantised to 4-bit), you can run:

reginald chat \
  --model llama-index-llama-cpp \
  --model-name https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_K_M.gguf \
  --mode chat \
  --data-dir data/ \
  --which-index handbook \
  --n-gpu-layers 2

For an example using the llama-index-ollama model running Llama 3, you can run:

reginald chat \
  --model llama-index-ollama \
  --model-name llama3 \
  --mode chat \
  --data-dir data/ \
  --which-index handbook

where you have set the OLLAMA_API_ENDPOINT environment variable to the endpoint of your Ollama API server.
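For example, assuming Ollama is serving locally on its default port (11434) and that Reginald expects the base URL:

echo "OLLAMA_API_ENDPOINT='http://localhost:11434'" >> .env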

For examples of running each of our different models, see the models README.

The reginald run_all CLI takes in several arguments; run reginald run_all --help for the full list. Some arguments apply only to the llama-index models, some only to the self-hosted llama-index-llama-cpp and llama-index-hf models, and some only to the llama-index-llama-cpp or llama-index-hf model individually.

Note: specifying CLI arguments will override any environment variables set.
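For example, if you had selected a model via an environment variable (REGINALD_MODEL is an assumed name here; see the environment variables README for the actual variables), the CLI flag takes precedence:

export REGINALD_MODEL=llama-index-ollama
reginald chat --model llama-index-llama-cpp  # the --model flag overrides REGINALD_MODEL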

Running the Reginald bot locally with Slack

In order to run the full Reginald app locally (i.e. setting up the full response engine along with the Slack bot), you can follow the steps below:

  1. Set environment variables (for more details on environment variables, see the environment variables README):

    source .env
  2. Run the bot using reginald run_all - note that this is a wrapper around the reginald.run.run_full_pipeline function. To see CLI arguments:

    reginald run_all --help

For example, to set up a llama-index-llama-cpp chat engine model running Llama-2-7b-Chat (quantised to 4-bit), you can run:

reginald run_all \
  --model llama-index-llama-cpp \
  --model-name https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_K_M.gguf \
  --mode chat \
  --data-dir data/ \
  --which-index handbook \
  --n-gpu-layers 2

The bot will now listen for @mentions in the channels it's added to and respond with a simple message.

Running the response engine and Slack bot separately

There are some cases where you'd want to run the response engine and Slack bot separately. For instance, with the llama-index-llama-cpp and llama-index-hf models, you are hosting your own LLM which you might want to host on a machine with GPUs. The Slack bot can then be run on a separate (more cost-efficient) machine. Doing this allows you to change the model or machine running the model without having to change the Slack bot.

To do this, you can follow the steps below:

Running the bot in Docker

For full details of Docker setup, see the Docker README.
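As a minimal sketch only (the image name and Dockerfile path below are assumptions; use the actual build scripts in the docker directory):

docker build -t reginald -f docker/Dockerfile .
docker run --env-file .env reginald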

Running the bot in Azure

  1. Go to the azure directory

  2. Ensure that you have installed Pulumi and the Azure CLI

  3. Set up the Pulumi backend and deploy:

./setup.sh && AZURE_KEYVAULT_AUTH_VIA_CLI=true pulumi up -y