gabrielkoo / chatgpt-faq-slack-bot

A user-trainable Knowledge Base / FAQ Slack Bot on AWS SAM based on ChatGPT and Embeddings.
https://dev.to/aws-builders/enhance-your-slack-workspace-with-a-user-trainable-chatgpt-integrated-faq-bot-2pj3
MIT License

chatgpt-faq-slack-bot

UPDATE [2024-07-17]: I have created another repo with similar functionality, but built with AWS services only. Though it requires more experience with e.g. AWS VPC and Amazon Aurora's query editor, it should be more robust and scalable than the solution in this repo. Do take a look! gabrielkoo/self-learning-rag-it-support-slackbot.


The bot uses ChatGPT to answer questions based on your own FAQ database, while allowing users to submit new articles into it with a Slash Command, so that it can answer with the new knowledge immediately, as it updates the embedding dataset on the fly in the cloud!

Read my dev.to article below to learn more about why and how I created this solution!

I have also included a pricing estimate with a cost breakdown for this solution (it comes to US$0.009 per question as of April 2023 pricing).

https://dev.to/aws-builders/enhance-your-slack-workspace-with-a-user-trainable-chatgpt-integrated-faq-bot-2pj3

Example

A sample dataset is included in the ./sample_data directory, and it's built based on Wikipedia pages on the Disney+ series "The Mandalorian".

So it does know who Grogu is:

[Screenshot: Who is Grogu]

But it doesn't know who I am:

[Screenshot: Who is Gabriel Koo]

So I can submit a new article to the bot:

[Screenshot: Submit a new article]

And now the bot knows how to answer my question:

[Screenshot: Who is Gabriel Koo]

Background

Since ChatGPT's API became available in March 2023, there has been great hype around building integrations on top of it. Two of these integrations are especially appealing to me:

  1. Combining embedding search with ChatGPT to build a FAQ engine. This is a form of Knowledge Base Question Answering (KBQA), combining:

    • natural language understanding (via a text embedding of the question)
    • information retrieval (via text embeddings of the articles, matched against the one for the question)
    • knowledge representation (via ChatGPT, given the selected information)
  2. Connecting the AI with a programmable messaging platform like Slack
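The retrieval half of this pipeline can be sketched in plain Python. This is a minimal illustration, not the repo's actual code: in the real bot the embedding vectors come from OpenAI's embeddings API, whereas here they are just toy vectors.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def retrieve(question_embedding, article_embeddings, top_k=2):
    """Rank articles by similarity to the question; return the best matches."""
    ranked = sorted(
        article_embeddings.items(),
        key=lambda item: cosine_similarity(question_embedding, item[1]),
        reverse=True,
    )
    return [title for title, _ in ranked[:top_k]]


def build_prompt(question, articles):
    """Feed only the retrieved articles to the chat model as context."""
    context = "\n\n".join(articles)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

The prompt returned by `build_prompt` would then be sent to the ChatGPT completion endpoint, so the model answers from the retrieved articles rather than from its own training data.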

But so far, I have not seen any open-source project that:

  1. combines the two together,
  2. provides an easy hosting method like AWS SAM, and lastly
  3. provides functionality to let users submit extra knowledge into the embedding dataset.

The 3rd point is very important to me: in this post-OpenAI era, you should no longer need an expensive data scientist to build a FAQ engine for you. Instead, you should let your users submit their own knowledge into the dataset, so that the AI can learn from their collective intelligence.

So I decided to build one myself.

Architecture and Infrastructure

The infrastructure is built with AWS SAM, and it consists of the following components:

Yeah, that's it! With AWS SAM, things really are that simple, and all of these are defined in template.yml.
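For reference, a minimal template.yml for this kind of setup might look like the sketch below. The resource names here are illustrative assumptions, not necessarily the ones used in the repo's actual template.yml:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  # Lambda function that handles Slack requests via a public Function URL
  SlackBotFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: lambda_function.lambda_handler
      Runtime: python3.9
      Timeout: 30
      FunctionUrlConfig:
        AuthType: NONE
      Policies:
        - S3CrudPolicy:
            BucketName: !Ref DataFileBucket

  # Bucket that stores the FAQ CSV file and the precomputed embeddings
  DataFileBucket:
    Type: AWS::S3::Bucket

Outputs:
  FunctionUrlEndpoint:
    Value: !GetAtt SlackBotFunctionUrl.FunctionUrl
```

`FunctionUrlConfig` makes SAM create an implicit `SlackBotFunctionUrl` resource, whose `FunctionUrl` attribute is what you later paste into Slack's "Request URL" field.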

Architecture Diagram

Sequence diagram for the Q&A flow:

```mermaid
sequenceDiagram
    participant User
    participant Slack
    box gray AWS SAM
        participant Lambda
        participant S3Bucket
    end
    participant OpenAI

    User->>Slack: Asks a question
    Slack->>Lambda: POST request with question
    Lambda->>S3Bucket: Fetch FAQ datafile and text embeddings
    S3Bucket->>Lambda: Returns data files
    Lambda->>OpenAI: 1) Create a text embedding of the question
    OpenAI->>Lambda: Returns text embedding of the question
    Lambda->>Lambda: 2) Match embeddings and find relevant FAQ articles
    Lambda->>OpenAI: 3) Feed question and relevant articles to ChatGPT
    OpenAI->>Lambda: Returns response
    Lambda->>Slack: Returns answer based on FAQ dataset
    Slack->>User: Replies with the answer
```

Sequence diagram for the new training article submission flow:

```mermaid
sequenceDiagram
    participant User
    participant Slack
    box gray AWS SAM
        participant Lambda
        participant S3Bucket
    end
    participant OpenAI

    User->>Slack: /submit_train_article command
    Slack->>Lambda: POST request with open modal request
    Lambda->>Slack: Returns modal configuration
    Slack->>User: Shows modal with form fields
    User->>Slack: Fills in the form fields of the new article
    Slack->>Lambda: POST request with the article
    Lambda->>S3Bucket: Fetch FAQ datafile and text embeddings
    S3Bucket->>Lambda: Returns data files
    Lambda->>OpenAI: Compute text embedding for new article
    OpenAI->>Lambda: Returns text embedding for new article
    Lambda->>S3Bucket: Update FAQ CSV file and embeddings file
    S3Bucket->>Lambda: Confirm update
    Lambda->>Slack: Returns success message
    Slack->>User: Replies with success message
```
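The update step in this flow, appending a new row to the FAQ CSV and computing its embedding, can be sketched as a pure function. This is an illustration, not the repo's code: `embed_fn` stands in for a call to OpenAI's embeddings API, and in the real Lambda the updated files would then be uploaded back to the S3 bucket with boto3.

```python
import csv
import io


def append_article(faq_csv: str, embeddings: dict, title: str, body: str, embed_fn):
    """Append a new article to the FAQ CSV text and compute its embedding.

    Returns the updated CSV text and an updated {title: vector} mapping,
    leaving the inputs unmodified.
    """
    rows = list(csv.reader(io.StringIO(faq_csv)))
    rows.append([title, body])

    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerows(rows)

    new_embeddings = dict(embeddings)
    new_embeddings[title] = embed_fn(f"{title}\n{body}")
    return out.getvalue(), new_embeddings
```

Because the embedding is computed at submission time, the very next question can already be matched against the new article, which is what makes the bot "learn" immediately.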

Pre-requisites

Create a Slack App

Create a Slack App

The following scopes are required (Configure in the "OAuth & Permissions" page > "Scopes" > "Bot Token Scopes"):

The following event subscriptions are required (but you can't set these until the deployment of the AWS SAM infrastructure is done):

Enable "Allow users to send commands and messages from the messages tab" in the "App Home" settings.

Lastly, make sure to install the app to your workspace.

Prepare the following environment variables in the .env file:

Build and deploy

  1. Set up your shell for AWS credentials. There are various ways of doing so; you may refer to this documentation.

    For example, you may run aws sso login --profile name-of-your-profile if you have configured your AWS credentials with AWS Identity Center (originally named AWS SSO) before.

  2. Run the ./deploy.sh script; it will provision everything for you.

After the deployment, you still need to manually upload the initial datafiles.

Prepare the datafiles

That's it!

If you want to be a bit lazy and start with my sample data, just run the following command instead (note that `aws s3 cp --recursive` takes a source directory, not a glob, so the CSV files are selected with `--exclude`/`--include` filters):

```sh
aws s3 cp ./sample_data/ s3://$DATAFILE_S3_BUCKET/data/ \
  --recursive --exclude '*' --include '*.csv'
```

Making Everything End-to-End

  1. Go to the Outputs tab of the deployed CloudFormation stack (e.g. https://us-east-1.console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks), copy the URL value of FunctionUrlEndpoint.
  2. Go back to the config page of your custom Slack App, and paste it at

    "Event Subscriptions" > "Enable Events" > "Request URL" and verify it.

  3. Once done, you can go to Slack and try messaging your bot with a question that is supposed to be answerable with the help of your own FAQ dataset!
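When you first paste the Request URL, Slack sends a one-time `url_verification` event and only enables event delivery once the `challenge` value is echoed back. A minimal handler for that handshake might look like the sketch below; the actual structure in lambda_function.py may differ:

```python
import json


def lambda_handler(event, context):
    """Handle Slack's one-time Request URL verification handshake.

    Slack POSTs {"type": "url_verification", "challenge": "..."} and
    expects the challenge value echoed back in the response body.
    """
    body = json.loads(event.get("body") or "{}")
    if body.get("type") == "url_verification":
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"challenge": body["challenge"]}),
        }
    # ...otherwise fall through to normal event handling (questions, etc.)
    return {"statusCode": 200, "body": ""}
```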

Setting up the /submit_train_article command

In addition, you can create a /submit_train_article Slack command so that your users can submit extra articles into the dataset themselves. The handlers are defined in the following methods of lambda_function.py: handle_submit_train_article_command and handle_submit_train_article_submission.
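The slash-command handler responds by opening a modal via Slack's views.open API. A sketch of building such a view follows; the `callback_id`, `block_id`, and `action_id` values here are illustrative assumptions, not necessarily the ones used in lambda_function.py:

```python
def build_submit_article_modal() -> dict:
    """Build a Slack modal view with title/body inputs for a new FAQ article."""
    return {
        "type": "modal",
        "callback_id": "submit_train_article",  # illustrative callback_id
        "title": {"type": "plain_text", "text": "Submit an article"},
        "submit": {"type": "plain_text", "text": "Submit"},
        "blocks": [
            {
                "type": "input",
                "block_id": "article_title",
                "label": {"type": "plain_text", "text": "Title"},
                "element": {"type": "plain_text_input", "action_id": "value"},
            },
            {
                "type": "input",
                "block_id": "article_body",
                "label": {"type": "plain_text", "text": "Body"},
                "element": {
                    "type": "plain_text_input",
                    "action_id": "value",
                    "multiline": True,
                },
            },
        ],
    }
```

On submission, Slack POSTs a `view_submission` payload back to the same endpoint, which is where handle_submit_train_article_submission picks up the form values.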

Setup

TODO

Thanks

This project is based on the following projects: