UPDATE [2024-07-17]: I have created another repo with similar functionality but built entirely on AWS services. It requires more experience with e.g. AWS VPC and Amazon Aurora's query editor, but it should be more robust and scalable than the solution in this repo. Do take a look: gabrielkoo/self-learning-rag-it-support-slackbot.
The bot uses ChatGPT to answer based on your own FAQ database, while allowing users to submit new articles into it with a Slash Command, so that it can answer with new knowledge immediately, as it updates the model on the fly in the cloud!
Read my dev.to article below to know more about why and how I created this solution!
I have also included a pricing estimate with a cost breakdown for this solution (about US$0.009 per question as of April 2023 pricing).
A sample dataset is included in the `./sample_data` directory; it's built from Wikipedia pages on the Disney+ series "The Mandalorian".
So it does know who Grogu is:
But doesn't know who I am:
So I can submit a new article to the bot:
And now the bot knows how to answer my question:
Since ChatGPT's API became available in March 2023, there has been great hype around building integrations with it. Two of these integrations are especially appealing to me:
Combining embedding search with ChatGPT to build a FAQ engine, a form of Knowledge Base Question Answering (KBQA)
Connecting the AI with a programmable messaging platform like Slack
But so far, I have not seen any open-source project that:
The third point is very important to me: in this post-OpenAI era, you should no longer need an expensive data scientist to build a FAQ engine for you. Instead, you should let your users submit their own knowledge into the dataset, so that the AI can learn from their collective intelligence.
So I decided to build one myself.
The infrastructure is built with AWS SAM, and it consists of the following components:
Yeah, that's it! With AWS SAM, things are simply so simple, and all of these are defined in `template.yml`.
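As a rough sketch of what such a template boils down to (the logical IDs, runtime, and paths below are illustrative assumptions, not the repo's exact `template.yml`), you mainly need a Lambda function with a function URL plus an S3 bucket for the data files:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  # Lambda function handling Slack requests, exposed via a function URL
  BotFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: function/
      Handler: lambda_function.lambda_handler
      Runtime: python3.9
      FunctionUrlConfig:
        AuthType: NONE

  # S3 bucket holding the FAQ CSV and the embeddings file
  DataFileBucket:
    Type: AWS::S3::Bucket

Outputs:
  FunctionUrlEndpoint:
    # SAM creates an implicit <FunctionLogicalId>Url resource
    Value: !GetAtt BotFunctionUrl.FunctionUrl
```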
Sequence diagram for the Q&A flow:
```mermaid
sequenceDiagram
    participant User
    participant Slack
    box gray AWS SAM
    participant Lambda
    participant S3Bucket
    end
    participant OpenAI
    User->>Slack: Asks a question
    Slack->>Lambda: POST request with question
    Lambda->>S3Bucket: Fetch FAQ datafile and text embeddings
    S3Bucket->>Lambda: Returns data files
    Lambda->>OpenAI: 1) Create a text embedding of the question
    OpenAI->>Lambda: Returns text embedding of the question
    Lambda->>Lambda: 2) Match embeddings and find relevant FAQ articles
    Lambda->>OpenAI: 3) Feed question and relevant articles to ChatGPT
    OpenAI->>Lambda: Returns response
    Lambda->>Slack: Returns answer based on FAQ dataset
    Slack->>User: Replies with the answer
```
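The embedding-matching step in the middle of this flow is conceptually simple: rank the stored article embeddings by cosine similarity against the question's embedding and keep the top matches. A minimal sketch in plain Python (the function names and toy 3-dimensional vectors are illustrative, not the repo's actual code):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_articles(question_embedding, article_embeddings, k=2):
    """Return the ids of the k articles whose embeddings best match the question."""
    scored = sorted(
        article_embeddings.items(),
        key=lambda item: cosine_similarity(question_embedding, item[1]),
        reverse=True,
    )
    return [article_id for article_id, _ in scored[:k]]

# Toy example: in reality the embeddings have ~1,500 dimensions
articles = {
    "grogu": [0.9, 0.1, 0.0],
    "mandalorian": [0.7, 0.3, 0.1],
    "pricing": [0.0, 0.1, 0.9],
}
print(top_articles([1.0, 0.0, 0.0], articles, k=2))  # → ['grogu', 'mandalorian']
```

The selected articles are then pasted into the ChatGPT prompt as context, so the model answers from your FAQ dataset rather than from its own general knowledge.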
Sequence diagram for the new training article submission flow:
```mermaid
sequenceDiagram
    participant User
    participant Slack
    box gray AWS SAM
    participant Lambda
    participant S3Bucket
    end
    participant OpenAI
    User->>Slack: /submit_train_article command
    Slack->>Lambda: POST request with open modal request
    Lambda->>Slack: Returns modal configuration
    Slack->>User: Shows modal with form fields
    User->>Slack: Fills in the form fields of the new article
    Slack->>Lambda: POST request with the article
    Lambda->>S3Bucket: Fetch FAQ datafile and text embeddings
    S3Bucket->>Lambda: Returns data files
    Lambda->>OpenAI: Compute text embedding for new article
    OpenAI->>Lambda: Returns text embedding for new article
    Lambda->>S3Bucket: Update FAQ CSV file and embeddings file
    S3Bucket->>Lambda: Confirm update
    Lambda->>Slack: Returns success message
    Slack->>User: Replies with success message
```
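The "Update FAQ CSV file" step in this flow amounts to appending one `(title, heading, content)` row to the articles CSV (with newlines in the content escaped as literal `\n`, matching the data-file convention described below). A hypothetical sketch of that append, not the repo's actual code:

```python
import csv
import io

def append_article(articles_csv: str, title: str, heading: str, content: str) -> str:
    """Append a (title, heading, content) row to the FAQ CSV text and return it."""
    rows = list(csv.reader(io.StringIO(articles_csv)))
    # Escape raw newlines as literal \n so each article stays on one CSV row
    rows.append([title, heading, content.replace("\n", "\\n")])
    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerows(rows)
    return out.getvalue()

existing = "title,heading,content\n"
updated = append_article(existing, "About me", "Intro", "I am Gabriel.\nI build bots.")
print(updated)
```

The updated CSV and the recomputed embeddings file are then written back to S3, which is why the bot can answer with the new knowledge immediately.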
Create a `.env` file at the root directory, according to the template `.env.example`, and fill in the `OPENAI_API_KEY` environment variable.

The following scopes are required (configure them in the "OAuth & Permissions" page > "Scopes" > "Bot Token Scopes"):
- `chat:write`
- `commands`
- `im:history`
- `im:write`
The following event subscriptions are required (but you can't set these until the AWS SAM infrastructure has been deployed):
- `message.channels`
- `message.groups`
- `message.im`
- `message.mpim`
Enable "Allow users to send commands and messages from the messages tab" in the "App Home" settings.
Lastly, make sure to install the app to your workspace.
Prepare the following environment variables in the `.env` file:

- `SLACK_BOT_TOKEN`
- `SLACK_SIGNING_SECRET`
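For reference, a filled-in `.env` might look like this (the values below are placeholders, not real credentials):

```
OPENAI_API_KEY=sk-...
SLACK_BOT_TOKEN=xoxb-...
SLACK_SIGNING_SECRET=...
```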
Set up your shell for AWS credentials. There are various ways of doing so; you may refer to this documentation. For example, you may run `aws sso login --profile name-of-your-profile` if you have configured your AWS credentials with AWS IAM Identity Center (formerly AWS SSO).
Run the `./deploy.sh` script; it will provision everything for you.

After the deployment, you still need to manually upload the initial data files.
Prepare a file at `./data/articles.csv` with three columns: `(title, heading, content)`.
```sh
#!/bin/bash
cd function
export LOCAL_DATA_PATH=./
python3 -c 'from embedding import *; prepare_document_embeddings()'
```
Be sure to escape special characters, e.g. newlines as `\n`, in the column fields.
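For illustration, the CSV might look like the following (these rows are made-up examples in the spirit of the sample Mandalorian dataset, showing a newline escaped as `\n`):

```csv
title,heading,content
The Mandalorian,Overview,The Mandalorian is a Disney+ series.\nIt is set in the Star Wars universe.
Grogu,Summary,Grogu is a character in The Mandalorian.
```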
Then, a file should be created at `./data/document_embeddings.csv`.
Upload both files to the S3 bucket created by the CloudFormation template, at the following paths:

- `s3://$DATAFILE_S3_BUCKET/data/articles.csv`
- `s3://$DATAFILE_S3_BUCKET/data/document_embeddings.csv`
If you want to use the command line, you can run the following command (note that `aws s3 cp` takes a single source directory with `--recursive` plus `--exclude`/`--include` filters, rather than a shell glob):

```sh
aws s3 cp ./function/data/ "s3://$DATAFILE_S3_BUCKET/data/" --recursive --exclude "*" --include "*.csv"
```
That's it!
If you want to be a bit lazy and start with my sample data, just run the following command instead:

```sh
aws s3 cp ./sample_data/ "s3://$DATAFILE_S3_BUCKET/data/" --recursive --exclude "*" --include "*.csv"
```
In the "Outputs" tab of the deployed CloudFormation stack (e.g. https://us-east-1.console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks), copy the URL value of `FunctionUrlEndpoint`.

Go back to the config page of your custom Slack App, paste it at "Event Subscriptions" > "Enable Events" > "Request URL", and verify it.
Optional: the `/submit_train_article` command

In addition, you can also create a `/submit_train_article` Slack command so that your users can submit extra articles into the dataset themselves. The handlers are defined in the following methods of `lambda_function.py`: `handle_submit_train_article_command` and `handle_submit_train_article_submission`.
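Conceptually, the command handler responds to the slash command by sending a modal view (with one input per article column) to Slack's `views.open` Web API, using the `trigger_id` from the command payload. A hypothetical sketch of building that view, not the repo's actual code:

```python
import json

def build_modal_view():
    """Build a Slack modal view with one input block per FAQ article column."""
    return {
        "type": "modal",
        "callback_id": "submit_train_article",
        "title": {"type": "plain_text", "text": "New FAQ article"},
        "submit": {"type": "plain_text", "text": "Submit"},
        "blocks": [
            {
                "type": "input",
                "block_id": field,
                "label": {"type": "plain_text", "text": field.title()},
                "element": {"type": "plain_text_input", "action_id": field},
            }
            for field in ("title", "heading", "content")
        ],
    }

# This JSON would be POSTed to https://slack.com/api/views.open together
# with the trigger_id from the slash-command payload.
print(json.dumps(build_modal_view(), indent=2))
```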
To set it up, go to "Features" > "Slash Commands" > "Create New Command" in your Slack App's config page, set the command to `/submit_train_article`, and use the `FunctionUrlEndpoint` value as the request URL.
This project is based on the following projects: