This repository contains scripts for a chatbot that leverages artificial intelligence to interact with users and gather insights from DataUSA.io. The chatbot aims to deliver meaningful insights directly from natural language queries, simplifying the process for users to access and analyze data.
- `api/`
- `data/`: contains JSON files with few-shot examples tailored for the Language Model (LLM). Also contains `tables.json`, which lists the available cubes with their descriptions, column names, and other relevant details.
- `src/utils/`: houses all the main scripts that run the chatbot. Subfolders:
  - `api_data_request/`
  - `data_analysis/`
  - `helpers/`
  - `tables.json`: in the format needed to run the chat.
  - `preprocessors/`
  - `table_selection/`

The `table_selection/` module matches the user's Natural Language Query (NLQ) to the table that should contain the data needed to answer it. To achieve this there are three implemented options (and a fourth in progress):
- Option 1: `get_relevant_tables_from_database()`
- Option 2: `get_relevant_tables_from_LM()`
- Option 3: `request_tables_to_lm_from_db()`
- Option 4: [in progress]
All of the above functions return the name of the most relevant table. The app currently works with Option 3.
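As a rough illustration, the selection step might be invoked as follows. Only the function name comes from this README; the import path and signature are assumptions for the sketch:

```python
# Hypothetical usage sketch of the table-selection step (Option 3).
# The import path and signature are assumptions, not the repo's actual API.
from table_selection import request_tables_to_lm_from_db

nlq = "How much coal was shipped from New York to California in 2020?"

# Ask the LM, grounded on the table metadata stored in the database,
# to pick the most relevant table for this query.
table_name = request_tables_to_lm_from_db(nlq)
print(table_name)
```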
Builds a prompt that includes the table to query (with its description and column names) and the NLQ given by the user.
Requests the LM to return a comment explaining why the chosen variables, measures, and filters can answer the query, along with a JSON containing these parameters with the following structure:
```json
{
  "variables": ["Origin State", "Destination State"],
  "measures": ["Thousands Of Tons"],
  "filters": ["Year = 2020", "SCTG2 = Coal", "Origin State = New York", "Destination State = California"]
}
```
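For illustration, the prompt-building step described above could look roughly like this; the template wording and variable names are assumptions, not the repository's actual code:

```python
# Minimal sketch of the prompt-building step, assuming a simple template.
def build_prompt(table: dict, nlq: str) -> str:
    """Combine the selected table's metadata with the user's NLQ."""
    columns = ", ".join(table["columns"])
    return (
        f"Table: {table['name']}\n"
        f"Description: {table['description']}\n"
        f"Columns: {columns}\n\n"
        f"User question: {nlq}\n\n"
        "Explain why the chosen variables, measures and filters answer the "
        "question, then return them as JSON with the keys 'variables', "
        "'measures' and 'filters'."
    )
```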
Extracts the JSON from the LM's output string. It then instantiates an `ApiBuilder` object and sets the variables, measures, and cuts provided by the LLM as attributes using the class methods. For the cuts, a similarity search is done over the corresponding dimension members of the cube to extract their IDs from the database (with the `cuts_processing()` function).
The API URL (for Mondrian or Tesseract) is built from the processed cuts, drilldowns, and measures obtained in the previous steps by running the `build_url()` method. The data is then retrieved from the API using the `fetch_data()` method and stored in a pandas dataframe.
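A minimal sketch of this flow, using the names mentioned above (`ApiBuilder`, `cuts_processing()`, `build_url()`, `fetch_data()`); the import path, constructor signature, setter names, and JSON-extraction helper are assumptions:

```python
# Hedged sketch of the request-building flow described in this README.
import json
import re

from api_data_request import ApiBuilder, cuts_processing  # assumed import path

def extract_json(lm_output: str) -> dict:
    """Pull the first {...} block out of the LM's free-text answer."""
    match = re.search(r"\{.*\}", lm_output, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in LM output")
    return json.loads(match.group(0))

params = extract_json(lm_output)  # lm_output: the LM answer from the previous step

api = ApiBuilder(table=table_name)       # table_name: result of table selection
api.set_drilldowns(params["variables"])  # assumed setter names
api.set_measures(params["measures"])
api.set_cuts(cuts_processing(params["filters"]))  # resolve member IDs via similarity search

url = api.build_url()  # Mondrian or Tesseract URL
df = api.fetch_data()  # pandas dataframe with the API response
```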
Currently, the cubes available to be queried by the chatbot are:
In order to add a cube, the steps are:

Add the cube to the `tables.json` file. The following fields must be filled:
- `name`
- `api` (Tesseract or Mondrian)
- `description`
- `measures`: each measure has the following structure:

```json
{
  "name": "Millions Of Dollars",
  "description": "value in millions of dollars of a shipment"
}
```
- `dimensions`: add each hierarchy separately, filling the following fields for each:
```json
{
  "name": "Time",
  "description": "Periodicity of the data (monthly or annual).",
  "hierarchies": [
    {
      "name": "Month and Year",
      "description": "'Month and Year' has the format YYYYMM (example March of 2015 is 201503)",
      "levels": [
        "Year",
        "Month and Year"
      ]
    }
  ]
}
```
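Putting these fields together, a complete cube entry in `tables.json` would look roughly like this (assembled from the field descriptions above; the cube name is hypothetical and the exact schema may differ):

```json
{
  "name": "freight_movement",
  "api": "Tesseract",
  "description": "Freight shipments between US states.",
  "measures": [
    {
      "name": "Millions Of Dollars",
      "description": "value in millions of dollars of a shipment"
    }
  ],
  "dimensions": [
    {
      "name": "Time",
      "description": "Periodicity of the data (monthly or annual).",
      "hierarchies": [
        {
          "name": "Month and Year",
          "description": "'Month and Year' has the format YYYYMM (example March of 2015 is 201503)",
          "levels": ["Year", "Month and Year"]
        }
      ]
    }
  ]
}
```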
Add the cube to the database (`datausa_tables.cubes`), filling the required columns; you can use the `cubes_to_db.py` script. Cube embeddings are generated with the `SentenceTransformer('multi-qa-MiniLM-L6-cos-v1')` model (a minimal sketch of this step follows below).

Add the drilldown members and IDs to the database (`datausa_drilldowns.drilldowns`) with the `drilldowns_to_db.py` script. During execution, the script prompts for the API URL from which to fetch the drilldown members and IDs, and then asks for the measure name so it can be removed from the dataframe before the data is loaded into the database.
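As a rough sketch of the embedding step, assuming the `sentence-transformers` package (the descriptions and any database write are illustrative; see `cubes_to_db.py` for the actual logic):

```python
# Minimal sketch: embed cube descriptions with the model named above so they
# can be matched against user queries via vector similarity search.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")

# Hypothetical cube descriptions; in the app these come from tables.json.
descriptions = [
    "Freight shipments between US states.",
    "Annual trade between US states and world countries.",
]
embeddings = model.encode(descriptions)  # one vector per description
print(embeddings.shape)                  # (2, 384) for this model
```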
To add all the cubes of a project automatically, they can be mapped from the tesseract schema JSON to the custom format needed by the app. To do this, follow these steps:
1. Retrieve the tesseract schema JSON (for example, this one) and store it in the `/helpers` folder.
2. Run the following command in the terminal (replacing the file names):

```
python tesseract_schema_mapping.py <input.json> <output.json>
```
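For orientation, the mapping script presumably does something like the following; the tesseract schema key names and the output layout are assumptions, not the actual implementation:

```python
# Hypothetical sketch of tesseract_schema_mapping.py: read a tesseract schema
# JSON and emit entries in the custom tables.json format described above.
import json
import sys

def map_schema(schema: dict) -> list:
    """Convert tesseract cubes into tables.json-style entries (assumed keys)."""
    tables = []
    for cube in schema.get("cubes", []):
        tables.append({
            "name": cube.get("name", ""),
            "api": "Tesseract",
            "description": cube.get("annotations", {}).get("description", ""),
            "measures": [
                {"name": m.get("name", ""), "description": ""}
                for m in cube.get("measures", [])
            ],
            "dimensions": cube.get("dimensions", []),
        })
    return tables

if __name__ == "__main__":
    input_path, output_path = sys.argv[1], sys.argv[2]
    with open(input_path) as fin:
        schema = json.load(fin)
    with open(output_path, "w") as fout:
        json.dump({"tables": map_schema(schema)}, fout, indent=2)
```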
For local development, you can run:

```
uvicorn main:app --reload
```

To build the Docker image:

```
cd api/
docker build -t datausa-chat:<tag> .
```

To run the container, create a `.env` file with the required environment variables, then:

```
cd api/
docker run --env-file=./.env -p 80:80 datausa-chat:<tag>
```