# Experiments exploring the US Snowmass Process documents using LLMs
Getting from a list of documents to a working chat-bot follows the steps below:
1. Download the documents into the cache:

   ```bash
   chatter -c snowmass/snowmass.yaml cache download
   ```

   Use the command `chatter -c snowmass/snowmass.yaml cache set <dir>` to set a custom directory and copy all the previously downloaded files into it, to avoid having to hit the archive for anything but metadata. Better yet, copy down the pickle files.

2. Set your OpenAI key:

   ```bash
   chatter keys set openai <key>
   ```

3. Populate the vector store:

   ```bash
   chatter -c snowmass/snowmass.yaml vector populate
   ```

4. Ask a question:

   ```bash
   chatter -c snowmass/snowmass.yaml query ask "What does the MATHUSLA experiment do?"
   ```
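The cache → populate → ask pipeline above is a retrieval-augmented-generation loop: chunks of the documents are embedded into a vector store, and the chunks most similar to the question are handed to the LLM as context. A minimal sketch of the retrieval half, using a toy bag-of-words similarity in place of the real embedding model (all class and function names here are illustrative, not `chatter`'s actual API):

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call an
    # embedding model (e.g. OpenAI's) and store dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Stands in for the store that `vector populate` fills."""
    def __init__(self):
        self.chunks = []

    def populate(self, chunks):
        # Embed each chunk once at population time.
        self.chunks = [(c, embed(c)) for c in chunks]

    def query(self, question: str, k: int = 2):
        # Rank chunks by similarity to the question; the top-k chunks
        # would be passed to the LLM as context for the answer.
        qv = embed(question)
        ranked = sorted(self.chunks, key=lambda cv: cosine(qv, cv[1]), reverse=True)
        return [c for c, _ in ranked[:k]]

store = VectorStore()
store.populate([
    "The MATHUSLA experiment is a proposed surface detector for long-lived particles.",
    "The Snowmass process surveys the future of US particle physics.",
])
context = store.query("What does the MATHUSLA experiment do?", k=1)
```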
Replace `ask` with `find` to see what chunks of text are used by the LLM to answer your question. You can select a different model with `-q`:

```bash
chatter -c snowmass/snowmass.yaml query ask -q "gpt-4-turbo-preview" "What does the MATHUSLA experiment do?"
```

or set it as the default in the `yaml` config file.
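Setting the default model in the config might look like the fragment below; the key names are assumptions, so check `snowmass/snowmass.yaml` for the actual schema:

```yaml
# hypothetical fragment of snowmass/snowmass.yaml; key names are guesses
query:
  model: gpt-4-turbo-preview
```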
The `questions` sub-command answers a list of questions in bulk; the output `yaml` file contains the answers to the list of questions:

```bash
chatter -c snowmass/snowmass.yaml questions --questions_file snowmass/snowmass-questions.yaml ask "Default config, but updated code" snowmass/snowmass-v1.0-update.yaml
```
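The questions file is presumably a plain list of questions; a hypothetical `snowmass/snowmass-questions.yaml` might look like this (the exact schema is an assumption):

```yaml
# hypothetical contents of snowmass/snowmass-questions.yaml
- What does the MATHUSLA experiment do?
- What are the goals of the Snowmass process?
```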
Compare the answers produced by two configurations with the `compare` sub-command:

```bash
chatter -c snowmass/snowmass.yaml questions --questions_file snowmass/snowmass-questions.yaml compare snowmass/snowmass-v1.0.yaml snowmass/snowmass-v1.0-update.yaml
```

Use `chatter default list` to see what parameters are set by default when invoking the commands.
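Assuming each answer file is a question→answer mapping, the comparison can be sketched as below; `chatter`'s real `compare` output will differ. `yaml.safe_load` (PyYAML) would load the two files — inline dicts stand in for them here:

```python
def compare_answers(old: dict, new: dict) -> dict:
    """Return {question: (old_answer, new_answer)} for every question
    where the two answer files disagree (or one file lacks the question)."""
    return {
        q: (old.get(q), new.get(q))
        for q in sorted(set(old) | set(new))
        if old.get(q) != new.get(q)
    }

# Inline stand-ins for yaml.safe_load(open("snowmass-v1.0.yaml")) etc.
v1 = {"What does MATHUSLA do?": "Searches for long-lived particles."}
v1_update = {"What does MATHUSLA do?": "A surface detector for long-lived particles."}

diff = compare_answers(v1, v1_update)
```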
Included is a config file for snowmass, `snowmass/snowmass.yaml`.
Paper Sources:
To install, run the following from the root directory:

```bash
pip install -e .[test]
```