:wrench: Test and experiment with prompts, LLMs, and vector databases. :hammer:
Welcome to `prompttools`, created by [Hegel AI](https://hegel-ai.com/)! This repo offers a set of open-source, self-hostable tools for experimenting with, testing, and evaluating LLMs, vector databases, and prompts. The core idea is to enable developers to evaluate using familiar interfaces like _code_, _notebooks_, and a local _playground_.

In just a few lines of code, you can test your prompts and parameters across different models (whether you are using OpenAI, Anthropic, or LLaMA models). You can even evaluate the retrieval accuracy of vector databases.

```python
from prompttools.experiment import OpenAIChatExperiment

messages = [
    [{"role": "user", "content": "Tell me a joke."}],
    [{"role": "user", "content": "Is 17077 a prime number?"}],
]
models = ["gpt-3.5-turbo", "gpt-4"]
temperatures = [0.0]

openai_experiment = OpenAIChatExperiment(models, messages, temperature=temperatures)
openai_experiment.run()
openai_experiment.visualize()
```

![image](img/demo.gif)

To stay in touch with us about issues and future updates, join the [Discord](https://discord.gg/7KeRPNHGdJ).

## Quickstart

To install `prompttools`, you can use `pip`:

```
pip install prompttools
```

You can run a simple example of `prompttools` locally with the following:

```
git clone https://github.com/hegelai/prompttools.git
cd prompttools && jupyter notebook examples/notebooks/OpenAIChatExperiment.ipynb
```

You can also run the notebook in [Google Colab](https://colab.research.google.com/drive/1YVcpBew8EqbhXFN8P5NaFrOIqc1FKWeS?usp=sharing).

## Playground

If you want to interact with `prompttools` using our playground interface, you can launch it with the following commands.

First, install the dependencies:

```
pip install notebook  # If jupyter notebook has not been installed
pip install prompttools
```

Then, clone the git repo and launch the streamlit app:

```
git clone https://github.com/hegelai/prompttools.git
cd prompttools && streamlit run prompttools/playground/playground.py
```

You can also access a hosted version of the playground on the [Streamlit Community Cloud](https://prompttools.streamlit.app/).

> Note: The hosted version does not support LlamaCpp.

## Documentation

Our [documentation website](https://prompttools.readthedocs.io/en/latest/index.html) contains the full API reference and more description of individual components. Check it out!

## Supported Integrations

Here is a list of APIs that we support with our experiments:

LLMs
- OpenAI (Completion, ChatCompletion, Fine-tuned models) - **Supported**
- LLaMA.Cpp (LLaMA 1, LLaMA 2) - **Supported**
- HuggingFace (Hub API, Inference Endpoints) - **Supported**
- Anthropic - **Supported**
- Mistral AI - **Supported**
- Google Gemini - **Supported**
- Google PaLM (legacy) - **Supported**
- Google Vertex AI - **Supported**
- Azure OpenAI Service - **Supported**
- Replicate - **Supported**
- Ollama - _In Progress_

Vector Databases and Data Utility
- Chroma - **Supported**
- Weaviate - **Supported**
- Qdrant - **Supported**
- LanceDB - **Supported**
- Milvus - _Exploratory_
- Pinecone - **Supported**
- Epsilla - _In Progress_

Frameworks
- LangChain - **Supported**
- MindsDB - **Supported**
- LlamaIndex - _Exploratory_

Computer Vision
- Stable Diffusion - **Supported**
- Replicate's hosted Stable Diffusion - **Supported**

If you have any API that you'd like to see supported soon, please open an issue or a PR to add it. Feel free to discuss it in our Discord channel as well.
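All of the experiment classes above follow the same run-and-inspect workflow as the OpenAI example in the introduction. As a quick illustration of persisting results (the export methods are described in the FAQ below), here is a minimal sketch reusing `OpenAIChatExperiment`; the file-path arguments passed to `to_csv` and `to_json` are assumptions, so check the API reference for the exact signatures.

```python
from prompttools.experiment import OpenAIChatExperiment

# Same setup as the example in the introduction.
messages = [
    [{"role": "user", "content": "Tell me a joke."}],
    [{"role": "user", "content": "Is 17077 a prime number?"}],
]
experiment = OpenAIChatExperiment(["gpt-3.5-turbo", "gpt-4"], messages, temperature=[0.0])
experiment.run()

# Export the results for later analysis. The method names come from the FAQ below;
# the path arguments are assumed here, so consult the API reference for exact signatures.
experiment.to_csv("openai_experiment_results.csv")
experiment.to_json("openai_experiment_results.json")
```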
## Frequently Asked Questions (FAQs)

1. Will this library forward my LLM calls to a server before sending them to OpenAI, Anthropic, etc.?
   - No, the source code is executed on your machine. Any call to LLM APIs is sent directly from your machine without any forwarding.

2. Does `prompttools` store my API keys or LLM inputs and outputs on a server?
   - No, all of that data stays on your local machine. We do not collect any PII (personally identifiable information).

3. How do I persist my results?
   - To persist the results of your tests and experiments, you can export your `Experiment` with the methods `to_csv`, `to_json`, `to_lora_json`, or `to_mongo_db`. We are building more persistence features, and we will be happy to further discuss your use cases, pain points, and what export options would be useful for you.

## Sentry