Open-source AI copilot that lets you chat with your observability data and code 🧙‍♂️
https://www.vespper.com/?utm_source=github
Apache License 2.0

Vespper - open-source AI on-call developer

Vespper-logo


Docs · Demo · Report Bug · Feature Request · Blog · Slack


Apache 2.0 License main-workflow code style: prettier slack-logo

Note: If you want to use Vespper for your team or your organisation, please reach out to us. This open-source project is suited for individual use; advanced investigation features will live under vespper-ee.

Overview 💫

Vespper is an AI-powered on-call engineer. It can automatically jump into incidents and alerts with you, and provide useful, contextual insights and root-cause analysis (RCA) in real time.

Why ❓

Most engineers don't enjoy on-call shifts. They demand swift responses and quick problem-solving, and it takes time to reach the root cause of a problem. That's why we developed Vespper: we believe generative AI can help on-call developers solve issues faster.


Demo 🎥

Check out our demo video to see Vespper in action.

Getting started 🚀

To run Vespper, clone the repo and start the app with Docker Compose.

Prerequisites 📜

Ensure you have the following installed:

  • Git

  • Docker & Docker Compose (the steps below use docker compose)
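
As a quick sanity check, a short shell loop can confirm that the tools this guide relies on (git and docker, per the commands below) are on your PATH:

```shell
# Verify the tools used in this guide are installed
# (adjust the list if your setup differs):
for cmd in git docker; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```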

Quick installation 🏎️

You can find the installation video here.

  1. Clone the repository:

    git clone git@github.com:vespper/vespper.git && cd vespper
  2. Configure LiteLLM Proxy Server:

    We use LiteLLM Proxy Server to interact with 100+ LLMs through a unified, OpenAI-compatible interface.

    1. Copy the example files:

      cp config/litellm/.env.example config/litellm/.env
      cp config/litellm/config.example.yaml config/litellm/config.yaml
    2. Define your OpenAI key and place it inside config/litellm/.env as OPENAI_API_KEY. You can get your API key here. Rest assured, you won't be charged unless you use the API. For more details on pricing, check here.
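
After the copy, config/litellm/.env only needs that one variable; a minimal sketch (the key below is a placeholder, not a real one):

```shell
# config/litellm/.env -- placeholder value; paste your real OpenAI key here
OPENAI_API_KEY=sk-your-key-here
```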

  3. Copy the .env.example file:

    cp .env.example .env
  4. Open the .env file in your favorite editor (vim, vscode, emacs, etc):

    vim .env # or emacs or vscode or nano
  5. Update these variables:

    • SLACK_BOT_TOKEN, SLACK_APP_TOKEN and SLACK_SIGNING_SECRET - These variables are needed in order to talk to Vespper on Slack. Please follow this guide to create a new Slack app in your organization.

    • (Optional) SMTP_CONNECTION_URL - This variable is needed to invite new members to your Vespper organization via email and let them use the bot. It's not mandatory if you just want to test Vespper and play with it. If you do want to send invites to your team members, you can use a service like SendGrid or Mailgun. The URL should follow this pattern: smtp://username:password@domain:port.
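
The expected shape can be sanity-checked before the value goes into your .env; a minimal sketch with made-up credentials:

```shell
# Hypothetical example value -- substitute your real SendGrid/Mailgun
# credentials. The URL must match smtp://username:password@domain:port.
SMTP_CONNECTION_URL="smtp://apikey:supersecret@smtp.sendgrid.net:587"

# Check the shape (username, password, host, numeric port):
echo "$SMTP_CONNECTION_URL" | grep -Eq '^smtp://[^:/@]+:[^@]+@[^:/@]+:[0-9]+$' \
  && echo "SMTP URL looks valid" \
  || echo "SMTP URL is malformed"
```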

  6. Launch the project:

    docker compose up -d

That's it. You should be able to visit Vespper's dashboard at http://localhost:5173. Simply create a user (with the same e-mail as your Slack user) and start configuring your organization. If something doesn't work for you, please check out our troubleshooting guide or reach out to us via our support channels.

The next steps are to configure your organization a bit more (connect incident management tools, build a knowledge base, etc). Head over to the connect & configure section in our docs for more information 💫

Using DockerHub images

If you want, you can pull our Docker images from DockerHub instead of cloning the repo & building from scratch.

To do that, follow these steps:

  1. Download configuration files:

    curl https://raw.githubusercontent.com/vespper/vespper/main/tools/scripts/download_env_files.sh | sh
  2. Follow steps 2 and 5 above to configure LiteLLM Proxy and your .env file respectively. Namely, you'd need to configure your OpenAI key at config/litellm/.env and configure your Slack credentials in the root .env.

  3. Spin up the environment using docker compose:

    curl https://raw.githubusercontent.com/vespper/vespper/main/tools/scripts/start.sh | sh

That's it 💫 You should be able to visit Vespper's dashboard at http://localhost:5173.

Updating Vespper 🧙‍♂️

  1. Pull the latest changes:

    git pull
  2. Rebuild images:

    docker compose up --build -d

Deployment ☁️

Visit our example guides in order to deploy Vespper to your cloud.

Visualize Knowledge Base 🗺️

We use ChromaDB as our vector DB, and VectorAdmin to inspect the ingested documents. To use VectorAdmin, simply run this command:

docker compose up vector-admin -d

This command starts VectorAdmin on port 3001. Head over to http://localhost:3001 and configure your local ChromaDB. Note: since VectorAdmin runs inside a Docker container, make sure to enter http://host.docker.internal:8000 in the "host" field instead of http://localhost:8000, because inside the container "localhost" refers to the container itself, not the host.

Moreover, in "API Header & Key", set the header to X-Chroma-Token and the key to the value of CHROMA_SERVER_AUTHN_CREDENTIALS from your .env.
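
To grab that key without opening the file, you can grep it out of your .env. The sketch below runs against a throwaway demo file (demo-token is made up); point it at the real .env in your repo root:

```shell
# Demo .env for illustration only; use the real .env at your repo root.
printf 'CHROMA_SERVER_AUTHN_CREDENTIALS=demo-token\n' > /tmp/demo.env

# Extract the value to paste into VectorAdmin's "API Header & Key" form:
TOKEN=$(grep '^CHROMA_SERVER_AUTHN_CREDENTIALS=' /tmp/demo.env | cut -d= -f2-)
echo "API Header: X-Chroma-Token"
echo "API Key:    $TOKEN"
```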

To learn how to use VectorAdmin, visit the docs.

Support and feedback 👷‍♀️

In order of preference, the best ways to communicate with us are:

  • Our community Slack

  • GitHub issues (Report Bug / Feature Request)

Contributing to Vespper ⛑️

If you're interested in contributing to Vespper, checkout our CONTRIBUTING.md file 💫 🧙‍♂️

Troubleshooting ⚒️

If you encounter any problems or errors with Vespper, check out our troubleshooting guide. We try to update it regularly and fix urgent problems there as soon as possible.

Moreover, feel free to reach out to us at our support channels.

Telemetry 🔢

By default, Vespper automatically sends basic usage statistics from self-hosted instances to our server via PostHog. This helps us understand how Vespper is used and improve it.

Rest assured, the data collected is not shared with third parties and does not include any sensitive information. We aim to be transparent, and you can review the specific data we collect here.

If you prefer not to participate, you can easily opt out by setting TELEMETRY_ENABLED=false in your .env.
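
One way to flip the flag idempotently, shown here on a scratch file (point ENV_FILE at your real .env instead):

```shell
# Opt out of telemetry: replace the flag if present, append it otherwise.
ENV_FILE=/tmp/vespper-demo.env
printf 'TELEMETRY_ENABLED=true\n' > "$ENV_FILE"   # stand-in for your .env
if grep -q '^TELEMETRY_ENABLED=' "$ENV_FILE"; then
  sed -i.bak 's/^TELEMETRY_ENABLED=.*/TELEMETRY_ENABLED=false/' "$ENV_FILE"
else
  echo 'TELEMETRY_ENABLED=false' >> "$ENV_FILE"
fi
grep '^TELEMETRY_ENABLED=' "$ENV_FILE"
```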

License 📃

This project is licensed under the Apache 2.0 license. See the LICENSE file for details.

Learn more 🔍

Check out the official website at https://vespper.com for more information.

Contributors ✨

Built with ❤️ by Dudu & Topaz

Dudu: Github, Twitter

Topaz: Github, Twitter