josh-sea / ragpal


RAG AI App Readme

Welcome to the RAG AI app repository! This app integrates with various AI models to provide insightful responses based on the provided context, making it an excellent tool for exploring AI's capabilities. The app also features integration with Gmail for fetching and processing emails, alongside functionality for handling different types of documents.

This RAG pipeline uses LlamaIndex tooling, Qdrant as the vector database, HuggingFace for embeddings (bge-small), and your choice of LLM for the open-context response. You can swap any of these out, but I found this combination relatively simple to implement, well documented, and open source. If you run your Qdrant DB locally in Docker or self-hosted, use HuggingFace embeddings, and run your LLM locally in LM Studio or equivalent, the entire RAG pipeline stays within your local environment. I have not implemented any logging systems or external tracking. As this is built out I would like to optionally enable logging for internal purposes, but as of now (3/15/24) this can be run entirely locally!
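To make the retrieve-then-generate flow concrete, here is a toy, stdlib-only sketch of the same shape: embed documents, rank them against a query, and assemble an open-context prompt for the LLM. The bag-of-words "embedding" and cosine ranking stand in for what bge-small and Qdrant actually do in this app; none of the names below come from the repo's code.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (the real app uses bge-small via HuggingFace)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank stored documents by similarity to the query (Qdrant does this at scale)."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the open-context prompt handed to your choice of LLM."""
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"

docs = [
    "Qdrant is a vector database for similarity search.",
    "Streamlit turns Python scripts into web apps.",
]
prompt = build_prompt("what is qdrant", retrieve("what is qdrant", docs))
```

The real pipeline replaces `embed` with a HuggingFace embedding model and `retrieve` with a Qdrant similarity query, but the data flow is the same.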

Contribution Guidelines

I welcome contributions from the community! If you wish to contribute:

  1. Please create an issue for any bugs, feature requests, or other enhancements. This helps me keep track of what needs attention.
  2. Feel free to fork the repository and create pull requests. Make sure to create your branch from main. For bugs, features, refactors, or anything new, use a descriptive branch name, e.g., git checkout -b feature/add-new-integration.
  3. All pull requests require approval before being merged. This ensures code quality and consistency.
  4. In the future, I plan to introduce a develop branch as the main staging area. Changes will be merged there first before making their way to main.

Getting Started

To get started with the RAG AI app:

  1. Clone the repository to your local machine.
  2. Copy the example environment file and configure it with your API keys and other settings: $ cp example.env .env
  3. Update the .env file with your Qdrant API key and URL. Alternatively, update app.py to use local memory for development: qdrant_client.QdrantClient(":memory:")
  4. Add API keys for OpenAI, Anthropic, and Mistral in the .env file as needed.
  5. Install required dependencies: $ pip install -r requirements.txt
  6. Serve the app locally using Streamlit: $ streamlit run app.py
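Steps 2–4 above boil down to populating a handful of key/value pairs that the app reads at startup. As a minimal stdlib sketch of that configuration step (the key names `QDRANT_URL` and `QDRANT_API_KEY` are illustrative assumptions — check `example.env` for the actual keys):

```python
from pathlib import Path

def load_env(path: str = ".env") -> dict[str, str]:
    """Parse simple KEY=VALUE lines from a .env file, skipping comments and blanks."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

# Demo file; in practice you'd edit the .env copied from example.env.
Path(".env.demo").write_text("QDRANT_URL=http://localhost:6333\nQDRANT_API_KEY=secret\n")
settings = load_env(".env.demo")

# With no URL configured, app.py can fall back to the in-memory client:
#   qdrant_client.QdrantClient(settings.get("QDRANT_URL") or ":memory:")
```

In practice most projects use a library like `python-dotenv` for this; the sketch just shows what the step accomplishes.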

Email Integration

For email functionality, you'll need to:
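The concrete Gmail setup steps are not listed here yet. Independent of how the messages are fetched, turning a raw RFC 822 email into plain text suitable for indexing can be sketched with the stdlib `email` module (the sample message below is fabricated for illustration; this is not the app's actual parsing code):

```python
from email import message_from_string

def email_to_text(raw: str) -> str:
    """Extract the subject plus the plain-text body from a raw RFC 822 message."""
    msg = message_from_string(raw)
    if msg.is_multipart():
        # Keep only text/plain parts, decoding each with its declared charset.
        parts = [p.get_payload(decode=True).decode(p.get_content_charset() or "utf-8")
                 for p in msg.walk() if p.get_content_type() == "text/plain"]
        body = "\n".join(parts)
    else:
        body = msg.get_payload()
    return f"Subject: {msg['Subject']}\n\n{body}"

raw = "Subject: Quarterly report\nFrom: a@example.com\n\nNumbers look good."
text = email_to_text(raw)
```

Once flattened to text like this, an email can be indexed the same way as any other document.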

Adding Documents

When adding documents:
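Document ingestion in a RAG pipeline usually means splitting each document into overlapping chunks before embedding. As a stdlib sketch of that step (the 512-character size and 64-character overlap are illustrative defaults, not the app's actual settings — LlamaIndex provides its own node parsers for this):

```python
def chunk_text(text: str, size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into fixed-size chunks with overlap, so content cut at a
    boundary still appears whole in at least one chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

chunks = chunk_text("a" * 1000, size=512, overlap=64)
```

Overlap trades a little storage for retrieval quality: a sentence straddling a chunk boundary still lands intact in the next chunk.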

Notes

Hope to See Your Contributions!

This project is open to contributions, and I'm excited to see how it grows with the community's input. Whether it's bug fixes, new features, or improvements, your contributions are welcome!