Private-AI is an innovative AI project designed for asking questions about your documents using powerful Large Language Models (LLMs). The unique feature? It works offline, ensuring 100% privacy with no data leaving your environment.
High-level API: Abstracts the complexity of a Retrieval Augmented Generation (RAG) pipeline. Handles document ingestion, chat, and completions.
Low-level API: For advanced users to implement custom pipelines. Includes features like embeddings generation and contextual chunks retrieval.
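As an illustration, a client for the high-level chat API might look like the sketch below. It is a minimal example, not the project's official client: the base URL, port 8001, the `/v1/chat/completions` path, and the `use_context` flag follow the upstream PrivateGPT API and are assumptions here; adjust them to your deployment.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8001"  # assumed default port; adjust to your deployment


def build_chat_request(question: str) -> dict:
    """Build an OpenAI-style chat payload for the RAG pipeline.

    `use_context` is an assumed flag telling the server to answer
    from previously ingested documents rather than the bare model.
    """
    return {
        "messages": [{"role": "user", "content": question}],
        "use_context": True,
    }


def ask(question: str) -> str:
    """POST the question to the (assumed) /v1/chat/completions endpoint."""
    payload = json.dumps(build_chat_request(question)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: first choice, assistant message content.
    return body["choices"][0]["message"]["content"]
```

Because everything runs locally, the question and the retrieved document chunks never leave your machine.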
Privacy is the key motivator! Private-AI addresses concerns in data-sensitive domains like healthcare and legal, ensuring your data stays under your control.
Private-AI Installation Guide
Clone the Private-AI repository:
git clone https://github.com/AryanVBW/Private-Ai
cd Private-Ai
Install Python 3.11 (or 3.12)
Using apt (Debian-based Linux such as Kali, Ubuntu, etc.):
sudo apt-get install python3.11
Using pyenv:
pyenv install 3.11
pyenv local 3.11
Install Poetry for dependency management.
pip3 install poetry
Install make (OSX: brew install make, Windows: choco install make).
Install dependencies:
poetry install --with ui
Install extra dependencies for local execution:
poetry install --with local
Use the setup script to download embedding and LLM models:
poetry run python scripts/setup
Start Private-AI:
make run
or poetry run python -m private_gpt
To customize the model configuration, see:
private_gpt/components/llm/llm_component.py
settings.yaml
OSX: Build llama.cpp with Metal support:
CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
Windows NVIDIA GPU: Install VS2022, CUDA toolkit, and run:
$env:CMAKE_ARGS='-DLLAMA_CUBLAS=on'; poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
Linux NVIDIA GPU and Windows-WSL: Install CUDA toolkit and run:
CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
Note: If you run into issues, retry the installation in verbose mode by adding -vvv.
Troubleshooting C++ Compiler:
FastAPI-Based API: Follows the OpenAI API standard, making it easy to integrate.
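Because the API follows the OpenAI standard, responses can be parsed exactly like OpenAI chat-completion responses, so existing OpenAI-compatible tooling should work with little change. A minimal sketch (the response shape shown is the standard OpenAI chat-completion shape, assumed to apply here):

```python
def extract_answer(response: dict) -> str:
    """Pull the assistant's reply out of an OpenAI-style chat completion response."""
    return response["choices"][0]["message"]["content"]


# A hypothetical response body, shaped like an OpenAI chat completion:
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "The contract expires in 2025."}}
    ]
}
print(extract_answer(sample))  # -> The contract expires in 2025.
```

In practice this means you can also point an OpenAI-compatible client library at your local Private-AI server instead of writing raw HTTP calls.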
LlamaIndex Integration: Leverages LlamaIndex for the RAG pipeline, providing flexibility and extensibility.
Present and Future: Evolving into a gateway for generative AI models and primitives. Stay tuned for exciting new features!
Contributions are welcome! Check the Project Board for ideas. Ensure code quality with format and typing checks (run make check).
Supported by Qdrant, Fern, and LlamaIndex. Influenced by projects like LangChain, GPT4All, LlamaCpp, Chroma, and SentenceTransformers.
Thank you for contributing to the future of private and powerful AI with Private-AI! License: Apache-2.0
This is a modified version of PrivateGPT. All rights and licenses belong to the PrivateGPT team.
© 2023 PrivateGPT Developers. All rights reserved.