wafflecomposite / langchain-ask-pdf-local

An AI app that allows you to upload a PDF and ask questions about it. It uses StableVicuna-13B and runs locally.

Ask Your PDF, locally

[UI screenshot of Ask Your PDF: answering a question about the 7 MB PDF of paper 2303.12712]

This is an attempt to recreate Alejandro AO's langchain-ask-pdf (also check out his tutorial on YouTube) using open-source models running locally.

It uses all-MiniLM-L6-v2 instead of OpenAI Embeddings, and StableVicuna-13B instead of OpenAI models.

It runs on the CPU and is impractically slow; it was created more as an experiment, but I am still fairly happy with the results.
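Under the hood, apps like this split the PDF text into overlapping chunks, embed each chunk, and retrieve the chunks most similar to the question as context for the LLM. Here is a dependency-free sketch of that idea; it uses a toy bag-of-words "embedding" in place of all-MiniLM-L6-v2, and all names and parameters are illustrative, not taken from this repository:

```python
# Sketch of the chunk-and-retrieve pipeline behind PDF Q&A apps.
# The real project uses LangChain with all-MiniLM-L6-v2 embeddings and
# StableVicuna-13B via llama.cpp; this stand-in stays dependency-free.
from collections import Counter
import math

def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character chunks (like a text splitter)."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
        start += chunk_size - overlap
    return chunks

def bow_vector(text):
    """Toy 'embedding': a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, chunks, k=1):
    """Return the k chunks most similar to the question."""
    q = bow_vector(question)
    scored = sorted(chunks, key=lambda c: cosine(q, bow_vector(c)), reverse=True)
    return scored[:k]

doc = ("Streamlit provides the user interface. " * 3 +
       "The language model answers questions about the PDF. " * 3)
chunks = chunk_text(doc, chunk_size=80, overlap=20)
best = retrieve("which model answers questions?", chunks, k=1)[0]
print(best)
```

In the real app, the retrieved chunks (found via vector similarity over the real embeddings) are pasted into the LLM's prompt as context, and StableVicuna generates the answer from them.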

Requirements

GPU is not used and is not required.

You can squeeze it into 16 GB of RAM, but I recommend 24 GB or more.

Installation
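
The installation steps are missing from this copy of the README. A plausible setup for a Python/Streamlit project like this one might look as follows; the `requirements.txt` filename is an assumption I have not verified against the repository:

```shell
# Clone the repository (URL inferred from the repo name above)
git clone https://github.com/wafflecomposite/langchain-ask-pdf-local.git
cd langchain-ask-pdf-local

# Create an isolated environment and install dependencies
# (assumes the repo ships a requirements.txt)
python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
pip install -r requirements.txt
```

You would also need the StableVicuna-13B weights in a llama.cpp-compatible format; check the repository itself for the exact model file it expects.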

Usage

Run `streamlit run .\app.py`

This should launch the UI in your default browser. Select a PDF file, send the question, wait patiently.