# DistiLlama
## What is DistiLlama?
DistiLlama is a Chrome extension that leverages a locally running LLM to perform tasks such as summarization and chat.
One of the things I was experimenting with is how to use a locally running LLM instance for various tasks, and summarization (tl;dr) was at the top of my list. It was key that all calls to the LLM stay local and all data remain private.
This project uses Ollama as the locally running LLM instance. Ollama is a great project that is easy to set up and use; I highly recommend checking it out.
To generate the summary, I use the following approach:
- Grab the ID of the currently active tab.
- Use Readability to extract the text content from the page. In my experiments, the summaries were clearly much better with Readability because it strips a lot of unnecessary content from the page.
- Use LangChain (LangChain.js) to summarize the text content.
- Display the summary in a popup window.
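As a rough sketch of the summarization step, the snippet below calls Ollama's REST API (`/api/generate`) directly instead of going through LangChain.js, which is what the extension actually uses. The prompt wording and function names are illustrative assumptions, not the extension's code:

```typescript
// Sketch only: the extension uses LangChain.js; here Ollama's REST API is
// called directly for illustration. Assumes Ollama is serving on port 11435.
const OLLAMA_URL = "http://127.0.0.1:11435/api/generate";

// Build the JSON body for Ollama's /api/generate endpoint.
// The prompt wording is an illustrative assumption.
function buildSummaryRequest(pageText: string, model = "llama2:latest") {
  return {
    model,
    prompt: `Write a short tl;dr summary of the following page:\n\n${pageText}`,
    stream: false, // return one JSON object instead of a token stream
  };
}

// Send the extracted page text (e.g. the output of Readability) to Ollama
// and return the generated summary.
async function summarize(pageText: string): Promise<string> {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildSummaryRequest(pageText)),
  });
  const data = await res.json();
  return data.response; // Ollama returns the completion in the `response` field
}
```

In the extension itself, `pageText` would come from Readability running against the active tab's DOM, and the returned summary would be rendered in the popup.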
## How to use DistiLlama?

- Prerequisites:
- Install Ollama; you can also choose to run Ollama in a Docker container.
- Start Ollama using the following command: `OLLAMA_ORIGINS=* OLLAMA_HOST=127.0.0.1:11435 ollama serve`
- In another terminal, run `ollama pull llama2:latest` or `ollama pull mistral:latest`.
- The choice of model depends on your use case. The models supported by Ollama are listed at https://ollama.ai/library.
- Make sure you set `OLLAMA_ORIGINS=*` in Ollama's environment by following the instructions here.
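The `OLLAMA_ORIGINS=*` setting matters because the extension's requests come from a `chrome-extension://` origin, which Ollama would otherwise reject. A quick way to confirm the server is reachable and a model has been pulled is to query the `/api/tags` endpoint; the helper below is hypothetical, written for illustration:

```typescript
// Ollama must be started with OLLAMA_ORIGINS=* (or an explicit origin list),
// or requests from the extension's chrome-extension:// origin will be blocked.
const OLLAMA_BASE = "http://127.0.0.1:11435";

// Hypothetical helper: build a full URL for an Ollama API path.
function ollamaEndpoint(path: string): string {
  return `${OLLAMA_BASE}${path}`;
}

// Connectivity check: list the models the server has pulled.
// GET /api/tags returns { models: [{ name, ... }, ...] }.
async function listModels(): Promise<string[]> {
  const res = await fetch(ollamaEndpoint("/api/tags"));
  const data = await res.json();
  return data.models.map((m: { name: string }) => m.name);
}
```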
- Clone this repo.
- Install pnpm: `npm install -g pnpm`
- Run `pnpm install`
- Run `pnpm dev`
- Open Chrome and navigate to `chrome://extensions/`
- Enable developer mode (if not already enabled)
- Click **Load unpacked** and select the `dist` folder from the base of the cloned project.
- You should see DistiLlama added to your Chrome extensions.
- You may want to pin the extension to your Chrome toolbar for easy access.
## Demo
### Chat with LLM

### Chat with Documents (PDF)

### Chat with Web Page

### Summarization
## TODOs
- [ ] Make the summarization chain configurable
- [x] Make LLM model configurable
- [ ] Save summary in local storage
- [ ] Improve the UI (not an expert in this area but will try to learn)
- [ ] Add TTS support
- [ ] Check out performance with different tuned prompts
- [x] Extend to chat with the page (use embeddings and LLMs for RAG)
- [x] Use transformers.js for local in-browser embeddings and Voy for storage, similar to Building LLM-Powered Web Apps with Client-Side Technology
- [ ] Focus on improving the quality of the summarization and chat
- [ ] Multimodal support
## References and Inspiration