
LLM-Zero-to-Hundred

LOGO

This repository showcases various applications of LLM chatbots and provides comprehensive insights into established methodologies for training and fine-tuning Language Models.

List of projects:

List of tutorials:

General structure of the projects:

Project-folder
  ├── README.md           <- The top-level README for developers using this project.
  ├── HELPER.md           <- Contains extra information that might be useful to know for executing the project.
  ├── .env                <- dotenv file for local configuration.
  ├── .here               <- Marker for project root.
  ├── configs             <- Holds yml files for project configs
  ├── data                <- Contains the sample data for the project.
  ├── src                 <- Contains the source code(s) for executing the project.
  |   └── utils           <- Contains all the necessary modules of the project.
  └── images              <- Contains all the images used in the user interface and the README file. 

NOTE: This is the general structure of the projects; however, there might be small changes due to the specific needs of each project.
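
The `.here` marker and the `configs` folder hint at how these projects typically resolve paths and settings. Below is a minimal sketch (not code from this repository) assuming the projects use `pyprojroot`, `python-dotenv`, and a hypothetical `configs/app_config.yml` file:

```
# Minimal sketch: resolve the project root via the .here marker, load .env, read a YAML config.
# The file name app_config.yml is hypothetical.
import os

import yaml
from dotenv import load_dotenv      # reads the .env file
from pyprojroot import here         # finds the project root via the .here marker

load_dotenv()                        # e.g. makes OPENAI_API_KEY available in os.environ

config_path = here("configs") / "app_config.yml"   # hypothetical file name
with open(config_path) as f:
    app_config = yaml.safe_load(f)

print(app_config)
print("API key loaded:", os.getenv("OPENAI_API_KEY") is not None)
```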

Project description:

Advanced Multimodal Chatbot:

Features:

- ChatGPT-like interaction: The chatbot can act as a normal AI assistant.
- RAG (Retrieval Augmented Generation) capabilities: The chatbot can perform RAG in 3 different ways:
  1. With preprocessed documents
  2. With documents that the user uploads while using the chatbot
  3. With any website that the user requests
- Image generation: The chatbot utilizes a Stable Diffusion model to generate images.
- Image understanding: The chatbot understands the content of images and can answer the user's questions about them using the LLaVA model.
- DuckDuckGo integration: Access the DuckDuckGo search engine to provide answers based on search results when needed.
- Summarization: Summarize website content or documents upon user request.
- Text and voice interaction: Interact with the chatbot through both text and voice inputs.
- Memory: The GPT models in the chatbot also have access to memory (the user's previous queries during the current session).

NOTE: This chatbot was built on top of the RAG-GPT and WebRAGQuery projects.

**YouTube video:** To be added

Open-Source-RAG-GEMMA:

In this project, I demonstrate how an open-source LLM can be deployed on-prem. For that, I took the RAG-GPT project and converted it into a fully open-source RAG chatbot. The chatbot is designed using the Google Gemma 7B LLM and BAAI/bge-large-en as the embedding model. **YouTube video:** [Link](https://youtu.be/6dyz2M_UWLw?si=phnTb9GRPx8RXFYp)
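
As a rough illustration of the two open-source components named above, here is a minimal sketch (not the project's code); the `google/gemma-7b-it` model id is an assumption, and running it requires accepting the Gemma license on Hugging Face plus sufficient GPU memory:

```
# Minimal sketch of an open-source RAG pair: BGE embeddings for retrieval, Gemma for generation.
from sentence_transformers import SentenceTransformer
from transformers import pipeline

# Embedding model used for retrieval.
embedder = SentenceTransformer("BAAI/bge-large-en")
doc_vectors = embedder.encode(["First document chunk.", "Second document chunk."])

# Generator: the instruction-tuned Gemma 7B checkpoint (model id assumed).
generator = pipeline("text-generation", model="google/gemma-7b-it", device_map="auto")
answer = generator("Answer using the retrieved context: ...", max_new_tokens=128)
print(answer[0]["generated_text"])
```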

RAGMaster-LlamaIndex-vs-Langchain:

In this project, I compare the performance of `5` well-known RAG techniques proposed by Langchain and LlamaIndex. The test is performed on `40` questions over `5` different documents. Moreover, the project provides `2` separate RAG chatbots that offer `8` RAG techniques from these two frameworks. **YouTube video:** [Link](https://www.youtube.com/watch?v=nze2ZFj7FCk&lc=UgxmsrbI9fLWmkgvD3N4AaABAg)

Fine-tuning LLMs:

In this project, we use a fictional company called Cubetriangle and design the pipeline to process its raw data, fine-tune `3` large language models (LLMs) on it, and design a chatbot using the best model. **YouTube video:** [Link](https://www.youtube.com/watch?v=_g4o21A6AY8&t=1154s) **Libraries:** [huggingface](https://huggingface.co/) - [OpenAI](https://platform.openai.com/docs/models/overview) - [chainlit](https://docs.chainlit.io/get-started/overview)
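
For orientation, a minimal fine-tuning sketch with the Hugging Face `Trainer` is shown below. It is not the project's actual pipeline: the `gpt2` base model and the `cubetriangle_qa.jsonl` file (with a `text` field) are stand-ins for illustration only:

```
# Minimal causal-LM fine-tuning sketch (illustrative stand-ins, not the project's pipeline).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"                                   # small stand-in for the LLMs used in the video
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical processed dataset with a "text" field.
dataset = load_dataset("json", data_files="cubetriangle_qa.jsonl")["train"]
tokenized = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cubetriangle-ft",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```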

WebGPT:

WebGPT is a powerful tool enabling users to pose questions that require internet searches. Leveraging GPT models:

* It identifies and executes the most relevant Python functions in response to user queries.
* A second GPT model generates responses by combining the user query with the content retrieved from the web search engine.
* The user-friendly interface is built using Streamlit.
* The web search supports diverse search types such as text, news, PDFs, images, videos, maps, and instant answers.
* Overcoming knowledge-cutoff limitations, the chatbot delivers answers based on the latest internet content.

**YouTube video:** [Link](https://www.youtube.com/watch?v=55bztmEzAYU&t=739s)

**Libraries:** [OpenAI](https://platform.openai.com/docs/models/overview) (it uses the GPT models' function-calling capability) - [duckduckgo-search](https://pypi.org/project/duckduckgo-search/) - [streamlit](https://docs.streamlit.io/)
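
The two-step flow described above (one GPT call that picks a search function, a second that answers from the results) can be sketched roughly as follows. This is not the repository's code: it assumes the `openai>=1.0` client and the `duckduckgo_search` package, the `gpt-4o-mini` model name is a placeholder, and it assumes the model actually chooses to call the search tool:

```
# Minimal sketch of the WebGPT-style two-step flow: decide to search, then answer from results.
import json

from duckduckgo_search import DDGS
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web with DuckDuckGo.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

question = "What happened in AI news this week?"

# Step 1: the model decides to call the search function and supplies the query as JSON.
first = client.chat.completions.create(
    model="gpt-4o-mini",                      # placeholder model name
    messages=[{"role": "user", "content": question}],
    tools=tools,
)
call = first.choices[0].message.tool_calls[0]  # assumes the model chose to search
query = json.loads(call.function.arguments)["query"]

# Step 2: run the search locally and let a second call answer from the results.
results = DDGS().text(query, max_results=5)
second = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Answer the question using these search results.\n"
                   f"Question: {question}\nResults: {json.dumps(results)}",
    }],
)
print(second.choices[0].message.content)
```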

RAG-GPT:

RAG-GPT is a chatbot that enables you to chat with your documents (PDF and Doc files). The chatbot offers versatile usage through three distinct methods:

1. **Offline Documents:** Engage with documents that you've pre-processed and vectorized. These documents can be seamlessly integrated into your chat sessions.
2. **Real-time Uploads:** Easily upload documents during your chat sessions, allowing the chatbot to process and respond to the content on the fly.
3. **Summarization Requests:** Request the chatbot to provide a comprehensive summary of an entire PDF or document in a single interaction, streamlining information retrieval.

**Libraries:** [OpenAI](https://platform.openai.com/docs/models/overview) - [Langchain](https://python.langchain.com/docs/get_started/quickstart) - [ChromaDB](https://www.trychroma.com/) - [Gradio](https://www.gradio.app/guides/quickstart) **YouTube video:** [Link](https://www.youtube.com/watch?v=1FERFfut4Uw&t=3s)
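
The core load → chunk → embed → retrieve loop behind a chatbot like this can be sketched as follows. It is not the repository's code: it uses the classic `langchain<0.1` import paths (newer versions move these into `langchain_community`/`langchain_openai`), and `sample.pdf` plus the `data/vectordb` directory are placeholders:

```
# Minimal RAG sketch: load a PDF, chunk it, store embeddings in Chroma, retrieve relevant chunks.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# 1. Load and chunk the document (placeholder path).
pages = PyPDFLoader("sample.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150).split_documents(pages)

# 2. Embed the chunks and persist them in a local Chroma collection.
vectordb = Chroma.from_documents(chunks, OpenAIEmbeddings(), persist_directory="data/vectordb")

# 3. Retrieve the most relevant chunks for a question (the LLM answer step would follow).
docs = vectordb.similarity_search("What is the main conclusion of the document?", k=3)
print(docs[0].page_content)
```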

WebRAGQuery: (Combining WebGPT and RAG-GPT)

WebRAGQuery is a chatbot built on the foundations of WebGPT and RAG-GPT. It gives users the ability to utilize the LLM's pretrained knowledge, the DuckDuckGo web search engine, and chat with websites. Key features:

* **Intelligent Decision-Making:** The GPT model intelligently decides whether to answer user queries based on its internal knowledge base or to execute relevant Python functions and access the internet.
* **Web-Integrated Responses:** The second GPT model seamlessly combines user queries with content retrieved from web searches, providing rich and context-aware responses. WebRAGQuery supports a variety of searches, including text, news, PDFs, images, videos, maps, and instant responses.
* **Website-Specific Queries:** When users inquire about a specific website, the model dynamically calls a function to load the site's content, vectorize it, and create a vectordb from it, giving the user the ability to ask questions about the content of the website. Users can query the content of the vectordb by starting their questions with `**` and exit the RAG conversation by omitting `**` from the query; `**` triggers the third GPT model for RAG Q&A (see the small routing sketch after this list).
* **Website Summarization:** On demand, this chatbot is able to go through a website and provide the user with a summary of its content.
* **Memory:** WebRAGQuery boasts a memory feature that allows it to retain information about user interactions. This enables a more coherent and context-aware conversation by keeping track of previous questions and answers.
* **Chainlit Interface:** The user-friendly interface is built using Chainlit.
* **Overcoming Knowledge-Cutoff Limitations:** This chatbot transcends knowledge-cutoff limitations, providing answers based on the latest internet content and even allowing users to ask questions about webpage content.

**YouTube video:** [Link](https://www.youtube.com/watch?v=KoWjy5PZdX0&t=266s) **Libraries:** [OpenAI](https://platform.openai.com/docs/models/overview) - [Langchain](https://python.langchain.com/docs/get_started/quickstart) - [ChromaDB](https://www.trychroma.com/) - [chainlit](https://docs.chainlit.io/get-started/overview)
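
The `**` prefix convention described above amounts to a simple routing rule; a tiny illustrative sketch (not the project's code, with hypothetical route names):

```
# Tiny sketch of the "**" routing convention (route names are hypothetical).
def route(user_message: str) -> str:
    """Decide which path a message takes in a WebRAGQuery-style chatbot."""
    if user_message.startswith("**"):
        return "rag_over_current_website"   # third GPT model: Q&A over the site's vectordb
    return "function_calling_or_llm"        # first GPT model: answer directly or search the web

print(route("** What does the pricing page say?"))   # -> rag_over_current_website
print(route("Latest news about LLMs"))               # -> function_calling_or_llm
```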

## Tutorial description:

LLM Function Calling Tutorial:

This project showcases the capacity of GPT models to produce executable function calls in JSON format. It illustrates this capability through a practical example that pairs Python functions with a GPT model. Libraries: [OpenAI](https://platform.openai.com/docs/models/overview) **YouTube video:** [Link](https://www.youtube.com/watch?v=P3bNGBTDiKM&t=3s)
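
In essence, the model is given a JSON description of a Python function and returns the arguments it wants to call it with as JSON. A minimal sketch of this pattern (not the tutorial's code; it assumes the `openai>=1.0` client, and the `get_weather` function and `gpt-4o-mini` model name are illustrative):

```
# Minimal function-calling sketch: GPT returns JSON arguments, Python executes the function.
import json

from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    return f"It is sunny in {city}."        # stand-in for a real implementation

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",                     # placeholder model name
    messages=[{"role": "user", "content": "How is the weather in Toronto?"}],
    tools=tools,
)

call = response.choices[0].message.tool_calls[0]   # assumes the model chose to call the function
print(call.function.arguments)                     # JSON string, e.g. {"city": "Toronto"}
print(get_weather(**json.loads(call.function.arguments)))
```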

Visualizing Text Vectorization:

This project provides a comprehensive visualization of text vectorization and demonstrates the power of vector search. It further explores vectorization with both OpenAI's `text-embedding-ada-002` and the open-source `BAAI/bge-large-zh-v1.5` model. Libraries: [OpenAI](https://platform.openai.com/docs/models/overview) - [HuggingFace](https://huggingface.co/BAAI/bge-large-zh-v1.5) **YouTube video:** [Link](https://www.youtube.com/watch?v=sxBr_afsvb0&t=454s)
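
The basic idea — embed sentences, then project the high-dimensional vectors down to 2D so they can be plotted — can be sketched as follows (not the tutorial's notebook; it uses `text-embedding-ada-002` via the `openai>=1.0` client plus scikit-learn's PCA, and the sample sentences are made up):

```
# Minimal sketch: embed a few sentences and project them to 2D for a scatter plot.
import matplotlib.pyplot as plt
import numpy as np
from openai import OpenAI
from sklearn.decomposition import PCA

sentences = ["The cat sat on the mat.", "A kitten rests on a rug.", "Stock prices fell today."]

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.embeddings.create(model="text-embedding-ada-002", input=sentences)
vectors = np.array([item.embedding for item in resp.data])   # shape (3, 1536)

points = PCA(n_components=2).fit_transform(vectors)          # 1536-dim -> 2-dim for display
plt.scatter(points[:, 0], points[:, 1])
for (x, y), s in zip(points, sentences):
    plt.annotate(s, (x, y))
plt.show()
```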

**Slides:** [Link](https://github.com/Farzad-R/LLM-Zero-to-Hundred/blob/master/presentation/slides.pdf)

## Running each project

To run the projects, you will need to install the required libraries. Follow the steps below to get started:

1. Clone the repository and navigate to the project directory:
   ```
   git clone https://github.com/Farzad-R/LLM-Zero-to-Hundred.git
   cd LLM-Zero-to-Hundred
   ```
2. Create a new virtual environment using a tool like virtualenv or conda, and activate the environment:
   ```
   conda create --name projectenv python=3.11
   conda activate projectenv
   ```
3. Change directory to your desired project and install the required libraries using the following commands, e.g.:
   ```
   cd WebRAGQuery
   pip install -r requirements.txt
   ```