
ConvoNerd

ConvoNerd is an open-source AI chat assistant that enables natural language conversations with a wide range of data sources, from documents and web links to raw text and even YouTube videos. Powered by state-of-the-art language models and Retrieval-Augmented Generation (RAG), it lets you ask questions, extract insights, and explore your data interactively. Enjoy the convenience of a user-friendly interface and the flexibility to choose your language model, all while running efficiently on standard CPU hardware.

https://github.com/marawanxmamdouh/ConvoNerd/assets/55720454/72bc8897-d2d8-439e-8774-ff2813377949

Table of Contents

  • Key Features
  • Get Started
  • Usage
  • To Do
  • Contributing
  • License
  • Contact

Key Features

CPU Optimization

One of our primary objectives is to make ConvoNerd accessible to a wide range of users. While many language models demand GPU resources, we've optimized this project to run efficiently on the CPU. This means you can use ConvoNerd on most standard hardware configurations, eliminating the need for specialized hardware and making it more accessible and cost-effective.

By implementing RAG from scratch and ensuring CPU compatibility, ConvoNerd offers a robust and accessible solution for engaging in meaningful conversations with your data.
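The retrieve-then-generate pattern behind RAG can be sketched in a few lines of plain Python. This is an illustration of the general idea only, not ConvoNerd's actual implementation: a real pipeline uses embedding models for retrieval and a language model for generation, whereas this sketch substitutes word overlap and a prompt template.

```python
import re

# Illustrative sketch of the RAG pattern (not ConvoNerd's real code):
# retrieve the most relevant chunks, then build a prompt for a language model.

def tokenize(text: str) -> set[str]:
    """Lowercase a string and split it into a set of alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, chunks: list[str], top_k: int = 1) -> list[str]:
    """Rank chunks by word overlap with the question (a stand-in for embeddings)."""
    q_tokens = tokenize(question)
    ranked = sorted(chunks, key=lambda c: len(q_tokens & tokenize(c)), reverse=True)
    return ranked[:top_k]

def build_prompt(question: str, chunks: list[str]) -> str:
    """Combine retrieved context and the question; an LLM would complete this."""
    context = "\n".join(retrieve(question, chunks))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

chunks = [
    "ConvoNerd runs on standard CPU hardware.",
    "The MIT License permits reuse with attribution.",
]
print(build_prompt("What hardware does ConvoNerd run on?", chunks))
```

Swapping `retrieve` for an embedding-based similarity search and completing the prompt with an LLM turns this toy into the full retrieve-then-generate loop.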

Get Started

Prerequisites

Installation

I. Using Poetry

  1. Clone the repository to your local machine:

    git clone https://github.com/marawanxmamdouh/ConvoNerd.git
  2. Change the working directory to the project folder:

    cd ConvoNerd
  3. Install Poetry if you haven't already:

    pip install poetry
  4. Use Poetry to install the project dependencies from the pyproject.toml file:

    poetry install
  5. Activate the virtual environment created by Poetry (this step may vary depending on your shell):

    • On Windows:

      poetry shell
    • On Unix-based systems (Linux/macOS):

      source $(poetry env info --path)/bin/activate

II. Using PIP

III. Using Docker

IV. Using Colab with GPU runtime

Usage

Now that you have set up ConvoNerd, you can run the application by executing:

streamlit run app.py

This will start a local Streamlit server and launch the ConvoNerd application in your default web browser; you can also open http://localhost:8501 to access it directly.

Pipeline Overview

Data Source
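Whatever the source (document, web page, transcript), a typical RAG pipeline first splits the extracted text into overlapping chunks so retrieval can work at the passage level. ConvoNerd's exact chunking strategy is not shown here; the word-based splitter below, with assumed `chunk_size` and `overlap` values, is only a sketch of the idea.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into word-based chunks that overlap by `overlap` words."""
    words = text.split()
    step = chunk_size - overlap
    return [
        " ".join(words[start:start + chunk_size])
        for start in range(0, max(len(words) - overlap, 1), step)
    ]
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either side.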

Question Answering

To Do

Contributing

We welcome contributions from the community. If you have ideas for improvements, bug fixes, or suggestions, please consider contributing to the project.

License

ConvoNerd is licensed under the MIT License.

Contact

If you have questions or feedback, feel free to open an issue on this repository.

We hope ConvoNerd empowers you to have meaningful conversations with your data. Enjoy exploring and enhancing your data-driven insights!