Vignana-Jyothi / kp-gen-ai

MIT License

[Experiment] Build Q-A System trained on LangChain. #18

Open · head-iie-vnr opened this issue 4 days ago

head-iie-vnr commented 4 days ago

Clone the GitHub repository: https://github.com/alejandro-ao/ask-multiple-pdfs

Add your OpenAI API key (OPENAI_API_KEY)
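LangChain's OpenAI integrations read the key from the environment, so a .env file in the project root is enough. A sketch (the values below are placeholders; the Hugging Face token is only needed if you later switch to HuggingFaceHub models):

OPENAI_API_KEY=sk-...
HUGGINGFACEHUB_API_TOKEN=hf_...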

Run the system
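A minimal way to do this, assuming the entry point is app.py as in the linked repository:

pip install -r requirements.txt
streamlit run app.py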

Open a browser at the Local URL http://localhost:8501 (or the Network URL http://192.168.0.106:8501).

Observed output: (screenshot of the running app)

head-iie-vnr commented 4 days ago

This code implements a Streamlit application that allows users to upload multiple PDF documents and ask questions about the content of those documents. Here's a breakdown of what each part of the code does:

Imports

import streamlit as st
from dotenv import load_dotenv
from PyPDF2 import PdfReader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings, HuggingFaceInstructEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from htmlTemplates import css, bot_template, user_template
from langchain.llms import HuggingFaceHub
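Note that htmlTemplates is a local module, not a library, and its contents are not shown in this issue. A minimal sketch of what it must provide (the markup and styling here are assumptions; the only hard requirement is the {{MSG}} placeholder that handle_userinput substitutes):

# htmlTemplates.py (illustrative sketch, not the author's actual file)
css = """
<style>
.chat-message { padding: 1rem; border-radius: 0.5rem; margin-bottom: 1rem; }
.chat-message.user { background-color: #2b313e; color: #fff; }
.chat-message.bot { background-color: #475063; color: #fff; }
</style>
"""

user_template = '<div class="chat-message user">{{MSG}}</div>'
bot_template = '<div class="chat-message bot">{{MSG}}</div>'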

Functions

get_pdf_text(pdf_docs)

Extracts text from the uploaded PDF documents.

def get_pdf_text(pdf_docs):
    text = ""
    for pdf in pdf_docs:
        pdf_reader = PdfReader(pdf)
        for page in pdf_reader.pages:
            # extract_text() returns None for image-only pages, so fall
            # back to an empty string to avoid a TypeError on concatenation
            text += page.extract_text() or ""
    return text

get_text_chunks(text)

Splits the extracted text into smaller chunks.

def get_text_chunks(text):
    text_splitter = CharacterTextSplitter(
        separator="\n",
        chunk_size=1000,
        chunk_overlap=200,
        length_function=len
    )
    chunks = text_splitter.split_text(text)
    return chunks
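As a rough illustration of those parameters (a hypothetical snippet, not part of the app): with chunk_size=1000 characters and chunk_overlap=200, consecutive chunks share their boundary text, so a sentence that straddles a split still appears whole in at least one chunk.

dummy_text = ("word " * 40 + "\n") * 50   # 50 lines of 200 characters each
chunks = get_text_chunks(dummy_text)
print(len(chunks))                          # ~17 chunks for ~10,000 characters
print(max(len(c) for c in chunks) <= 1000)  # True: no chunk exceeds chunk_size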

get_vectorstore(text_chunks)

Generates embeddings for the text chunks and creates a vector store using FAISS.

def get_vectorstore(text_chunks):
    embeddings = OpenAIEmbeddings()
    vectorstore = FAISS.from_texts(texts=text_chunks, embedding=embeddings)
    return vectorstore
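The imports also pull in HuggingFaceInstructEmbeddings, which is never used below. As a sketch of how the OpenAI call could be swapped for a local model (an assumption, not the author's setup; it additionally requires the InstructorEmbedding and sentence-transformers packages, and the model name is illustrative):

def get_vectorstore(text_chunks):
    # local instructor embeddings instead of the OpenAI API; no API key
    # needed, but noticeably slower on CPU
    embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl")
    return FAISS.from_texts(texts=text_chunks, embedding=embeddings)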

get_conversation_chain(vectorstore)

Creates a conversational retrieval chain using the vector store and a language model.

def get_conversation_chain(vectorstore):
    llm = ChatOpenAI()
    memory = ConversationBufferMemory(
        memory_key='chat_history', return_messages=True)
    conversation_chain = ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=vectorstore.as_retriever(),
        memory=memory
    )
    return conversation_chain
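Similarly, the unused HuggingFaceHub import suggests the chat model can be replaced with a hosted open model. A sketch, assuming HUGGINGFACEHUB_API_TOKEN is set in .env (the repo id and parameters are illustrative):

def get_conversation_chain(vectorstore):
    # hosted open model instead of ChatOpenAI; answer quality will differ
    llm = HuggingFaceHub(repo_id="google/flan-t5-xxl",
                         model_kwargs={"temperature": 0.5, "max_length": 512})
    memory = ConversationBufferMemory(
        memory_key='chat_history', return_messages=True)
    return ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=vectorstore.as_retriever(),
        memory=memory
    )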

handle_userinput(user_question)

Handles user input by generating responses using the conversation chain and updating the chat history.

def handle_userinput(user_question):
    # guard against questions asked before any PDFs have been processed
    if st.session_state.conversation is None:
        st.warning("Please upload and process your PDFs first.")
        return

    response = st.session_state.conversation({'question': user_question})
    st.session_state.chat_history = response['chat_history']

    # chat_history alternates human and AI messages, so even indices are
    # user turns and odd indices are bot replies
    for i, message in enumerate(st.session_state.chat_history):
        if i % 2 == 0:
            st.write(user_template.replace(
                "{{MSG}}", message.content), unsafe_allow_html=True)
        else:
            st.write(bot_template.replace(
                "{{MSG}}", message.content), unsafe_allow_html=True)

Main Application Logic

main()

Sets up the Streamlit application, handles PDF uploads, and processes the PDFs.

def main():
    load_dotenv()
    st.set_page_config(page_title="Chat with multiple PDFs",
                       page_icon=":books:")
    st.write(css, unsafe_allow_html=True)

    if "conversation" not in st.session_state:
        st.session_state.conversation = None
    if "chat_history" not in st.session_state:
        st.session_state.chat_history = None

    st.header("Chat with multiple PDFs :books:")
    user_question = st.text_input("Ask a question about your documents:")
    if user_question:
        handle_userinput(user_question)

    with st.sidebar:
        st.subheader("Your documents")
        pdf_docs = st.file_uploader(
            "Upload your PDFs here and click on 'Process'", accept_multiple_files=True)
        if st.button("Process"):
            with st.spinner("Processing"):
                # get pdf text
                raw_text = get_pdf_text(pdf_docs)

                # get the text chunks
                text_chunks = get_text_chunks(raw_text)

                # create vector store
                vectorstore = get_vectorstore(text_chunks)

                # create conversation chain
                st.session_state.conversation = get_conversation_chain(
                    vectorstore)

if __name__ == '__main__':
    main()

Explanation of the Main Logic:

1. Environment Setup:
   - load_dotenv(): Loads environment variables (including the API keys) from the .env file.
   - st.set_page_config(): Configures the Streamlit page title and icon.
   - st.write(css, unsafe_allow_html=True): Applies the custom CSS styling.

2. Session State Initialization:
   - Initializes session state variables for the conversation chain and chat history if they don't already exist.

3. User Interface:
   - Header: Displays the main header of the application.
   - User Question Input: Provides a text input for the user to ask questions about the uploaded documents.
   - Sidebar: Allows users to upload multiple PDF files and click "Process" to start processing.

4. Processing Logic:
   - When the user clicks "Process", the application:
     - extracts text from the uploaded PDFs,
     - splits the text into chunks,
     - creates a vector store from the chunks, and
     - initializes a conversational retrieval chain over that store.

5. Handle User Input:
   - When the user asks a question, handle_userinput generates a response through the conversation chain, updates the chat history, and renders it in the chat interface.

The application thus lets users interrogate the content of multiple PDFs through a conversational interface, combining a language model with vector search over the document text.