alejandro-ao / ask-multiple-pdfs

A Langchain app that allows you to chat with multiple PDFs

Help regarding OpenAI response manipulation #47

Closed Dipankar1997161 closed 1 year ago

Dipankar1997161 commented 1 year ago

Hey @alejandro-ao,

I used your project and tuned it for my GUI, and I am getting correct responses.

However, as we know, it sometimes does not know the answer when it is not present in the file. Is there a way to display an answer only when it is actually found inside the file, and suppress the automatic "answer not found" type of message?
I have 3 files, so it displays 3 responses, but I only want to display an answer when it is actually found.
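
For reference, one common way to get this behaviour (a sketch only, not necessarily what was eventually done to close this issue; the prompt wording and the NO_ANSWER sentinel are illustrative assumptions) is to override the question-answering prompt that ConversationalRetrievalChain uses, so that the model returns a fixed sentinel when the answer is not in the retrieved context, and then filter on that sentinel before printing anything:

from langchain.prompts import PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

# Illustrative prompt: the NO_ANSWER sentinel is an assumption, not part of the project.
QA_PROMPT = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "Answer the question using only the context below. "
        "If the answer is not contained in the context, reply with exactly NO_ANSWER.\n\n"
        "Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    ),
)

def get_strict_conversation_chain(vectorstore):
    llm = ChatOpenAI(model_name="gpt-3.5-turbo")
    memory = ConversationBufferMemory(
        memory_key="chat_history", return_messages=True)
    return ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=vectorstore.as_retriever(),
        memory=memory,
        combine_docs_chain_kwargs={"prompt": QA_PROMPT},  # replaces the default QA prompt
    )

# Only print answers that were actually found in the documents:
# response = chain({"question": user_question})
# if "NO_ANSWER" not in response["answer"]:
#     print(f"Answer: {response['answer']}")

With that in place, only answers grounded in the PDFs get printed, and the per-file "not found" replies can simply be skipped.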

[Screenshot attached: 2023-11-20 at 02:41:16]

Here is the code I wrote:

import os

from dotenv import load_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

# get_pdf_text, get_text_chunks and get_vectorstore are the helper functions
# defined elsewhere in the script (as in the original project).
# The colour variables below are assumed ANSI escape codes; the originals
# are defined elsewhere in my script.
green = "\033[0;32m"
yellow = "\033[0;33m"
white = "\033[0;39m"


def get_conversation_chain(vectorstore):
    llm = ChatOpenAI(model_name='gpt-3.5-turbo')

    memory = ConversationBufferMemory(
        memory_key='chat_history', return_messages=True)
    conversation_chain = ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=vectorstore.as_retriever(),
        memory=memory
    )
    return conversation_chain


def handle_userinput(conversation, user_question, pdf_file):
    response = conversation({'question': user_question})
    chat_history = response['chat_history']

    # Even-indexed messages are the questions, odd-indexed ones the answers.
    for i, message in enumerate(chat_history):
        if i % 2 == 0:
            print(f"Question:  {green}{message.content}")
        else:
            print(f"Answer:  {green}{message.content}")


def main():
    print(f"{yellow}---------------------------------------------------------------------------------")
    print('Welcome to the Multi-chat. You are now ready to start interacting with your documents')
    print('---------------------------------------------------------------------------------')

    load_dotenv('.env')
    root = "/home/ndip/QnA_model/Pdf_files"

    while True:
        user_question = input(f"{white}Ask a question about your documents:")
        # A separate chain is built for every PDF, so each question currently
        # produces one answer per file.
        for file in os.listdir(root):
            if file.endswith(".pdf"):
                pdf_path = os.path.join(root, file)
                raw_text = get_pdf_text(pdf_path)
                text_chunks = get_text_chunks(raw_text)
                vectorstore = get_vectorstore(text_chunks)

                conversation = get_conversation_chain(vectorstore)
                handle_userinput(conversation, user_question, file)


if __name__ == '__main__':
    main()
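
For what it's worth, because this loop builds a separate chain for each PDF, every question produces one reply per file, which is why three responses show up for three documents. One alternative (a sketch only, reusing the get_pdf_text, get_text_chunks and get_vectorstore helpers referenced above; build_combined_vectorstore is a hypothetical name) is to index all the PDFs into a single vector store so that each question yields a single answer:

import os

def build_combined_vectorstore(root):
    # Collect text chunks from every PDF and build one shared index.
    all_chunks = []
    for file in os.listdir(root):
        if file.endswith(".pdf"):
            raw_text = get_pdf_text(os.path.join(root, file))
            all_chunks.extend(get_text_chunks(raw_text))
    return get_vectorstore(all_chunks)

# Built once, outside the while loop:
# conversation = get_conversation_chain(build_combined_vectorstore(root))
# handle_userinput(conversation, user_question, None)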
Dipankar1997161 commented 1 year ago

Solved it

tomasbrogueira commented 11 months ago

What did you do?