langchain-ai / langchain

πŸ¦œπŸ”— Build context-aware reasoning applications
https://python.langchain.com
MIT License

Issue: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead. warn_deprecated( #15741

Closed. ArslanKAS closed this issue 7 months ago.

ArslanKAS commented 10 months ago

Issue you'd like to raise.

Everything was working fine, but now I'm suddenly receiving all sorts of LangChain deprecation warnings. I installed the langchain_openai package and the langchain_community package and replaced all the imports with the ones suggested in the error. That went well, but now I'm stuck on this issue.

The error is: /home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:115: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead. warn_deprecated(

In my code I've replaced every `run` with `invoke`, but I don't know why this warning still comes up. I'm also using a LangChain summarizer; I checked the documentation, and my code matches it exactly. I don't want to suppress the warning, I want to resolve it so it won't cause any issues in the future.
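
For context, this is the replacement pattern I applied everywhere (a minimal sketch; `llm_chain` stands in for any of my chains):

# old style, now deprecated
result = llm_chain.run(prompt=prompt, text=transcript)

# new style
result = llm_chain.invoke({"prompt": prompt, "text": transcript})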

This is the only code I have that relates to LangChain:


# Langchain Libraries
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.docstore.document import Document
from langchain_community.callbacks import get_openai_callback
from langchain.text_splitter import TokenTextSplitter
from langchain.chains.summarize import load_summarize_chain
from langchain_core.output_parsers import StrOutputParser

    # ------------------------------------------------------------
    # General ChatGPT function that's required for all the Call-type Prompts
    def chatgpt_function(prompt, transcript):
        model_kwargs = {"seed": 235, "top_p": 0.01}
        # `tokens` (the max_tokens budget) is defined elsewhere in my code
        llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0, model_kwargs=model_kwargs, max_tokens=tokens)
        template = """
        {prompt}

        Call Transcript: ```{text}```
        """
        prompt_main = PromptTemplate(
              input_variables=["prompt", "text"],
              template=template,)

        with get_openai_callback() as cb:
#             llm_chain = LLMChain(llm=llm, prompt=prompt_main)
            output_parser = StrOutputParser()
            llm_chain = prompt_main | llm | output_parser
            all_text = str(template) + str(prompt) + str(transcript)
            threshold = (llm.get_num_tokens(text=all_text) + tokens)
#             print("Total Tokens:",threshold)
            if int(threshold) <= 4000:
                chatgpt_output = llm_chain.invoke({"prompt":prompt, "text":transcript})
            else:
                transcript_ = token_limiter(transcript)
                chatgpt_output = llm_chain.invoke({"prompt":prompt, "text":transcript_})
        return chatgpt_output

    # -------------------------------------------------------
    # Function to get refined summary if Transcript is long
    def token_limiter(transcript):
        text_splitter = TokenTextSplitter(chunk_size=3000, chunk_overlap=200)
        texts = text_splitter.split_text(transcript)
        docs = [Document(page_content=text) for text in texts]

        question_prompt_template = """
        I'm providing you a refined summary of a call transcript, enclosed in triple backticks. Summarize it further.

        Call Transcript: ```{text}```

        Provide me a summary of the transcript. Do not add any title/heading like "summary" or anything else; just give the summary text.
        """

        question_prompt = PromptTemplate(
            template=question_prompt_template, input_variables=["text"]
        )

        refine_prompt_template = """
        Write a summary of the following text enclosed in triple backticks (```).

        ```{text}```

        """

        refine_prompt = PromptTemplate(
            template=refine_prompt_template, input_variables=["text"]
        )

        llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0, max_tokens=800)
        refine_chain = load_summarize_chain(
            llm,
            chain_type="refine",
            question_prompt=question_prompt,
            refine_prompt=refine_prompt,
            return_intermediate_steps=True,
        )

        summary_refine = refine_chain({"input_documents": docs}, return_only_outputs=True)
        return summary_refine['output_text']

### Suggestion:

Please let me know what I need to change in my code to get rid of that Deprecation warning. Thank you
dosubot[bot] commented 10 months ago

πŸ€–

Hi @ArslanKAS! Good to see you again. I hope you're doing well. Let's dive into this deprecation warning issue you've encountered.

Based on the information you've provided, it seems like you've already replaced all instances of run with invoke in your code, which is the correct step to take in response to the deprecation warning. However, the warning is still appearing, which suggests that the deprecated __call__ method might be used somewhere else in your code or in the libraries you're using.

Unfortunately, without the exact location or the function that is causing the LangChainDeprecationWarning, it's hard to provide a precise solution. The warning message usually includes the file and line number where the deprecated function is called. If you could provide that information, it would be easier to pinpoint the issue.

In the meantime, you can try to search for __call__ in your code and in the LangChain libraries you're using. If you find any instance of __call__, try to replace it with invoke or the appropriate method suggested in the deprecation warning.

Here's a general example of how you can do it:

# This is your original code
# obj is an instance of a class in LangChain
result = obj.__call__(arg1, arg2)

# This is how you can modify it
# alternative_call is the alternative function for __call__
result = obj.alternative_call(arg1, arg2)

Please replace alternative_call with the actual alternative function for __call__.

If you're still having trouble, please provide more information about the warning, such as the exact file and line number where the deprecated function is called. This will help in providing a more accurate solution.
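
If the warning itself doesn't show the call site, one way to locate it (a sketch; it assumes the warning class is importable from langchain_core._api, so adjust the import to your installed version) is to escalate the warning into an error so Python prints a full traceback:

import warnings
from langchain_core._api import LangChainDeprecationWarning

# Turn the deprecation warning into an exception; the resulting traceback
# points at the exact line that still triggers the deprecated __call__.
warnings.simplefilter("error", LangChainDeprecationWarning)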


hwchase17 commented 10 months ago

I think you need to use invoke here as well:

        summary_refine = refine_chain({"input_documents": docs}, return_only_outputs=True)

eg

        summary_refine = refine_chain.invoke({"input_documents": docs})

Can you try that and let me know if that fixes it?
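
For reference, invoke returns a dict rather than a bare string. A minimal sketch of reading the result (assuming the refine chain defined above, which sets return_intermediate_steps=True):

        result = refine_chain.invoke({"input_documents": docs})
        final_summary = result["output_text"]           # the final refined summary
        chunk_summaries = result["intermediate_steps"]  # the per-chunk summaries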

brockmartin commented 10 months ago

I was seeing a similar issue; this fix worked for me.

psycho-baller commented 10 months ago

@ArbazBamboowala try this

ArslanKAS commented 10 months ago

I think you need to use invoke here as well:

        summary_refine = refine_chain({"input_documents": docs}, return_only_outputs=True)

eg

        summary_refine = refine_chain.invoke({"input_documents": docs})

Can you try that and let me know if that fixes it?

Thank you so much, Sir Harrison. That worked. I just wanted to ask: will changes like these be happening very often? We're using LangChain a lot with OpenAI for our products, and if LangChain is going to change the method calls every now and then, could you please make the migration simpler so we can update an entire project easily? Thanks once again.

ArslanKAS commented 10 months ago

As Sir Harrison pointed out, you may need to change this:

response = chain( {"input_documents":docs, "question": user_question} , return_only_outputs=True)

To this

response = chain.invoke( {"input_documents":docs, "question": user_question} )
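
One extra note (a sketch, assuming `chain` comes from load_qa_chain with chain_type="stuff", whose output key is output_text): since invoke returns a dict, the answer is read like this:

response = chain.invoke( {"input_documents":docs, "question": user_question} )
answer = response["output_text"]  # the generated answer lives under "output_text"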

brockmartin commented 10 months ago

As Sir Harrison pointed out, you may need to change this:

response = chain( {"input_documents":docs, "question": user_question} , return_only_outputs=True)

To this

response = chain.invoke( {"input_documents":docs, "question": user_question} )

This is what I did as well, and it worked for me. @ArbazBamboowala

ArbazHamidBamboowala commented 10 months ago

@ArbazBamboowala try this

Thank you so much, it worked for me :) Also, could y'all please star my repositories? I'm still looking for a job, and every star would really help. I've been working hard on creating projects. Thanks a lot, I appreciate your help!

dave-espinosa commented 10 months ago

@ArbazBamboowala try this

Worked like a charm, thanks! πŸ‘

ArbazHamidBamboowala commented 10 months ago

Well, I got stuck on another issue. I added some login functionality, and now I get a blank page after running my Streamlit application. It works without the login code:

import streamlit as st
from PyPDF2 import PdfReader
from langchain.text_splitter import RecursiveCharacterTextSplitter
import os
from langchain_google_genai import GoogleGenerativeAIEmbeddings
import google.generativeai as genai
from langchain_community.vectorstores import FAISS
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.chains.question_answering import load_qa_chain
from langchain.prompts import PromptTemplate
from dotenv import load_dotenv
from googletrans import Translator
import pickle
from pathlib import Path

import streamlit_authenticator as stauth
# import pandas as pd

load_dotenv()
os.getenv("GOOGLE_API_KEY")
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

def get_pdf_text(pdf_docs):
    text=""
    for pdf in pdf_docs:
        pdf_reader= PdfReader(pdf)
        for page in pdf_reader.pages:
            text+= page.extract_text()
    return  text

def get_text_chunks(text):
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    chunks = text_splitter.split_text(text)
    return chunks

def get_vector_store(text_chunks):
    embeddings = GoogleGenerativeAIEmbeddings(model = "models/embedding-001")
    vector_store = FAISS.from_texts(text_chunks, embedding=embeddings)
    vector_store.save_local("faiss_index")

def get_conversational_chain():

    prompt_template = """
    Answer the question as detailed as possible from the provided context, make sure to provide all the details, if the answer is not in
    provided context just say, "answer is not available in the context", don't provide the wrong answer\n\n
    Context:\n {context}?\n
    Question: \n{question}\n

    Answer:
    """

    model = ChatGoogleGenerativeAI(model="gemini-pro",
                             temperature=0.3)

    prompt = PromptTemplate(template = prompt_template, input_variables = ["context", "question"])
    chain = load_qa_chain(model, chain_type="stuff", prompt=prompt)

    return chain

def user_input(user_question):
    embeddings = GoogleGenerativeAIEmbeddings(model = "models/embedding-001")
    translator = Translator()
    new_db = FAISS.load_local("faiss_index", embeddings)
    docs = new_db.similarity_search(user_question)

    chain = get_conversational_chain()

    response = chain.invoke(
        {"input_documents":docs, "question": user_question})

    # print(response)
    # st.write("Reply: ", response["output_text"])
    response_text_english = response["output_text"]
    response_text_original_language = translator.translate (response_text_english,
                                                           dest=translator.detect(response_text_english).lang).text
    print(response_text_original_language)
    st.write("Reply: ", response_text_original_language)

def main():
    # NOTE: st.set_page_config must be the first Streamlit command in the script,
    # otherwise Streamlit raises an exception
    st.set_page_config(page_title="Chat PDF", page_icon=':male-technologist:', initial_sidebar_state='collapsed')
    st.write("Debug Point 1")
    # --- USER AUTHENTICATION ---
    names = ["Peter Parker", "Rebecca Miller"]
    usernames = ["pparker", "rmiller"]

    # load hashed passwords
    file_path = Path(__file__).parent / "hashed_dw.pkl"
    with file_path.open("rb") as file:
        hashed_passwords = pickle.load(file)

    authenticator = stauth.Authenticate(names, usernames, hashed_passwords, "CHATBOT", "abcdef", cookie_expiry_days=30)

    name, authentication_status, username = authenticator.login("Login", "main")  # don't reuse `usernames` here; it would shadow the list above

    if authentication_status is False:
        st.error("Username/password is incorrect")

    if authentication_status is None:
        st.warning("Please enter your username and password")

    if authentication_status:
        st.write("Debug Point 4")

        st.header("Chat with PDF in Any Language you wantπŸ’")
        user_question = st.text_input("Ask a Question from the PDF Files")
        st.write("Debug Point 5")

        if user_question:
            user_input(user_question)
        with st.sidebar:
            authenticator.logout("Logout Now", "sidebar")
            st.sidebar.title(f"Welcome {name} to my chatbot")
            pdf_docs = st.file_uploader("Upload your PDF Files to chat with me and Click on the Submit & Process Button", accept_multiple_files=True)
            if st.button("Submit & Process, Thinking..."):
                with st.spinner("Taking time for thinking..."):
                    raw_text = get_pdf_text(pdf_docs)
                    text_chunks = get_text_chunks(raw_text)
                    get_vector_store(text_chunks)
                    st.success("Done")

# NOTE: this guard must sit at module level; nested inside main() (as it was),
# main() is never called and Streamlit renders a blank page
if __name__ == "__main__":
    main()
quiltedstars commented 7 months ago

I apologize if this shouldn't be posted here, but I was having the same issue with a PDF reader application that didn't have the `response = chain.invoke` section. The equivalent fix worked for mine:

result = qa_chain.invoke({"query": query})

In case anyone stumbles on this thread (like myself!)
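
For anyone adapting this (a minimal sketch, assuming qa_chain is a RetrievalQA chain, whose default input key is "query" and output key is "result"):

result = qa_chain.invoke({"query": query})
answer = result["result"]  # RetrievalQA returns a dict keyed by "result"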