langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License
88.75k stars · 13.95k forks

Invoking agent executor with Google GenerativeAI: AttributeError: module 'google.generativeai' has no attribute 'generate_content' #17251

Open KallieLev opened 5 months ago

KallieLev commented 5 months ago

Checked other resources

Example Code

from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompts import PromptTemplate

from langchain_google_genai import GoogleGenerativeAI

def build_executor(llm: BaseLanguageModel, prompt: PromptTemplate):
    llm_with_stop = llm.bind(stop=["\nObservation"])
    agent = (
        {
            "input": lambda x: x["input"],
            "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
        }
        | prompt
        | llm_with_stop
        | ReActSingleInputOutputParser()
    )

    return AgentExecutor(agent=agent, tools=[])

llm = GoogleGenerativeAI(model='models/text-bison-001')
input_variables = ["input", "agent_scratchpad"]
prompt = PromptTemplate.from_file(
    "path/to/agent_template.txt", input_variables=input_variables
)
prompt_template = prompt.partial(custom_prompt="")
executor = build_executor(llm, prompt_template)
print(executor.invoke(input={"input": "What are some of the pros and cons of Python as a programming language?"}))
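For context, format_log_to_str in the snippet above flattens the agent's (action, observation) pairs into the running ReAct scratchpad text. A simplified, self-contained sketch of that behavior — FakeAction and format_log_to_str_sketch are illustrative stand-ins, not the LangChain source:

```python
from dataclasses import dataclass


@dataclass
class FakeAction:
    # Stand-in for an AgentAction; .log is the raw LLM text that produced it.
    log: str


def format_log_to_str_sketch(intermediate_steps,
                             observation_prefix="Observation: ",
                             llm_prefix="Thought: "):
    """Illustrative sketch: join each (action, observation) pair into the
    running ReAct scratchpad string fed back to the LLM."""
    thoughts = ""
    for action, observation in intermediate_steps:
        thoughts += action.log
        thoughts += f"\n{observation_prefix}{observation}\n{llm_prefix}"
    return thoughts


steps = [(FakeAction(log="Thought: I should answer directly."), "no tool used")]
print(format_log_to_str_sketch(steps))
```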

This is the prompt template I used:

you are an AI assistant, helping a Human with a task. The Human has asked you a question.
When you have a response to say to the Human, you MUST use the format:

Thought: Do I need to use a tool? No
Final Answer: [your response here]

Begin!

Previous conversation history:

New input: {input}
{agent_scratchpad}

Error Message and Stack Trace (if applicable)

Traceback (most recent call last):
  File "/Users/kallie.levy/dev/repos/app-common/app_common/executor/build_executor.py", line 33, in <module>
    print(executor.invoke(input={"input": "What are some of the pros and cons of Python as a programming language?"}))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 162, in invoke
    raise e
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1376, in _call
    next_step_output = self._take_next_step(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1102, in _take_next_step
    [
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1102, in <listcomp>
    [
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1130, in _iter_next_step
    output = self.agent.plan(
             ^^^^^^^^^^^^^^^^
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 392, in plan
    for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2424, in stream
    yield from self.transform(iter([input]), config, **kwargs)
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2411, in transform
    yield from self._transform_stream_with_config(
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1497, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2375, in _transform
    for output in final_pipeline:
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1035, in transform
    for chunk in input:
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4168, in transform
    yield from self.bound.transform(
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1045, in transform
    yield from self.stream(final, config, **kwargs)
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 414, in stream
    raise e
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 398, in stream
    for chunk in self._stream(
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_google_genai/llms.py", line 225, in _stream
    for stream_resp in _completion_with_retry(
                       ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_google_genai/llms.py", line 65, in _completion_with_retry
    return _completion_with_retry(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
           ^^^^^^^^^^^^
  File "/usr/local/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_google_genai/llms.py", line 60, in _completion_with_retry
    return llm.client.generate_content(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'google.generativeai' has no attribute 'generate_content'. Did you mean: 'generate_text'?

Description

I'm creating an AgentExecutor with the Google GenerativeAI LLM, but since langchain version 0.1.1 I receive this error. With langchain <= 0.1.0, this script works.

System Info

python==3.11.6
langchain==0.1.1
langchain-community==0.0.19
langchain-core==0.1.21
langchain-google-genai==0.0.5
langchain-google-vertexai==0.0.3
dosubot[bot] commented 5 months ago

🤖

The error you're encountering is likely due to a change in the LangChain framework or the Google GenerativeAI module between versions. In the provided context, there is no mention of a 'generate_content' attribute in the 'google.generativeai' module.

The GoogleGenerativeAI class in LangChain uses the 'invoke' method to generate text, as shown in the example:

from langchain_google_genai import GoogleGenerativeAI

llm = GoogleGenerativeAI(model="gemini-pro")
llm.invoke("Once upon a time, a library called LangChain")

If you're trying to use a 'generate_content' method, it might have been deprecated or renamed in the newer versions of LangChain or the Google GenerativeAI module. You should check the documentation or source code of these modules to verify the available methods and their usage.
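One quick way to run that check is to test which generate_* attributes the module actually exposes. An illustrative sketch using a stand-in namespace rather than the real google.generativeai module (the PaLM-era package exposes generate_text but not generate_content, matching the "Did you mean: 'generate_text'?" hint in the traceback):

```python
import types

# Stand-in for the installed google.generativeai module surface.
# Inspect the real module the same way with hasattr()/dir().
fake_genai = types.SimpleNamespace(generate_text=lambda **kwargs: None)

print(hasattr(fake_genai, "generate_content"))  # False
print(hasattr(fake_genai, "generate_text"))     # True
```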

If 'generate_content' was a method in an older version of LangChain or Google GenerativeAI, you might need to update your script to use the 'invoke' method instead, or any other method that replaces 'generate_content' in the newer versions.

Regarding the changes between LangChain versions 0.1.0 and 0.1.1, the AttributeError could have been caused by the relocation of the "create_spark_dataframe_agent" from the main LangChain package to the experimental package. In version 0.1.1, an ImportError is raised when trying to access this agent, indicating that it has been moved to "langchain_experimental".

To resolve this, you need to install the langchain experimental package and update your import statement from the old path to the new path. The old path is "langchain." + here + "." + name and the new path is "langchain_experimental." + here + "." + name.

Here is the relevant code snippet:

if name == "create_spark_dataframe_agent":
    HERE = Path(__file__).parents[3]
    here = as_import_path(Path(__file__).parent, relative_to=HERE)

    old_path = "langchain." + here + "." + name
    new_path = "langchain_experimental." + here + "." + name
    raise ImportError(
        "This agent has been moved to langchain experiment. "
        "This agent relies on python REPL tool under the hood, so to use it "
        "safely please sandbox the python REPL. "
        "Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md "
        "and https://github.com/langchain-ai/langchain/discussions/11680"
        "To keep using this code as is, install langchain experimental and "
        f"update your import statement from:\n `{old_path}` to `{new_path}`."
    )

You can find more details in the LangChain repository.


dudesparsh commented 5 months ago

Hi @KallieLev,

I think this is not exactly an issue with LangChain, but rather a wider issue for everyone building on Google's APIs. Earlier, the methods for using Google's PaLM model were different; with the introduction of the Gemini model APIs, Google updated its methods and the way we interact with these LLMs.

Based on what you have mentioned above, I believe these methods are available for the Google Gemini models but not for PaLM. In that case, you can either try using a Gemini model with the above code, or update to (or use) the correct LLM wrapper for the PaLM APIs.

If the above diagnosis is right, I can take up this issue and work on it further.
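If the PaLM-vs-Gemini split is indeed the cause, one option is a small shim that dispatches to whichever entry point the installed module provides. A hypothetical sketch (the helper name and fallback order are my assumptions, not LangChain code), exercised against a stand-in PaLM-era module:

```python
import types


def pick_generation_fn(genai_module):
    """Hypothetical helper: return the first generation entry point the
    module exposes, trying the Gemini-era name before the PaLM-era one."""
    for name in ("generate_content", "generate_text"):
        fn = getattr(genai_module, name, None)
        if fn is not None:
            return name, fn
    raise AttributeError("no known generation entry point found")


# Exercise it against a stand-in PaLM-era module surface.
palm_like = types.SimpleNamespace(generate_text=lambda **kwargs: "palm-reply")
name, fn = pick_generation_fn(palm_like)
print(name)             # generate_text
print(fn(prompt="hi"))  # palm-reply
```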

KallieLev commented 4 months ago

I believe this is an issue with LangChain, since from a certain LangChain version this functionality no longer works, while Google's package hasn't changed. That means LangChain's use of Google's package has been broken since version 0.1.1. Also, if there is an incompatibility between LangChain and certain Google models, I would have expected it to be mentioned in the docs.
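For what it's worth, the failure mode in the stack trace can be reproduced without LangChain at all: the call site invokes generate_content on a module surface that, in the PaLM era, only offers generate_text. A stand-in illustration (not the real google.generativeai module):

```python
import types

# Stand-in for the PaLM-era google.generativeai module surface.
palm_module = types.SimpleNamespace(
    generate_text=lambda **kwargs: {"output": "ok"}
)

# Mirrors the failing call site from the stack trace:
# llm.client.generate_content(...) in langchain_google_genai/llms.py.
try:
    palm_module.generate_content(prompt="hello")
    raised = False
except AttributeError:
    raised = True
print(raised)  # True
```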