langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

ChatGoogleGenerativeAI: **TypeError** when using @tool decorated methods to perform Tool Calling using Gemini. #26083

Open AkashBais opened 3 months ago

AkashBais commented 3 months ago

Checked other resources

Example Code

import os

from langchain_google_genai import ChatGoogleGenerativeAI

model_name = "gemini-1.5-flash-001"  # "gemini-1.5-pro-latest"
llm = ChatGoogleGenerativeAI(
    model=model_name,
    google_api_key=os.getenv("GOOGLE_API_KEY_2"),
    max_output_tokens=1024,
    temperature=0,
    verbose=False,
)

llm_with_tools = llm.bind(functions=[search_web, split_documents])
response = llm_with_tools.invoke("Search for LangChain on DuckDuckGo")

Tool definitions

from typing import Annotated, List, Optional, Union

from langchain_core.documents import Document as LangchainDocument
from langchain_core.tools import InjectedToolArg, tool

@tool(parse_docstring=True)
def split_documents(
    chunk_size: int,
    knowledge_base: Annotated[Union[List[LangchainDocument], LangchainDocument], InjectedToolArg],
    chunk_overlap: Optional[int] = None,
    tokenizer_name: Annotated[Optional[str], InjectedToolArg] = config.EMBEDDING_MODEL_NAME,  # config is the reporter's own settings module
) -> List[LangchainDocument]:
    """
    An implementation of a text splitter that uses a HuggingFace tokenizer to count length.
    Call this to chunk a LangChain Document or a list of LangChain Documents.
    Returns a list of chunked LangChain Document(s).

    Args:
      chunk_size: Size of the chunks to create, in number of tokens. Depends on the context window length of the embedding model.
      knowledge_base: List of LangChain Document(s) to process. To be passed at run time.
      chunk_overlap: Size of the overlap between chunks, in number of tokens.
      tokenizer_name: Name of the tokenizer model used to tokenize the Document(s) in knowledge_base before chunking.
    """
    # Tool code
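The actual tool body is elided above. Purely for illustration, a minimal sketch of what such a splitter could look like, built on RecursiveCharacterTextSplitter.from_huggingface_tokenizer from langchain_text_splitters (the helper name, default tokenizer, and overall approach here are assumptions, not the reporter's code):

# Hypothetical sketch only; the reporter's real split_documents body is not shown.
from typing import List, Optional

from langchain_core.documents import Document as LangchainDocument
from langchain_text_splitters import RecursiveCharacterTextSplitter
from transformers import AutoTokenizer

def split_documents_sketch(
    chunk_size: int,
    knowledge_base: List[LangchainDocument],
    chunk_overlap: Optional[int] = None,
    tokenizer_name: str = "sentence-transformers/all-MiniLM-L6-v2",  # assumed default
) -> List[LangchainDocument]:
    # Length is counted in tokens of the chosen HuggingFace tokenizer, as the docstring describes.
    tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
    splitter = RecursiveCharacterTextSplitter.from_huggingface_tokenizer(
        tokenizer,
        chunk_size=chunk_size,
        chunk_overlap=chunk_overlap or 0,
    )
    return splitter.split_documents(knowledge_base)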

@tool(parse_docstring=True)
def search_web(
    query: str,
    engine: Optional[str] = "Google",
    num_results: Optional[int] = 5,
    truncate_threshold: Optional[int] = None,
) -> List[LangchainDocument]:
    """
    Performs a web search for the passed query using the desired search engine's API.
    It then scrapes the result pages and returns their content as a list of LangChain Documents.

    Args:
      query: Query to perform the web search for.
      engine: The search engine to use for the web search.
      num_results: The number of search results to return from the web search.
      truncate_threshold: Threshold, in number of characters, at which to truncate each web page's content.
    """
    # Tool code
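The search_web body is elided as well. Again only as an illustration, a minimal sketch of such a tool using langchain_community's DuckDuckGoSearchAPIWrapper and WebBaseLoader (both assumed choices; the reporter's actual implementation may differ):

# Hypothetical sketch only; the reporter's real search_web body is not shown.
from typing import List, Optional

from langchain_community.document_loaders import WebBaseLoader
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper
from langchain_core.documents import Document as LangchainDocument

def search_web_sketch(
    query: str,
    engine: Optional[str] = "Google",
    num_results: Optional[int] = 5,
    truncate_threshold: Optional[int] = None,
) -> List[LangchainDocument]:
    # Only a DuckDuckGo branch is sketched; other engines would need their own API clients.
    if engine != "DuckDuckGo":
        raise NotImplementedError(f"engine {engine!r} not sketched")
    hits = DuckDuckGoSearchAPIWrapper().results(query, max_results=num_results)
    docs: List[LangchainDocument] = []
    for hit in hits:
        # Scrape each result page and optionally truncate its content.
        for doc in WebBaseLoader(hit["link"]).load():
            if truncate_threshold is not None:
                doc.page_content = doc.page_content[:truncate_threshold]
            docs.append(doc)
    return docs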

Error Message and Stack Trace (if applicable)

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-12-af095bbdae0b> in <cell line: 1>()
----> 1 responce = llm_with_tools.invoke("Search for LangChain on DuckDuckGo")

10 frames
/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py in invoke(self, input, config, **kwargs)
   5090         **kwargs: Optional[Any],
   5091     ) -> Output:
-> 5092         return self.bound.invoke(
   5093             input,
   5094             self._merge_configs(config),

/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in invoke(self, input, config, stop, **kwargs)
    275         return cast(
    276             ChatGeneration,
--> 277             self.generate_prompt(
    278                 [self._convert_input(input)],
    279                 stop=stop,

/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in generate_prompt(self, prompts, stop, callbacks, **kwargs)
    775     ) -> LLMResult:
    776         prompt_messages = [p.to_messages() for p in prompts]
--> 777         return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
    778 
    779     async def agenerate_prompt(

/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    632                 if run_managers:
    633                     run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 634                 raise e
    635         flattened_outputs = [
    636             LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item]

/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    622             try:
    623                 results.append(
--> 624                     self._generate_with_cache(
    625                         m,
    626                         stop=stop,

/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in _generate_with_cache(self, messages, stop, run_manager, **kwargs)
    844         else:
    845             if inspect.signature(self._generate).parameters.get("run_manager"):
--> 846                 result = self._generate(
    847                     messages, stop=stop, run_manager=run_manager, **kwargs
    848                 )

/usr/local/lib/python3.10/dist-packages/langchain_google_vertexai/chat_models.py in _generate(self, messages, stop, run_manager, stream, **kwargs)
   1163         if not self._is_gemini_model:
   1164             return self._generate_non_gemini(messages, stop=stop, **kwargs)
-> 1165         return self._generate_gemini(
   1166             messages=messages,
   1167             stop=stop,

/usr/local/lib/python3.10/dist-packages/langchain_google_vertexai/chat_models.py in _generate_gemini(self, messages, stop, run_manager, **kwargs)
   1319         **kwargs: Any,
   1320     ) -> ChatResult:
-> 1321         request = self._prepare_request_gemini(messages=messages, stop=stop, **kwargs)
   1322         response = _completion_with_retry(
   1323             self.prediction_client.generate_content,

/usr/local/lib/python3.10/dist-packages/langchain_google_vertexai/chat_models.py in _prepare_request_gemini(self, messages, stop, stream, tools, functions, tool_config, safety_settings, cached_content, tool_choice, **kwargs)
   1234     ) -> GenerateContentRequest:
   1235         system_instruction, contents = _parse_chat_history_gemini(messages)
-> 1236         formatted_tools = self._tools_gemini(tools=tools, functions=functions)
   1237         if tool_config:
   1238             tool_config = self._tool_config_gemini(tool_config=tool_config)

/usr/local/lib/python3.10/dist-packages/langchain_google_vertexai/chat_models.py in _tools_gemini(self, tools, functions)
   1375             )
   1376         if tools:
-> 1377             return [_format_to_gapic_tool(tools)]
   1378         if functions:
   1379             return [_format_to_gapic_tool(functions)]

/usr/local/lib/python3.10/dist-packages/langchain_google_vertexai/functions_utils.py in _format_to_gapic_tool(tools)
    214             ):
    215                 fd = _format_to_gapic_function_declaration(tool)
--> 216                 gapic_tool.function_declarations.append(fd)
    217                 continue
    218             # _ToolDictLike

TypeError: Parameter to MergeFrom() must be instance of same class: expected google.cloud.aiplatform.v1beta1.FunctionDeclaration got FunctionDeclaration.

Description

I am trying to use ChatGoogleGenerativeAI for tool calling and an agentic AI implementation. When executing the line below, I expect it to call search_web with appropriate arguments: response = llm_with_tools.invoke("Search for LangChain on DuckDuckGo")

But instead it gives me the following error: TypeError: Parameter to MergeFrom() must be instance of same class: expected google.cloud.aiplatform.v1beta1.FunctionDeclaration got FunctionDeclaration.

I am using LangChain's @tool decorator to create these tools, and I assumed these would be compatible internally.
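For context, the final traceback frame is in langchain_google_vertexai, and the MergeFrom() complaint suggests that two identically named proto classes from different Google client libraries are being mixed. A minimal check of that hypothesis, assuming both google-ai-generativelanguage and google-cloud-aiplatform are installed:

# Illustrative only: each Google SDK defines its own FunctionDeclaration proto,
# and protobuf's MergeFrom() refuses to copy between the two.
from google.ai.generativelanguage_v1beta.types import FunctionDeclaration as GenAIFunctionDeclaration
from google.cloud.aiplatform_v1beta1.types import FunctionDeclaration as VertexFunctionDeclaration

print(GenAIFunctionDeclaration is VertexFunctionDeclaration)  # False: distinct classes
print(GenAIFunctionDeclaration.__module__)
print(VertexFunctionDeclaration.__module__)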

System Info

System Information

OS: Linux
OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
Python Version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]

Package Information

langchain_core: 0.2.38
langchain: 0.2.15
langchain_community: 0.2.15
langsmith: 0.1.114
langchain_google_genai: 1.0.10
langchain_google_vertexai: 1.0.10
langchain_groq: 0.1.9
langchain_text_splitters: 0.2.4

Optional packages not installed

langgraph
langserve

Other Dependencies

aiohttp: 3.10.5
anthropic[vertexai]: Installed. No version info available.
async-timeout: 4.0.3
dataclasses-json: 0.6.7
google-cloud-aiplatform: 1.65.0
google-cloud-storage: 2.18.2
google-generativeai: 0.7.2
groq: 0.11.0
httpx: 0.27.2
httpx-sse: 0.4.0
jsonpatch: 1.33
langchain-mistralai: Installed. No version info available.
numpy: 1.26.4
orjson: 3.10.7
packaging: 24.1
pillow: 10.4.0
pydantic: 2.8.2
PyYAML: 6.0.2
requests: 2.32.3
SQLAlchemy: 2.0.34
tenacity: 8.5.0
typing-extensions: 4.12.2

AtmehEsraa commented 2 months ago

Did you find any solution?

baskaryan commented 1 month ago

Do you still see this issue if you use bind_tools (the recommended way to pass in tools)?

llm_with_tools = llm.bind_tools([search_web, split_documents])
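For completeness, a minimal end-to-end sketch of that suggested path (model name and API-key variable copied from the original report; whether this resolves the proto mismatch in the reporter's environment is unconfirmed):

# Sketch of the suggested fix: pass the @tool objects via bind_tools, not bind(functions=...).
import os

from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(
    model="gemini-1.5-flash-001",
    google_api_key=os.getenv("GOOGLE_API_KEY_2"),
    temperature=0,
)
llm_with_tools = llm.bind_tools([search_web, split_documents])

response = llm_with_tools.invoke("Search for LangChain on DuckDuckGo")
# With tool calling, the requested call arrives on response.tool_calls, not response.content.
for tool_call in response.tool_calls:
    print(tool_call["name"], tool_call["args"])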