langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

ChatOllama Fails Pydantic Model validations And is not able to be used as LangChain agent. #21299

Closed umairgillani93 closed 1 month ago

umairgillani93 commented 6 months ago

Example Code

import os
import sys

from langchain import hub
from langchain.agents import (
    AgentExecutor,
    OpenAIFunctionsAgent,
    create_tool_calling_agent,
)
from langchain.agents.agent_types import AgentType
from langchain.agents.initialize import initialize_agent
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)
from langchain.schema import SystemMessage
from langchain_community.chat_models.ollama import ChatOllama
from langchain_community.llms import Ollama
from langchain_experimental.llms.ollama_functions import OllamaFunctions

from lib import create_openai_functions_agent  # local module
from sql import run_query  # local module providing the query tool

print("[**] Import Successful")

# create a chat model
chat = ChatOllama()

# create a prompt
prompt = ChatPromptTemplate(
    messages=[
        # SystemMessage(content=(
        #     "you are an AI that has access to a SQLite database.\n"
        #     f"The database has tables of: {tables}\n"
        #     "Do not make any assumptions about what tables exist "
        #     "or what columns exist. Instead, use the 'describe_table' function")),
        HumanMessagePromptTemplate.from_template("{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

# prompt = hub.pull("hwchase17/openai-functions-agent")

# creating agent (earlier attempt, kept for reference)
# agent = create_openai_functions_agent(
#     llm=chat,
#     tools=tools,
#     prompt=prompt,
# )

# create tools
tools = [run_query]

agent = create_openai_functions_agent(
    llm=chat,
    tools=tools,
    prompt=prompt,
    # verbose=True  # not accepted; see the TypeError below
)

print(f'Agent type: {type(agent)}')

agent_executor = AgentExecutor(agent=agent, tools=tools)

print(agent_executor.invoke({"input": "how many user have first name 'David' in the table"}))

Error Message and Stack Trace (if applicable)

Traceback (most recent call last):
  File "/home/umairgillani/github/rag-modeling/langchain/query-engine/app.py", line 53, in <module>
    agent = create_openai_functions_agent(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: create_openai_functions_agent() got an unexpected keyword argument 'verbose'
(langchain) umairgillani@fcp query-engine$ vim app.py 
(langchain) umairgillani@fcp query-engine$ python app.py 
[**] Import Successful
Agent type: <class 'langchain_core.runnables.base.RunnableSequence'>
Traceback (most recent call last):
  File "/home/umairgillani/github/rag-modeling/langchain/query-engine/app.py", line 62, in <module>
    agent_executor = AgentExecutor(agent = agent,
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/umairgillani/anaconda3/envs/langchain/lib/python3.12/site-packages/langchain_core/load/serializable.py", line 120, in __init__
    super().__init__(**kwargs)
  File "/home/umairgillani/anaconda3/envs/langchain/lib/python3.12/site-packages/pydantic/v1/main.py", line 339, in __init__
    values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/umairgillani/anaconda3/envs/langchain/lib/python3.12/site-packages/pydantic/v1/main.py", line 1100, in validate_model
    values = validator(cls_, values)
             ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/umairgillani/anaconda3/envs/langchain/lib/python3.12/site-packages/langchain/agents/agent.py", line 980, in validate_tools
    tools = values["tools"]
            ~~~~~~^^^^^^^^^
KeyError: 'tools'

Description

So what's happening here is that I'm trying to use ChatOllama as a LangChain agent, but my code breaks as soon as I try to create an instance of the AgentExecutor class with ChatOllama as the model.

I investigated the LangChain code base, and it looks like, unlike "ChatOpenAI", "ChatOllama" is not compatible with AgentExecutor. AgentExecutor runs Pydantic validation on its input data and expects to receive a Pydantic "BaseModel"; as you can see from the error message above, it broke in the validate_model function:

values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)

One more thing to notice here is that the "agent" instance is created successfully and returns the required "RunnalbeSequence" output: Agent type: <class 'langchain_core.runnables.base.RunnableSequence'>

But the issue is with the Pydantic model validation, which doesn't allow the flow to run. If the validation check in Pydantic could be made to pass for ChatOllama, we could use a free model as an agent and build advanced LLM applications.
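For what it's worth, the bare KeyError can plausibly be reproduced without LangChain at all: in Pydantic v1, a field that fails its own validation is simply absent from the values dict by the time a root validator runs, so a validator that indexes values["tools"] directly raises KeyError instead of a clean validation error. Here is a minimal, dependency-free mimic of that mechanism (all names are illustrative stand-ins, not LangChain's actual internals):

```python
# Mimic of Pydantic v1 behavior: fields that fail their own validation
# never make it into `values`, so a root validator that does
# values["tools"] raises KeyError. Names are illustrative only.

def run_field_validators(fields, data):
    """Stand-in for pydantic's validate_model(): keep only fields
    whose own validator accepts the supplied value."""
    values = {}
    for name, is_valid in fields.items():
        if name in data and is_valid(data[name]):
            values[name] = data[name]
    return values

def validate_tools(values):
    """Stand-in for the root validator in langchain/agents/agent.py:
    it indexes values["tools"] directly rather than using .get()."""
    tools = values["tools"]  # KeyError if "tools" failed field validation
    return values

fields = {
    "agent": lambda a: True,
    # Suppose each tool must expose a .run attribute (as BaseTool
    # instances do); a list of plain functions then fails validation.
    "tools": lambda ts: all(hasattr(t, "run") for t in ts),
}

values = run_field_validators(fields, {"agent": object(), "tools": [print]})
try:
    validate_tools(values)
except KeyError as e:
    print(f"KeyError: {e}")  # prints: KeyError: 'tools'
```

This is only a sketch of the failure mode, but it suggests the KeyError is a symptom of the "tools" (or "agent") value failing an earlier field check rather than the root cause itself.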

System Info

System Information
------------------
> OS:  Linux
> OS Version:  #29~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr  4 14:39:20 UTC 2
> Python Version:  3.12.3 | packaged by Anaconda, Inc. | (main, Apr 19 2024, 16:50:38) [GCC 11.2.0]

Package Information
-------------------
> langchain_core: 0.1.46
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.51
> langchain_experimental: 0.0.57
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:

> langgraph
> langserve
nsandell123 commented 6 months ago

You spelled RunnableSequence as RunnalbeSequence.

eyurtsev commented 6 months ago

Is ollama supporting tools?

umairgillani93 commented 6 months ago

Well, apparently ChatOllama successfully creates an agent instance when passed as an argument to create_openai_functions_agent. Here's the output: you can see it returns a "RunnableSequence", doesn't complain about the tools argument, and even shows the tool information in the response object.

middle=[ChatPromptTemplate(input_variables=['agent_scratchpad', 'input'], input_types={'agent_scratchpad': typing.List[typing.Union[langchain_core.messages.ai.AIMessage, langchain_core.messages.human.HumanMessage, langchain_core.messages.chat.ChatMessage, langchain_core.messages.system.SystemMessage, langchain_core.messages.function.FunctionMessage, langchain_core.messages.tool.ToolMessage]]}, messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')), MessagesPlaceholder(variable_name='agent_scratchpad')])] last=RunnableBinding(bound=ChatOllama(), kwargs={'functions': [{'name': 'run_query', 'description': 'runs the passed query\nand returns the result.', 'parameters': {'type': 'object', 'properties': {}, 'required': ['query']}}]})

The issue arises when the agent is executed by calling the invoke method on it: AgentExecutor expects the agent to be Union[BaseSingleActionAgent, BaseMultiActionAgent], but our agent is neither of those. So far it hasn't thrown any error specifically related to "tools", but while reading through the issues I stumbled upon one that said custom tools do not work with models other than the ChatGPT ones.

I'm trying to write a custom wrapper that inherits from BaseSingleActionAgent, implements all the required methods like "input_keys" and "plan", and uses ChatOllama as the agent.
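For anyone attempting the same, here is a rough, dependency-free sketch of the contract such a wrapper has to satisfy. Stub dataclasses stand in for langchain_core.agents.AgentAction / AgentFinish so the snippet runs without LangChain installed; a real wrapper would import those and subclass BaseSingleActionAgent, calling ChatOllama inside plan():

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Tuple, Union

# Stand-ins for langchain_core.agents.AgentAction / AgentFinish.
@dataclass
class AgentAction:
    tool: str
    tool_input: Any
    log: str

@dataclass
class AgentFinish:
    return_values: Dict[str, Any]
    log: str

class SingleActionAgentSketch:
    """Mimics the BaseSingleActionAgent contract: input_keys + plan()."""

    @property
    def input_keys(self) -> List[str]:
        return ["input"]

    def plan(
        self,
        intermediate_steps: List[Tuple[AgentAction, str]],
        **kwargs: Any,
    ) -> Union[AgentAction, AgentFinish]:
        # First call: ask the executor to run our single tool on the input.
        if not intermediate_steps:
            return AgentAction(tool="run_query", tool_input=kwargs["input"], log="")
        # A tool has run: return its observation as the final answer.
        # (A real wrapper would have ChatOllama turn the observation
        # into a natural-language answer here.)
        _, observation = intermediate_steps[-1]
        return AgentFinish(return_values={"output": observation}, log="")
```

The executor loop calls plan() repeatedly, appending each (action, observation) pair to intermediate_steps, until plan() returns an AgentFinish.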

lalanikarim commented 4 months ago

Take a look at #22339, which should have addressed this issue. The PR was approved and merged yesterday, but a release has yet to be cut from it; that should happen in the next few days.

In the meantime, you can try installing langchain-experimental directly from LangChain's source like this:

pip install git+https://github.com/langchain-ai/langchain.git\#egg=langchain-experimental\&subdirectory=libs/experimental

I hope this helps.