langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

Can't use tool decorators with OllamaFunctions #22191

Open Docteur-RS opened 3 months ago

Docteur-RS commented 3 months ago

Checked other resources

Example Code

from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool
from langchain_core.prompts.chat import ChatPromptTemplate, MessagesPlaceholder

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word"""
    return len(word)

tools = [get_word_length]

prompt = ChatPromptTemplate.from_messages([
            ("system", "You are a helpfull assistant"),
            ("human", "{input}"),
            MessagesPlaceholder("agent_scratchpad")
])

model = OllamaFunctions(model="mistral:7b-instruct-v0.3-q8_0", format="json")

agent = create_tool_calling_agent(model, tools, prompt)

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = agent_executor.invoke({
    "input": "How many letters are in 'orange' ?"
})

print(result["output"])

Following the code presented in this LangChain video: https://www.youtube.com/watch?v=zCwuAlpQKTM&ab_channel=LangChain, the LLM should be able to call the get_word_length tool.

Error Message and Stack Trace (if applicable)

Entering new AgentExecutor chain...
Traceback (most recent call last):
  File "/home/linuxUsername/crewai/test2.py", line 66, in <module>
    result = agent_executor.invoke({
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 166, in invoke
    raise e
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain/agents/agent.py", line 1433, in _call
    next_step_output = self._take_next_step(
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain/agents/agent.py", line 1139, in _take_next_step
    [
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain/agents/agent.py", line 1139, in <listcomp>
    [
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain/agents/agent.py", line 1167, in _iter_next_step
    output = self.agent.plan(
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain/agents/agent.py", line 515, in plan
    for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2769, in stream
    yield from self.transform(iter([input]), config, **kwargs)
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2756, in transform
    yield from self._transform_stream_with_config(
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1772, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2720, in _transform
    for output in final_pipeline:
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1148, in transform
    for ichunk in input:
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4638, in transform
    yield from self.bound.transform(
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1166, in transform
    yield from self.stream(final, config, **kwargs)
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 265, in stream
    raise e
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 245, in stream
    for chunk in self._stream(messages, stop=stop, **kwargs):
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py", line 317, in _stream
    for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py", line 162, in _create_chat_stream
    yield from self._create_stream(
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 231, in _create_stream
    response = requests.post(
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/requests/api.py", line 115, in post
    return request("post", url, data=data, json=json, **kwargs)
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/requests/sessions.py", line 575, in request
    prep = self.prepare_request(req)
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/requests/sessions.py", line 484, in prepare_request
    p.prepare(
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/requests/models.py", line 370, in prepare
    self.prepare_body(data, files, json)
  File "/home/linuxUsername/.local/lib/python3.10/site-packages/requests/models.py", line 510, in prepare_body
    body = complexjson.dumps(json, allow_nan=False)
  File "/usr/lib/python3.10/json/__init__.py", line 238, in dumps
    **kw).encode(obj)
  File "/usr/lib/python3.10/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/lib/python3.10/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/usr/lib/python3.10/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type StructuredTool is not JSON serializable

Description

I should get the number of letters in the word "orange", as the tool should return its length. Instead I get an exception about the tool.
Note that doing the following does return a correct JSON tool call.

from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool
from langchain_core.prompts.chat import ChatPromptTemplate, MessagesPlaceholder

chatModel = OllamaFunctions(model="mistral:7b-instruct-v0.3-q8_0", format="json")

def get_current_weather(some_param):
    print("got", str(some_param))

model = chatModel.bind_tools(
    tools=[
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, " "e.g. San Francisco, CA",
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                    },
                },
                "required": ["location"],
            },
        }
    ],
    function_call={"name": "get_current_weather"},
)

from langchain_core.messages import HumanMessage
answer = model.invoke("what is the weather in Boston?")

print(answer.content)
print(answer.additional_kwargs["function_call"])

It seems to me that there is an incompatibility with the way the decorator creates the Pydantic definition, but it's just a guess.
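
To illustrate that guess (an editorial sketch, not part of the original report): the @tool decorator returns a StructuredTool object, and if that object reaches the request body unchanged, json.dumps fails exactly as in the stack trace above, whereas a hand-written dict schema like the get_current_weather one serializes fine.

import json

from langchain_core.tools import tool

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word"""
    return len(word)

print(type(get_word_length))                 # a StructuredTool, not a plain function or dict
print(get_word_length.args_schema.schema())  # the Pydantic-generated JSON schema is still available

try:
    json.dumps(get_word_length)              # what the Ollama request body effectively attempts
except TypeError as exc:
    print(exc)                               # Object of type StructuredTool is not JSON serializable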

System Info

pip freeze | grep langchain
langchain==0.2.1
langchain-cohere==0.1.5
langchain-community==0.2.1
langchain-core==0.2.1
langchain-experimental==0.0.59
langchain-openai==0.0.5
langchain-text-splitters==0.2.0

Using

hega4444 commented 3 months ago

Found a possible solution to my problem (I may have to fork and propose some changes, which I've never done before). In ollama_functions.py:


def convert_to_ollama_tool(tool: Any) -> Dict:
    """Convert a tool to an Ollama tool."""

    if _is_pydantic_class(tool.__class__):
        schema = tool.__dict__["args_schema"].schema()
        definition = {"name": tool.name, "properties": schema["properties"]}
        if "required" in schema:
            definition["required"] = schema["required"]

        return definition
    raise ValueError(
        f"Cannot convert {tool} to an Ollama tool. {tool} needs to be a Pydantic model."
    )

this solves the reference to the function schema. Good luck!
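
For readers wondering how to wire this in, here is a minimal usage sketch (an editorial illustration that assumes the patched convert_to_ollama_tool above has been applied to the installed module; the model name is a placeholder):

from langchain_core.tools import tool
from langchain_experimental.llms.ollama_functions import (
    OllamaFunctions,
    convert_to_ollama_tool,
)

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word"""
    return len(word)

llm = OllamaFunctions(model="mistral:7b-instruct-v0.3-q8_0", format="json")
# Binding a plain dict schema keeps StructuredTool objects out of the request body.
llm_with_tools = llm.bind_tools(tools=[convert_to_ollama_tool(get_word_length)])

print(llm_with_tools.invoke("How many letters are in 'orange'?"))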

rajivmehtaflex commented 3 months ago

Found a possible solution to my problem (I may have to fork and propose some changes, which I've never done before). In ollama_functions.py:


def convert_to_ollama_tool(tool: Any) -> Dict:
    """Convert a tool to an Ollama tool."""

    if _is_pydantic_class(tool.__class__):
        schema = tool.__dict__["args_schema"].schema()
        definition = {"name": tool.name, "properties": schema["properties"]}
        if "required" in schema:
            definition["required"] = schema["required"]

        return definition
    raise ValueError(
        f"Cannot convert {tool} to an Ollama tool. {tool} needs to be a Pydantic model."
    )

this solves the reference to the function schema. Good luck!

Thanks, you saved me time troubleshooting the issue.

DiaQusNet commented 3 months ago

Found a possible solution to my problem (I may have to fork and propose some changes, which I've never done before). In ollama_functions.py:


def convert_to_ollama_tool(tool: Any) -> Dict:
    """Convert a tool to an Ollama tool."""

    if _is_pydantic_class(tool.__class__):
        schema = tool.__dict__["args_schema"].schema()
        definition = {"name": tool.name, "properties": schema["properties"]}
        if "required" in schema:
            definition["required"] = schema["required"]

        return definition
    raise ValueError(
        f"Cannot convert {tool} to an Ollama tool. {tool} needs to be a Pydantic model."
    )

this solves the reference to the function schema. Good luck!

Thanks, you saved me time troubleshooting the issue.

Did you fix the problem? In my case, it still outputs TypeError: Object of type StructuredTool is not JSON serializable

rajivmehtaflex commented 3 months ago
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool
from langchain_core.prompts.chat import ChatPromptTemplate, MessagesPlaceholder

I'm using --> Tool Prompting

phbergsmann commented 3 months ago

I'm facing a similar issue. My error is TypeError: Object of type Tool is not JSON serializable

The proposed change in ollama_functions.py did not resolve my issue.

danielp370 commented 3 months ago

Works for me (make sure you are running the latest), e.g.: pip install --force-reinstall -U git+https://github.com/lalanikarim/langchain.git@convert-to-ollama-tool\#egg=langchain_experimental\&subdirectory=libs/experimental

phbergsmann commented 3 months ago

Thanks a lot for your comment! I tried that without success. Here are more details about my scenario:

The error is still TypeError: Object of type Tool is not JSON serializable

This is the last line of the trace:

  File "/opt/homebrew/Cellar/python@3.12/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/json/encoder.py", line 180, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '

Code Snippet (only relevant parts)

llm = OllamaFunctions(
    model="llama3:instruct",
    base_url=os.environ.get('OLLAMA_BASE_URL'),
)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", agent_purpose),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad")
    ]
)

vectorstore = PGVector(
    embeddings=embeddings,
    collection_name=collection_name,
    connection=connection,
    use_jsonb=True,
)

retriever = vectorstore.as_retriever()

retriever_tool = create_retriever_tool(
    retriever=retriever,
    name="knowledge_base",
    description="###Description of the KB###"
)

agent_tools = [retriever_tool]

agent = create_tool_calling_agent(llm, agent_tools, prompt)

agent_executor = AgentExecutor(
    agent=agent,
    tools=agent_tools,
    verbose=True,
    handle_parsing_errors=False,
    memory=memory,
)

System Info

Python Libs

langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.3
langchain-experimental @ git+https://github.com/lalanikarim/langchain.git@1b5d77f3c7dd3f2fe6f0454352c5494982114718#subdirectory=libs/experimental
langchain-postgres==0.0.6
langchain-text-splitters==0.2.0
langchainhub==0.1.17

Environment

Python 3.12.3
ollama 0.1.40
Local model is llama3:instruct

ferasawadi commented 3 months ago

I'm trying to build an agent using this code:

import asyncio

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_experimental.llms.ollama_functions import OllamaFunctions

from custom_tools.Clinic_Tools import book_teeth_fix_appointment
from local_env import DEFAULT_MODULE, LLM_BASE_URL

tools = [book_teeth_fix_appointment]
llm = OllamaFunctions(model=DEFAULT_MODULE, format="json", temperature=1, base_url=LLM_BASE_URL)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpfull assistant"),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad")
])
llm_with_tools = llm.bind_tools(tools)

async def chat():
    query = "can you book an appointment for ferasawady@gmail.com, +971507502506"
    async for chunk in llm_with_tools.astream(query):
        print(chunk.tool_call_chunks)

asyncio.run(chat())

I'm getting this error:

    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type StructuredTool is not JSON serializable

I tried the suggestion above ("works for me (make sure you are running the latest)"): pip install --force-reinstall -U git+https://github.com/lalanikarim/langchain.git@convert-to-ollama-tool#egg=langchain_experimental\&subdirectory=libs/experimental

but it didn't work, and I'm not sure how to use this:


def convert_to_ollama_tool(tool: Any) -> Dict:
    """Convert a tool to an Ollama tool."""

    if _is_pydantic_class(tool.__class__):
        schema = tool.__dict__["args_schema"].schema()
        definition = {"name": tool.name, "properties": schema["properties"]}
        if "required" in schema:
            definition["required"] = schema["required"]

        return definition
    raise ValueError(
        f"Cannot convert {tool} to an Ollama tool. {tool} needs to be a Pydantic model."
    )

ferasawadi commented 3 months ago

Well, this now seems to be working:


import asyncio
from typing import Any, Dict

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_experimental.llms.ollama_functions import OllamaFunctions, _is_pydantic_class

from custom_tools.Clinic_Tools import book_teeth_fix_appointment
from local_env import DEFAULT_MODULE, LLM_BASE_URL

def convert_to_ollama_tool(tool: Any) -> Dict:
    """Convert a tool to an Ollama tool."""

    if _is_pydantic_class(tool.__class__):
        schema = tool.__dict__["args_schema"].schema()
        definition = {"name": tool.name, "properties": schema["properties"]}
        if "required" in schema:
            definition["required"] = schema["required"]

        return definition
    raise ValueError(
        f"Cannot convert {tool} to an Ollama tool. {tool} needs to be a Pydantic model."
    )

tools = [convert_to_ollama_tool(book_teeth_fix_appointment)]
llm = OllamaFunctions(model=DEFAULT_MODULE, format="json", temperature=1, base_url=LLM_BASE_URL)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpfull assistant"),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad")
])
llm_with_tools = llm.bind_tools(tools)

async def chat():
    query = "can you book an appointment for ferasawady@gmail.com, +971507502506"
    async for chunk in llm_with_tools.astream(query):
        print(chunk)

asyncio.run(chat())

but I'm not sure about the results:

/Users/ferasalawadi/PycharmProjects/wafir-food-langchaing-agent-service/.venv/bin/python /Users/ferasalawadi/PycharmProjects/wafir-food-langchaing-agent-service/tools_llm.py 
content='{}' id='run-bf45de25-7a1b-4687-a2b0-675689b6ba25'
content='' response_metadata={'model': 'llama3', 'created_at': '2024-06-08T11:19:07.243883159Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 526792016, 'load_duration': 1267804, 'prompt_eval_duration': 197010000, 'eval_count': 2, 'eval_duration': 194341000} id='run-bf45de25-7a1b-4687-a2b0-675689b6ba25'

Process finished with exit code 0
Docteur-RS commented 3 months ago

I can't get pip to install the fork proposed by @danielp370.

Clean env:

python3 -m venv test_venv
source ./test_venv/bin/activate

Installing:

pip install --force-reinstall -U git+https://github.com/lalanikarim/langchain.git@convert-to-ollama-tool#egg=langchain_experimental&subdirectory=libs/experimental
[1] 63556
(test_venv) xxxx@DESKTOP-xxxx~/function_calling$ Collecting langchain_experimental
  Cloning https://github.com/lalanikarim/langchain.git (to revision convert-to-ollama-tool) to /tmp/pip-install-9zxi_x3x/langchain-experimental_bf3a21839450487da2b84897f36f9051
  Running command git clone --filter=blob:none --quiet https://github.com/lalanikarim/langchain.git /tmp/pip-install-9zxi_x3x/langchain-experimental_bf3a21839450487da2b84897f36f9051
  Running command git checkout -b convert-to-ollama-tool --track origin/convert-to-ollama-tool
  Switched to a new branch 'convert-to-ollama-tool'
  Branch 'convert-to-ollama-tool' set up to track remote branch 'convert-to-ollama-tool' from 'origin'.
  Resolved https://github.com/lalanikarim/langchain.git to commit 1b5d77f3c7dd3f2fe6f0454352c5494982114718
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [14 lines of output]
      error: Multiple top-level packages discovered in a flat-layout: ['libs', 'docker', 'cookbook', 'templates'].

      To avoid accidental inclusion of unwanted files or directories,
      setuptools will not proceed with this build.

      If you are trying to create a single distribution with multiple packages
      on purpose, you should not rely on automatic discovery.
      Instead, consider the following options:

      1. set up custom discovery (`find` directive with `include` or `exclude`)
      2. use a `src-layout`
      3. explicitly set `py_modules` or `packages` with a list of names

      To find more information, look for "package discovery" on setuptools docs.
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

This is a Python packaging problem and not directly related to the current issue. However, if someone has a complete list of commands that works reliably, I would love to have a look at it ^^.

Let's hope LangChain fixes the original issue rapidly though, as I think a lot of us are waiting on this for reliable function calling with local LLMs.

at421 commented 3 months ago

@lalanikarim Have you seen this issue? It appears to also affect your cookbook LangGraph implementation; it would be great to have any input on how to prevent this error. Thanks!

lalanikarim commented 3 months ago

@at421 @danielp370 @Docteur-RS @ferasawadi

Take a look at #22339, which should have addressed this issue. The PR was approved and merged yesterday, but a release has yet to be cut from it; that should happen in the next few days.

In the meantime, you may try and install langchain-experimental directly from langchain's source like this:

pip install git+https://github.com/langchain-ai/langchain.git\#egg=langchain-experimental\&subdirectory=libs/experimental

I hope this helps.
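
As a quick sanity check after installing from source (an editorial suggestion, not part of the original comment), the patched convert_to_ollama_tool should now turn a decorated tool into a plain, JSON-serializable dict:

import json

from langchain_core.tools import tool
from langchain_experimental.llms.ollama_functions import convert_to_ollama_tool

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word"""
    return len(word)

# If this prints a dict instead of raising TypeError, the fixed version is installed.
print(json.dumps(convert_to_ollama_tool(get_word_length), indent=2))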

Docteur-RS commented 2 months ago

@lalanikarim It works. Thx.

Example:

import operator
from datetime import datetime
from typing import Annotated, TypedDict, Union
from langchain import hub
from langchain.agents import create_react_agent
from langchain_community.chat_models import ChatOllama
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage
from langchain_core.tools import tool
from langgraph.graph import END, StateGraph
from langgraph.prebuilt import ToolExecutor, ToolInvocation

@tool
def get_now(format: str = "%Y-%m-%d %H:%M:%S"):
    """
    Get the current time
    """
    return datetime.now().strftime(format)

tools = [get_now]
tool_executor = ToolExecutor(tools)

class AgentState(TypedDict):
    input: str
    chat_history: list[BaseMessage]
    agent_outcome: Union[AgentAction, AgentFinish, None]
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]

model = ChatOllama(model="mistral:7b-instruct-v0.3-q8_0")
prompt = hub.pull("hwchase17/react")

agent_runnable = create_react_agent(model, tools, prompt)

def execute_tools(state):
    print("Called `execute_tools`")
    messages = [state["agent_outcome"]]
    last_message = messages[-1]

    tool_name = last_message.tool

    print(f"Calling tool: {tool_name}")

    action = ToolInvocation(
        tool=tool_name,
        tool_input=last_message.tool_input,
    )
    response = tool_executor.invoke(action)
    return {"intermediate_steps": [(state["agent_outcome"], response)]}

def run_agent(state):
    agent_outcome = agent_runnable.invoke(state)
    return {"agent_outcome": agent_outcome}

def should_continue(state):
    messages = [state["agent_outcome"]]
    last_message = messages[-1]
    if "Action" not in last_message.log:
        return "end"
    else:
        return "continue"

workflow = StateGraph(AgentState)
workflow.add_node("agent", run_agent)
workflow.add_node("action", execute_tools)
workflow.set_entry_point("agent")
workflow.add_conditional_edges(
    "agent", should_continue, {"continue": "action", "end": END}
)
workflow.add_edge("action", "agent")
app = workflow.compile()

input_text = "Whats the current time?"
inputs = {"input": input_text, "chat_history": []}
results = []
for s in app.stream(inputs):
    result = list(s.values())[0]
    results.append(result)
    print(result)

Result:

{'agent_outcome': AgentAction(tool='get_now', tool_input="'%Y-%m-%d %H:%M:%S'", log=" I need to get the current time.\nAction: get_now\nAction Input: '%Y-%m-%d %H:%M:%S'")}
Called `execute_tools`
Calling tool: get_now
{'intermediate_steps': [(AgentAction(tool='get_now', tool_input="'%Y-%m-%d %H:%M:%S'", log=" I need to get the current time.\nAction: get_now\nAction Input: '%Y-%m-%d %H:%M:%S'"), "'2024-06-18 18:18:50'")]}
{'agent_outcome': AgentFinish(return_values={'output': 'The current time is June 18, 2024, 18:18:50.'}, log=' The current time is June 18, 2024, 18:18:50.\nFinal Answer: The current time is June 18, 2024, 18:18:50.')}

However, I can't get it to work 100% of the time.

For instance:

{'agent_outcome': AgentAction(tool="get_now(format='%Y-%m-%d %H:%M:%S')", tool_input='None', log=" To find the current time, I'll use the `get_now()` function. The format will be '%Y-%m-%d %H:%M:%S', which represents year-month-day hour:minute:second.\n\nAction: get_now(format='%Y-%m-%d %H:%M:%S')\nAction Input: None")}
Called `execute_tools`
Calling tool: get_now(format='%Y-%m-%d %H:%M:%S')
{'intermediate_steps': [(AgentAction(tool="get_now(format='%Y-%m-%d %H:%M:%S')", tool_input='None', log=" To find the current time, I'll use the `get_now()` function. The format will be '%Y-%m-%d %H:%M:%S', which represents year-month-day hour:minute:second.\n\nAction: get_now(format='%Y-%m-%d %H:%M:%S')\nAction Input: None"), "get_now(format='%Y-%m-%d %H:%M:%S') is not a valid tool, try one of [get_now].")]}
{'agent_outcome': AgentAction(tool="get_now(format='%Y-%m-%d %H:%M:%S')", tool_input='None', log=" It seems there was a mistake in the provided function syntax. Here's how I would answer the question using the `get_now()` function as stated earlier:\n\nThought: To find the current time, I'll use the `get_now()` function with the format specified.\nAction: get_now(format='%Y-%m-%d %H:%M:%S')\nAction Input: None")}
Called `execute_tools`
Calling tool: get_now(format='%Y-%m-%d %H:%M:%S')
{'intermediate_steps': [(AgentAction(tool="get_now(format='%Y-%m-%d %H:%M:%S')", tool_input='None', log=" It seems there was a mistake in the provided function syntax. Here's how I would answer the question using the `get_now()` function as stated earlier:\n\nThought: To find the current time, I'll use the `get_now()` function with the format specified.\nAction: get_now(format='%Y-%m-%d %H:%M:%S')\nAction Input: None"), "get_now(format='%Y-%m-%d %H:%M:%S') is not a valid tool, try one of [get_now].")]}
{'agent_outcome': AgentAction(tool="get_now(format='%Y-%m-%d %H:%M:%S')", tool_input='None', log=" It seems there might have been an error in the provided tools list. The `get_now()` function should be able to provide the current time if it's correctly implemented. Here's how I would answer the question using the `get_now()` function as stated earlier:\n\nThought: To find the current time, I'll use the `get_now()` function with the format specified.\nAction: get_now(format='%Y-%m-%d %H:%M:%S')\nAction Input: None")}
Called `execute_tools`
Calling tool: get_now(format='%Y-%m-%d %H:%M:%S')
{'intermediate_steps': [(AgentAction(tool="get_now(format='%Y-%m-%d %H:%M:%S')", tool_input='None', log=" It seems there might have been an error in the provided tools list. The `get_now()` function should be able to provide the current time if it's correctly implemented. Here's how I would answer the question using the `get_now()` function as stated earlier:\n\nThought: To find the current time, I'll use the `get_now()` function with the format specified.\nAction: get_now(format='%Y-%m-%d %H:%M:%S')\nAction Input: None"), "get_now(format='%Y-%m-%d %H:%M:%S') is not a valid tool, try one of [get_now].")]}
{'agent_outcome': AgentFinish(return_values={'output': 'Unable to answer due to missing `get_now()` tool in provided tools list.'}, log=" It appears there seems to be an issue with the provided tools list, as it does not include the `get_now()` function. Without this function, I'm unable to provide the current time. I recommend checking the provided tools list for accuracy and ensuring that `get_now()` is included.\n\nFinal Answer: Unable to answer due to missing `get_now()` tool in provided tools list.")}

or...

  File "/home/test/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 170, in <lambda>
    lambda inner_input: self.parse_result(
                        ^^^^^^^^^^^^^^^^^^
  File "/home/test/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 221, in parse_result
    return self.parse(result[0].text)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain/agents/output_parsers/react_single_input.py", line 84, in parse
    raise OutputParserException(
langchain_core.exceptions.OutputParserException: Could not parse LLM output: ` To find the current time, I will use the `get_now()` function. This function does not require any input, so there is no Action Input.

I wonder if this can be fixed by the framework itself or if it's just the LLM that isn't that great...? For now I must say that I prefer the ReActAgent from LlamaIndex in terms of reliability.

Docteur-RS commented 2 months ago

I would also point out that .bind_tools is still not supported.

@tool
def get_now(format: str = "%Y-%m-%d %H:%M:%S"):
    """
    Get the current time
    """
    return datetime.now().strftime(format)
tools = [get_now]
llm_with_tools = llm.bind_tools(tools)
--> ~/.cache/pypoetry/virtualenvs/langraph-IjL_O-08-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:951
    raise NotImplementedError()

NotImplementedError:

In my previous post I used the ToolExecutor(tools) class, but this is merely a workaround.

I still have no idea how to create a "ToolNode" like the one presented in the LangGraph tutorial, and it's getting tiring...

IMHO the whole framework is too complicated and lacks correct abstractions.

lalanikarim commented 2 months ago

I have a simple LangGraph agent that uses ToolNode in this notebook; I hope it helps: https://github.com/lalanikarim/notebooks/blob/main/LangGraph-MessageGraph-OllamaFunctions.ipynb
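
For readers who cannot open the notebook, here is a rough sketch of the same idea (an editorial illustration, not the notebook's exact code; it assumes the post-#22339 langchain-experimental so that bind_tools accepts decorated tools and the returned AIMessage carries tool_calls; the model name and tool are placeholders):

from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langgraph.graph import END, MessageGraph
from langgraph.prebuilt import ToolNode

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word"""
    return len(word)

tools = [get_word_length]
# bind_tools on decorated tools needs the fixed convert_to_ollama_tool from #22339.
model = OllamaFunctions(model="mistral:7b-instruct-v0.3-q8_0", format="json").bind_tools(tools)

def should_continue(messages):
    # Route to the tool node while the last AI message still requests a tool call.
    return "tools" if messages[-1].tool_calls else END

graph = MessageGraph()
graph.add_node("model", model)
graph.add_node("tools", ToolNode(tools))
graph.set_entry_point("model")
graph.add_conditional_edges("model", should_continue)
graph.add_edge("tools", "model")
app = graph.compile()

print(app.invoke([HumanMessage(content="How many letters are in 'orange'?")]))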

Docteur-RS commented 2 months ago

@lalanikarim Used your notebook. It's great! Thx for the work and effort.

This should be exposed in the documentation about OllamaFunctions. Is there anyone who can be pinged to update the docs page or something?

lalanikarim commented 2 months ago

@lalanikarim Used your notebook. It's great! Thx for the work and effort.

This should be exposed in the documentation about OllamaFunctions. Is there anyone who can be pinged to update the docs page or something?

Thanks. I'll create a PR with documentation shortly.

lalanikarim commented 2 months ago

@Docteur-RS I've created a PR #23179 to add LangGraph/tool-calling documentation to OllamaFunctions.

Take a look and let me know what you think: https://langchain-git-fork-lalanikarim-ollama-function-fbb983-langchain.vercel.app/v0.2/docs/integrations/chat/ollama_functions/#ollamafunctions-and-agents

Docteur-RS commented 2 months ago

@lalanikarim Looking great! It will help the community for sure! I'll close this issue when your PR is merged.

Thx for your service. ^^


On a side note, I tried to execute the Supervisor notebook from LangGraph, but it uses an llm.bind_functions that doesn't exist in OllamaFunctions.

supervisor_chain = (
    prompt
    | llm.bind_functions(functions=[function_def], function_call="route")
    | JsonOutputFunctionsParser()
)

I guess it could be tweaked in some way, but the complexity of the framework and the lack of interoperability between LLM providers make it a hard journey for people relying on local LLMs only. It's a shame that the documentation only covers ChatGPT.

/me mumbling
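
One possible workaround for the routing step, sketched here as an untested editorial adaptation (it reuses the bind_tools + function_call pattern from the weather example earlier in this thread; the member names, prompt, and model are placeholders): force the single "route" function and parse its arguments with JsonOutputFunctionsParser instead of bind_functions.

from langchain_core.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_experimental.llms.ollama_functions import OllamaFunctions

members = ["Researcher", "Coder"]
function_def = {
    "name": "route",
    "description": "Select the next role to act.",
    "parameters": {
        "type": "object",
        "properties": {"next": {"type": "string", "enum": members + ["FINISH"]}},
        "required": ["next"],
    },
}

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a supervisor. Given the conversation, decide who should act next."),
    ("human", "{input}"),
])

llm = OllamaFunctions(model="mistral:7b-instruct-v0.3-q8_0", format="json")

# bind_tools with a single function plus function_call plays the role of bind_functions here.
supervisor_chain = (
    prompt
    | llm.bind_tools(tools=[function_def], function_call={"name": "route"})
    | JsonOutputFunctionsParser()
)

print(supervisor_chain.invoke({"input": "Write a Python script that scrapes a website."}))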

lalanikarim commented 2 months ago

@Docteur-RS Unfortunately, there has been so much movement and things have been moving so fast within this space that a lot of stuff released just a few months ago is no longer relevant. The Supervisor notebook uses AgentExecutor, for instance, to build agents. AgentExecutor is already considered legacy in favor of LangGraph, which is the direction going forward: https://python.langchain.com/v0.2/docs/how_to/agent_executor/ https://python.langchain.com/v0.2/docs/how_to/migrate_agent/ This seems to be the case for any black-box implementations that can be replaced by more composable counterparts. I would suggest consulting v0.2 of the documentation when working with any notebooks from a few months ago.

wangyaoyong-wyy commented 1 month ago

from langgraph.prebuilt.chat_agent_executor import create_react_agent — when I use this to create an agent, I want streaming output, so I used the astream_events method, and it raises TypeError: Object of type StructuredTool is not JSON serializable. Using invoke and stream normally works fine, but astream_events fails. How do I resolve this?

from langchain_core.tools import tool
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langgraph.prebuilt.chat_agent_executor import create_react_agent
from fastapi import FastAPI, HTTPException

app = FastAPI()

llm = OllamaFunctions(model="qwen2", format="json")

@tool
def multy(a: int, b: int, c: int) -> int:
    """计算三个数的乘法"""
    return a * b * c

@tool
def add(a: int, b: int, c: int) -> int:
    """Add three numbers."""
    return a + b + c

tools = [multy, add]
build = create_react_agent(llm, tools)

@app.post("/chat/")
async def chat(message: str):
    try:
        result = []
        async for event in build.astream_events(
            {"messages": message}, version="v2"
        ):
            kind = event["event"]
            if kind == "on_chat_model_stream":
                content = event["data"]["chunk"].content
                if content:
                    print(content)
                    result.append(content)
        return {"response": "".join(result)}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)

wangyaoyong-wyy commented 1 month ago

from langgraph.prebuilt.chat_agent_executor import create_react_agent

When using this to create an agent, I wanted to stream the output, so I used the astream_events method, which gives me

TypeError: Object of type StructuredTool is not JSON serializable

There are no issues when using invoke and stream normally, but I am experiencing problems when using astream_events. What do I need to do to resolve this issue?

import asyncio
from langchain_core.tools import tool
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langgraph.prebuilt.chat_agent_executor import create_react_agent

llm = OllamaFunctions(model="qwen2",format="json")

@tool
def multy(a: int, b: int, c: int) -> int:
    """计算三个数的乘法"""
    return a * b * c

@tool
def add(a: int, b: int, c: int) -> int:
    """Add three numbers."""
    return a + b + c

tools = [multy, add]
build = create_react_agent(llm,tools)
# print(build.invoke({"messages": "三乘以四乘以五的结果是多少"}))
async def main():
    async for event in build.astream_events(
            {"messages": "三乘以四乘以五的结果是多少"}, version="v2"
    ):
        kind = event["event"]
        if kind == "on_chat_model_stream":
            content = event["data"]["chunk"].content
            if content:
                print(content)

if __name__ == '__main__':
    asyncio.run(main())
hega4444 commented 1 month ago

I see this is still active. For my own purposes I went for making an emulator of the OpenAI Assistants API, because the software I had built was around that API. Now this emulator can work directly with OpenAI, Azure, and Ollama models using function calling "seamlessly". I'm still working on it, but thought some other developers may be interested. Happy to collaborate (also looking for freelancing jobs right now). Here is the link to the repo: https://github.com/hega4444/llam/blob/main/README.md

Docteur-RS commented 1 day ago

I ended up using Yacana as a multi-agent framework. Tool calling works way better.