arthur-lachini-advisia opened this issue 1 month ago (Open)
sorry when you say "coder agent" and "tutorial" - you are referring to this one right? https://langchain-ai.github.io/langgraph/tutorials/multi_agent/agent_supervisor/
Yes, sorry for the lack of information on that part
This is an issue in langchain, not in langgraph, so going to transfer. Here is a minimal reproducible example:
```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain_experimental.tools import PythonREPLTool
from langchain.agents import AgentExecutor, create_openai_tools_agent

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder(variable_name="messages"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

@tool
def get_weather(location: str):
    """Get weather for location"""
    return "Sunny and 75 degrees"

llm = ChatOpenAI(model="gpt-4o-mini")

# this works
tools = [get_weather]
agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
executor.invoke({"messages": [("human", "what's the weather in sf")]})

# this doesn't
tools = [PythonREPLTool()]
agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
executor.invoke({"messages": [("human", "Code hello world and print it to the terminal")]})
```
That being said, it works with langgraph's create_react_agent, so will just update the notebook.
I'm also facing the same issue.
> that being said, it works w/ langgraph's create_react_agent, so will just update the notebook.

I'm facing the same issue, and I use the create_react_agent function.
facing the same issue
Seems to be related to PR https://github.com/langchain-ai/langchain/pull/24038, which adds run_manager to the tool kwargs (https://github.com/langchain-ai/langchain/blob/7dd6b32991e81582cb30588b84871af04ecdc76c/libs/core/langchain_core/tools.py#L603) and makes them fail to serialize. Going back to `pip install langchain-core==0.2.12` seems to fix it for me.
cc @baskaryan
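To illustrate the failure mode described above with a self-contained sketch (no langchain required; `FakeRunManager` is a hypothetical stand-in for the callbacks run manager): once a non-JSON-serializable object is injected into the tool's kwargs, serializing those kwargs fails.

```python
import json

class FakeRunManager:
    """Hypothetical stand-in for a callbacks run manager; not JSON-serializable."""

tool_kwargs = {"query": "hello world"}
json.dumps(tool_kwargs)  # fine: plain strings serialize

# Mimic the regression: inject the manager object into the tool kwargs.
tool_kwargs["run_manager"] = FakeRunManager()
try:
    json.dumps(tool_kwargs)
except TypeError as e:
    print("serialization failed:", e)
```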
@wulifu2hao
In site-packages\langchain_core\tools.py, tool_kwargs and tool_input end up as the same object, so tool_input is mutated after line 603.

Code snippet, line 601:

```python
tool_args, tool_kwargs = self._to_args_and_kwargs(tool_input)
if signature(self._run).parameters.get("run_manager"):
    tool_kwargs["run_manager"] = run_manager
```

Workaround, line 530: return a copy via `dict(tool_input)` so the caller's dict is not mutated:

```python
def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:
    tool_input = self._parse_input(tool_input)
    # For backwards compatibility, if run_input is a string,
    # pass as a positional argument.
    if isinstance(tool_input, str):
        return (tool_input,), {}
    else:
        return (), dict(tool_input)
```
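The `dict()` call matters because of aliasing: if `_to_args_and_kwargs` returns the same dict object it was given, mutating the returned kwargs also mutates the caller's `tool_input`. A minimal, langchain-free illustration of the difference:

```python
def to_kwargs_aliased(tool_input: dict) -> dict:
    return tool_input          # returns the very same object

def to_kwargs_copied(tool_input: dict) -> dict:
    return dict(tool_input)    # returns a shallow copy

original = {"query": "weather in sf"}
kwargs = to_kwargs_aliased(original)
kwargs["run_manager"] = object()   # mutates `original` too
assert "run_manager" in original

original = {"query": "weather in sf"}
kwargs = to_kwargs_copied(original)
kwargs["run_manager"] = object()   # the copy absorbs the mutation
assert "run_manager" not in original
```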
Facing the same issue. Going back to `pip install langchain-core==0.2.12` seems to fix it.
Thanks for saving my day, my guy!
Will it be fixed?
sorry when you say "coder agent" and "tutorial" - you are referring to this one right? https://langchain-ai.github.io/langgraph/tutorials/multi_agent/agent_supervisor/
Just a basic AgentExecutor with a BaseTool without args_schema should be enough to reproduce this. New features like strict mode are being rolled out, but basic regressions like this will block upgrades. Will this be fixed soon?
Related issue: https://github.com/langchain-ai/langchain/issues/24614 Example tutorial: https://python.langchain.com/v0.2/docs/integrations/tools/bing_search/
I'd rather not downgrade, but I'm having trouble moving any further while this is still an issue. Does anyone have a workaround besides downgrading, or a time frame for a fix?
Using the Python REPL example from the multi-agent collaboration notebook resolved the issue for me (note: `Annotated` must be imported from `typing`):

```python
from typing import Annotated

# from langchain_experimental.tools import PythonREPLTool
from langchain_core.tools import tool
from langchain_experimental.utilities import PythonREPL

# python_repl_tool = PythonREPLTool()
repl = PythonREPL()

@tool
def python_repl_tool(
    code: Annotated[str, "The python code to execute to generate your chart."],
):
    """Use this to execute python code. If you want to see the output of a value,
    you should print it out with `print(...)`. This is visible to the user."""
    try:
        result = repl.run(code)
    except BaseException as e:
        return f"Failed to execute. Error: {repr(e)}"
    result_str = f"Successfully executed:\n```python\n{code}\n```\nStdout: {result}"
    return (
        result_str + "\n\nIf you have completed all tasks, respond with FINAL ANSWER."
    )
@wulifu2hao
> (quoting the `dict(tool_input)` workaround in `_to_args_and_kwargs` above)
This has worked for me. Thanks a lot!
@karthikcs Nice, wrapping tool_input with dict() seemed to work for me as well; the location has moved to https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools/base.py#L477 instead of line 530.
Edit: it's not the only place in tools.py with issues parsing tool_input, e.g. the _parse_input function has the same problem.
> (quoting the Python REPL workaround above)
This is the answer! Thanks. They have it correct in this notebook: https://github.com/langchain-ai/langgraph/blob/fc95028738232572c05827a074f8d0c606f5c0ca/examples/multi_agent/multi-agent-collaboration.ipynb
@tahsinalamin It's cool that that worked for you, but there is a much larger issue that has nothing to do with PythonREPL: how the library handles data types in langchain_core/tools/base.py.
Multiple tutorials are broken even with the latest release 0.2.33
> going back to `pip install langchain-core==0.2.12` seems to fix it for me
Thanks, it worked for me!
Description
I tried to replicate the tutorial on my local machine, but the coder function does not work as it is supposed to. The researcher function works just fine and can perform multiple consecutive researches, but as soon as the coder agent is called, it breaks the function. I've attached screenshots of the LangSmith dashboard to provide further insight into the error.
System Info
Windows 10 Python 3.12.4
aiohttp==3.9.5 aiosignal==1.3.1 annotated-types==0.7.0 anyio==4.4.0 asttokens==2.4.1 attrs==23.2.0 certifi==2024.7.4 charset-normalizer==3.3.2 colorama==0.4.6 comm==0.2.2 contourpy==1.2.1 cycler==0.12.1 dataclasses-json==0.6.7 debugpy==1.8.2 decorator==5.1.1 distro==1.9.0 executing==2.0.1 fonttools==4.53.1 frozenlist==1.4.1 greenlet==3.0.3 h11==0.14.0 httpcore==1.0.5 httpx==0.27.0 idna==3.7 ipykernel==6.29.5 ipython==8.26.0 jedi==0.19.1 jsonpatch==1.33 jsonpointer==3.0.0 jupyter_client==8.6.2 jupyter_core==5.7.2 kiwisolver==1.4.5 langchain==0.2.11 langchain-community==0.2.10 langchain-core==0.2.23 langchain-experimental==0.0.63 langchain-openai==0.1.17 langchain-text-splitters==0.2.2 langchainhub==0.1.20 langgraph==0.1.10 langsmith==0.1.93 marshmallow==3.21.3 matplotlib==3.9.1 matplotlib-inline==0.1.7 multidict==6.0.5 mypy-extensions==1.0.0 nest-asyncio==1.6.0 numpy==1.26.4 openai==1.37.0 orjson==3.10.6 packaging==24.1 parso==0.8.4 pillow==10.4.0 platformdirs==4.2.2 prompt_toolkit==3.0.47 psutil==6.0.0 pure_eval==0.2.3 pydantic==2.8.2 pydantic_core==2.20.1 Pygments==2.18.0 pyparsing==3.1.2 python-dateutil==2.9.0.post0 pywin32==306 PyYAML==6.0.1 pyzmq==26.0.3 regex==2024.5.15 requests==2.32.3 six==1.16.0 sniffio==1.3.1 SQLAlchemy==2.0.31 stack-data==0.6.3 tenacity==8.5.0 tiktoken==0.7.0 tornado==6.4.1 tqdm==4.66.4 traitlets==5.14.3 types-requests==2.32.0.20240712 typing-inspect==0.9.0 typing_extensions==4.12.2 urllib3==2.2.2 wcwidth==0.2.13 yarl==1.9.4