URL
https://python.langchain.com/docs/integrations/tools/playwright/
Checklist
Issue with current documentation:
The current documentation for using agents with LangChain describes a flow that is incorrect or incomplete. Specifically, the following example fails:
```python
from langchain.agents import AgentType, initialize_agent
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model_name="claude-3-haiku-20240307",
    temperature=0,
)  # or any other LLM, e.g., ChatOpenAI(), OpenAI()

agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
```
While the documentation suggests ChatOpenAI() and OpenAI() are supported, the functionality fails when using these LLMs, even with the updated agent initialisation:
```python
agent_chain = create_structured_chat_agent(
    llm=llm,
    tools=tools,
    prompt=chat_prompt,
)
```
Error encountered:

```
Traceback (most recent call last):
  File "/path/to/file.py", line 58, in <module>
    agent_chain = create_structured_chat_agent(
  File "/path/to/langchain/structured_chat/base.py", line 283, in create_structured_chat_agent
    tools=tools_renderer(list(tools)),
  File "/path/to/langchain_core/tools/render.py", line 58, in render_text_description_and_args
    args_schema = str(tool.args)
  File "/path/to/pydantic/main.py", line 853, in __getattr__
    return super().__getattribute__(item)  # Raises AttributeError if appropriate
  File "/path/to/langchain_core/tools/base.py", line 446, in args
    return self.get_input_schema().model_json_schema()["properties"]
  File "/path/to/pydantic/json_schema.py", line 2277, in model_json_schema
    raise AttributeError('model_json_schema() must be called on a subclass of BaseModel, not BaseModel itself.')
AttributeError: model_json_schema() must be called on a subclass of BaseModel, not BaseModel itself.
```
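The underlying failure in the traceback can be reproduced with plain Pydantic v2, without LangChain at all. A minimal sketch (the `SearchArgs` model is hypothetical, purely for illustration):

```python
# Minimal reproduction of the Pydantic v2 behaviour behind the traceback.
from pydantic import BaseModel


class SearchArgs(BaseModel):
    """Hypothetical input schema for a tool."""
    query: str


# On a proper subclass, model_json_schema() works and yields the
# "properties" mapping that tool.args would return.
properties = SearchArgs.model_json_schema()["properties"]
print(properties)  # {'query': {'title': 'Query', 'type': 'string'}}

# On BaseModel itself -- which is apparently what get_input_schema()
# returned here -- the same call raises AttributeError.
try:
    BaseModel.model_json_schema()
except AttributeError as exc:
    print(exc)
```

This suggests the tools end up with `BaseModel` itself as their input schema instead of a concrete subclass, which is why `tool.args` blows up inside the renderer.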
Observations:

- The issue seems to stem from tool.args and how it handles the schema with Pydantic. The error is raised when model_json_schema() is called on an object that isn't a subclass of BaseModel.
- This problem occurs regardless of using initialize_agent or create_structured_chat_agent; the provided tools are not handled correctly in either approach.

Expected Behavior:

The agent should be able to use tools, and their arguments should be processed correctly without raising schema-related errors.

Idea or request for content:
The documentation needs to be updated so that the agent-with-tools example runs without this error.