langchain-ai / langchain-aws

Build LangChain Applications on AWS
MIT License

Feat: claude 3 tool calling #70

Closed · laithalsaadoon closed this 3 weeks ago

laithalsaadoon commented 3 weeks ago

fixes: #65

Checks whether `model_id` is a Claude 3 model and, if so, uses `invoke_model` with the tools included in the request input body.
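
Roughly, the gating looks like the sketch below. This is a minimal illustration, not the actual diff; `build_request_body` is a hypothetical helper, and the body fields follow Bedrock's Anthropic messages schema:

```python
import json


def build_request_body(model_id: str, messages: list, tools: list | None) -> str:
    """Hypothetical helper: assemble an invoke_model body for Anthropic models."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": messages,
    }
    # Only Claude 3 models accept the Anthropic tools schema, so gate on model_id.
    if tools and "claude-3" in model_id:
        body["tools"] = tools
    return json.dumps(body)
```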

3coins commented 3 weeks ago

cc @baskaryan

laithalsaadoon commented 3 weeks ago

I upgraded poetry.lock to fix the lint errors, but maybe we want that as a separate PR? just fyi

laithalsaadoon commented 3 weeks ago

@3coins @baskaryan can you please have a look, run the workflows and let me know what is needed to merge? TY

laithalsaadoon commented 3 weeks ago

Need to add a test case for agent executor and resolve:


```
KeyError                                  Traceback (most recent call last)
Cell In[14], line 3
      1 # Create an agent executor by passing in the agent and tools
      2 agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
----> 3 agent_executor.invoke({"input": "*****"})

File ~/opt/anaconda3/envs/stores_langchain/lib/python3.10/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs)
    164 except BaseException as e:
    165     run_manager.on_chain_error(e)
--> 166     raise e
    167 run_manager.on_chain_end(outputs)
    169 if include_run_info:

File ~/opt/anaconda3/envs/stores_langchain/lib/python3.10/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
    153 try:
    154     self._validate_inputs(inputs)
    155     outputs = (
--> 156         self._call(inputs, run_manager=run_manager)
    157         if new_arg_supported
    158         else self._call(inputs)
    159     )
    161 final_outputs: Dict[str, Any] = self.prep_outputs(
    162     inputs, outputs, return_only_outputs
    163 )
...
    267 content: Union[str, List]
    269 if not isinstance(message.content, str):
    270     # parse as dict

KeyError: 'AIMessageChunk'
```

ccurme commented 3 weeks ago

> Need to add a test case for agent executor and resolve:
>
> `KeyError: 'AIMessageChunk'`

Good catch. Looks like this was fixed in the legacy version but not updated here: https://github.com/langchain-ai/langchain/pull/20801
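
The failure mode is consistent with a role lookup keyed on `message.type` that only knows the base message types. A hedged sketch of the problem and the legacy-style fix (`_message_type_lookups` and `role_for` are illustrative here, not the exact langchain-aws code):

```python
from langchain_core.messages import AIMessageChunk, BaseMessage

# Illustrative lookup: maps LangChain message types to Anthropic roles.
_message_type_lookups = {"human": "user", "ai": "assistant"}


def role_for(message: BaseMessage) -> str:
    msg_type = message.type
    # AIMessageChunk.type is "AIMessageChunk", not "ai", so a raw dict lookup
    # raises KeyError: 'AIMessageChunk'. Normalize chunks to their base type first.
    if isinstance(message, AIMessageChunk):
        msg_type = "ai"
    return _message_type_lookups[msg_type]
```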

laithalsaadoon commented 3 weeks ago

> Good catch. Looks like this was fixed in the legacy version but not updated here: langchain-ai/langchain#20801

Fixed it. TY for the pointer

laithalsaadoon commented 3 weeks ago

FYI: I was unable to add a test for AgentExecutor. It seems we don't import langchain in this repo, I'm sure for "reasons" to which I am ignorant 🙂, but I tested it on a local build.

laithalsaadoon commented 3 weeks ago

@3coins @ccurme @baskaryan checking in

ssg-kstewart commented 3 weeks ago

@laithalsaadoon With this merged, I am seeing tool calls as expected, despite my issue stating otherwise; I must have messed up the pip install when trying your branch initially.

I'm still seeing a problem with the ordering of human/assistant messages when using a multi-agent system, as I assume tool calls are being tied to assistant responses. Do you have any suggestions for these scenarios?

```
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Your API request included an `assistant` message in the final position, which would pre-fill the `assistant` response. When using tools, pre-filling the `assistant` response is not supported.
```
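
To make the constraint concrete: with tools in the request, Anthropic requires the final message to be a user turn, so any history ending on an assistant message is rejected. A hypothetical repro:

```python
from langchain_core.messages import AIMessage, HumanMessage

# Rejected: the list ends on an assistant turn, which Anthropic treats as
# pre-filling the response; not allowed when tools are in the request.
bad_history = [
    HumanMessage(content="Route this request"),
    AIMessage(content="Sub-agent summary"),
]

# Accepted: recast or append so the final message is a user turn.
ok_history = [
    HumanMessage(content="Route this request"),
    HumanMessage(content="Sub-agent summary"),
]
```
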
laithalsaadoon commented 3 weeks ago

> I'm still seeing a problem with the ordering of human/assistant messages when using a multi-agent system, as I assume tool calls are being tied to assistant responses. Do you have any suggestions for these scenarios?

I'll try to repro in the next couple of days. My main motivation for getting this PR in was for multi-agent systems :)

ssg-kstewart commented 3 weeks ago

Thank you! FWIW, the lineup of messages upon entering the LLM run at the next step in the graph (as seen from `_generate`) is:

```
[<class 'langchain_core.messages.system.SystemMessage'>, <class 'langchain_core.messages.human.HumanMessage'>, <class 'langchain_core.messages.ai.AIMessage'>, <class 'langchain_core.messages.tool.ToolMessage'>, <class 'langchain_core.messages.ai.AIMessage'>]
```

laithalsaadoon commented 1 week ago

> FWIW, the lineup of messages upon entering the LLM run at the next step in the graph (as seen from `_generate`) is `[SystemMessage, HumanMessage, AIMessage, ToolMessage, AIMessage]`

@ssg-kstewart I can't speak for other LLM providers and whether/how they accept consecutive AIMessages without alternating roles, but the change below worked for the multi-agent flow:


```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage


def agent_node(state, agent, name):
    # Anthropic tool use requires the final message to be a user turn, so
    # recast a trailing AIMessage (e.g. another agent's output) as a HumanMessage.
    last_message = state["messages"][-1]
    if isinstance(last_message, AIMessage):
        state["messages"][-1] = HumanMessage(content=last_message.content)
    result = agent.invoke(state)
    # Convert the agent output into a format suitable to append to the global state.
    if isinstance(result, ToolMessage):
        pass
    else:
        result = AIMessage(**result.dict(exclude={"type", "name"}), name=name)
    return {
        "messages": [result],
        # Since we have a strict workflow, we can track the sender
        # so we know who to pass to next.
        "sender": name,
    }
```
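
The key move is recasting the trailing AIMessage as a HumanMessage before invoking the next agent, so each Anthropic call with tools ends on a user turn; ToolMessages are passed through unchanged, presumably because they already carry an acceptable role.
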
ssg-kstewart commented 1 week ago

@laithalsaadoon Thanks, this gets me moving. A funny consequence of this workaround is that the LLM gets into a loop of self-complimenting, thinking that the Human response is, of course, actual user input.