aws-samples / amazon-bedrock-workshop

This is a workshop designed for Amazon Bedrock, a foundation model service.
https://catalog.us-east-1.prod.workshops.aws/workshops/a4bdb007-5600-4368-81c5-ff5b4154f518/en-US/20-intro
MIT No Attribution

Error in 07_Agents/00_LLM_Claude_Agent_Tools.ipynb #49

Closed michaelhsieh42 closed 5 months ago

michaelhsieh42 commented 11 months ago

https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/07_Agents/00_LLM_Claude_Agent_Tools.ipynb

I had to upgrade langchain to 0.0.302 and install langchain_experimental 0.0.22 to get the notebook going (%pip install --upgrade langchain langchain_experimental), and then change how the plan_and_execute module is loaded to

from langchain_experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner

instead of

from langchain.experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner
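Because the module moved from langchain.experimental to langchain_experimental, the same notebook breaks on one side of the 0.0.302 boundary or the other. A minimal sketch of a version-tolerant import — import_first_available is a hypothetical helper of my own, not a langchain API — that tries the new location first and falls back to the old one:

```python
import importlib


def import_first_available(candidates):
    """Return the first importable module from candidates, in order."""
    for module_path in candidates:
        try:
            return importlib.import_module(module_path)
        except ImportError:
            continue
    raise ImportError(
        "none of these modules are installed: " + ", ".join(candidates)
    )
```

Assuming both package layouts are worth supporting, it would be used as: pae = import_first_available(["langchain_experimental.plan_and_execute", "langchain.experimental.plan_and_execute"]) followed by PlanAndExecute = pae.PlanAndExecute.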

With these changes I was able to proceed; however, towards the end I wasn't able to get past the PlanAndExecute example. I got the following error.

planner = load_chat_planner(plan_llm)
executor = load_agent_executor(execute_llm, tools, verbose=True)
pae_agent = PlanAndExecute(planner=planner, executor=executor, verbose=True, max_iterations=1)
---------------------------------------------------------------------------
ConfigError                               Traceback (most recent call last)
Cell In[20], line 2
      1 planner = load_chat_planner(plan_llm)
----> 2 executor = load_agent_executor(execute_llm, tools, verbose=True)
      3 pae_agent = PlanAndExecute(planner=planner, executor=executor, verbose=True, max_iterations=1)

File /opt/conda/lib/python3.10/site-packages/langchain_experimental/plan_and_execute/executors/agent_executor.py:46, in load_agent_executor(llm, tools, verbose, include_task_in_prompt)
     43     input_variables.append("objective")
     44     template = TASK_PREFIX + template
---> 46 agent = StructuredChatAgent.from_llm_and_tools(
     47     llm,
     48     tools,
     49     human_message_template=template,
     50     input_variables=input_variables,
     51 )
     52 agent_executor = AgentExecutor.from_agent_and_tools(
     53     agent=agent, tools=tools, verbose=verbose
     54 )
     55 return ChainExecutor(chain=agent_executor)

File /opt/conda/lib/python3.10/site-packages/langchain/agents/structured_chat/base.py:132, in StructuredChatAgent.from_llm_and_tools(cls, llm, tools, callback_manager, output_parser, prefix, suffix, human_message_template, format_instructions, input_variables, memory_prompts, **kwargs)
    126 llm_chain = LLMChain(
    127     llm=llm,
    128     prompt=prompt,
    129     callback_manager=callback_manager,
    130 )
    131 tool_names = [tool.name for tool in tools]
--> 132 _output_parser = output_parser or cls._get_default_output_parser(llm=llm)
    133 return cls(
    134     llm_chain=llm_chain,
    135     allowed_tools=tool_names,
    136     output_parser=_output_parser,
    137     **kwargs,
    138 )

File /opt/conda/lib/python3.10/site-packages/langchain/agents/structured_chat/base.py:65, in StructuredChatAgent._get_default_output_parser(cls, llm, **kwargs)
     61 @classmethod
     62 def _get_default_output_parser(
     63     cls, llm: Optional[BaseLanguageModel] = None, **kwargs: Any
     64 ) -> AgentOutputParser:
---> 65     return StructuredChatOutputParserWithRetries.from_llm(llm=llm)

File /opt/conda/lib/python3.10/site-packages/langchain/agents/structured_chat/output_parser.py:82, in StructuredChatOutputParserWithRetries.from_llm(cls, llm, base_parser)
     80 if llm is not None:
     81     base_parser = base_parser or StructuredChatOutputParser()
---> 82     output_fixing_parser = OutputFixingParser.from_llm(
     83         llm=llm, parser=base_parser
     84     )
     85     return cls(output_fixing_parser=output_fixing_parser)
     86 elif base_parser is not None:

File /opt/conda/lib/python3.10/site-packages/langchain/output_parsers/fix.py:45, in OutputFixingParser.from_llm(cls, llm, parser, prompt)
     42 from langchain.chains.llm import LLMChain
     44 chain = LLMChain(llm=llm, prompt=prompt)
---> 45 return cls(parser=parser, retry_chain=chain)

File /opt/conda/lib/python3.10/site-packages/langchain/load/serializable.py:74, in Serializable.__init__(self, **kwargs)
     73 def __init__(self, **kwargs: Any) -> None:
---> 74     super().__init__(**kwargs)
     75     self._lc_kwargs = kwargs

File /opt/conda/lib/python3.10/site-packages/pydantic/main.py:339, in pydantic.main.BaseModel.__init__()

File /opt/conda/lib/python3.10/site-packages/pydantic/main.py:1076, in pydantic.main.validate_model()

File /opt/conda/lib/python3.10/site-packages/pydantic/fields.py:860, in pydantic.fields.ModelField.validate()

ConfigError: field "retry_chain" not yet prepared so type is still a ForwardRef, you might need to call OutputFixingParser.update_forward_refs().
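The ConfigError is a pydantic v1 symptom: OutputFixingParser declares retry_chain with a string (forward-reference) annotation, and if nothing resolves that string to a real class before validation, the field is "not yet prepared". A small stdlib illustration of the same mechanism — the class names below are stand-ins of my own, not langchain's:

```python
from typing import get_type_hints


class RetryChain:  # stand-in for the real chain class
    pass


class FixingParser:  # stand-in for a model with a forward-referenced field
    # A string annotation stays an unresolved ForwardRef until something
    # resolves it -- which is what pydantic's update_forward_refs() does.
    retry_chain: "RetryChain"


# Before resolution, the raw annotation is still just the string:
print(FixingParser.__annotations__["retry_chain"])  # -> RetryChain (a str)

# get_type_hints() resolves the ForwardRef to the actual class object:
hints = get_type_hints(FixingParser)
print(hints["retry_chain"] is RetryChain)  # -> True
```

If the error message's own suggestion applies here, calling OutputFixingParser.update_forward_refs() before load_agent_executor (or pinning mutually compatible langchain and pydantic versions) may be the practical workaround; I haven't confirmed which combination the workshop expects.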

Please check if there is anything missing or a library version mismatch. Thank you.

w601sxs commented 5 months ago

Closing this; please refer to the latest bedrock workshop update and reopen if still relevant