nick-youngblut opened this issue 2 months ago
There is a clear difference in the response generated by:

```python
agent_with_chat_history.invoke(
    {"input": "Hi, I'm polly! What's the output of magic_function of 3?"}, config
)
```

...depending on the model:
```python
{'input': "Hi, I'm polly! What's the output of magic_function of 3?",
 'chat_history': [],
 'output': 'Hi Polly! The output of the magic function for the input 3 is 5.'}
```

```python
{'input': "Hi, I'm polly! What's the output of magic_function of 3?",
 'chat_history': [],
 'output': [{'text': '\n\nThe output of the magic_function for the input 3 is 5.',
             'type': 'text',
             'index': 0}]}
```
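For reference, the two output shapes above (a plain string vs. a list of content blocks) could be normalized with a small helper like the sketch below. The helper name `extract_output_text` is hypothetical and not part of LangChain; it only reflects the structures shown in this report.

```python
from typing import Any


def extract_output_text(output: Any) -> str:
    """Normalize the agent's 'output' field across providers.

    Some models return a plain string, while others (as in the second
    result above) return a list of content blocks such as
    [{'text': '...', 'type': 'text', 'index': 0}].
    """
    if isinstance(output, str):
        return output
    if isinstance(output, list):
        # Concatenate the text of each text-type content block.
        return "".join(
            block.get("text", "")
            for block in output
            if isinstance(block, dict) and block.get("type") == "text"
        )
    return str(output)


# Example usage with the agent result:
# result = agent_with_chat_history.invoke({"input": "..."}, config)
# print(extract_output_text(result["output"]))
```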
URL
https://python.langchain.com/v0.2/docs/how_to/migrate_agent/#memory
Issue with current documentation:
If I swap `model = ChatOpenAI(model="gpt-4o")` for `ChatAnthropicVertex(model_name="claude-3-haiku@20240307", location="us-east5", project="my_gcp_project")`, then the memory example throws the following error: ... and it is unclear why.
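For context, the swap in question looks roughly like the sketch below. It assumes the `langchain-google-vertexai` package is installed and GCP credentials are configured; `my_gcp_project` is the placeholder project ID from the report, and the import path is my assumption of the usual location of `ChatAnthropicVertex`.

```python
# Original model from the migration guide's memory example:
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o")

# Swapped-in model that triggers the error (assumes langchain-google-vertexai
# is installed; "my_gcp_project" is a placeholder project ID):
from langchain_google_vertexai.model_garden import ChatAnthropicVertex

model = ChatAnthropicVertex(
    model_name="claude-3-haiku@20240307",
    location="us-east5",
    project="my_gcp_project",
)
```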
My installed langchain packages:
Idea or request for content:
It would be very helpful to show how the memory example code must be changed depending on the LLM used.
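For reference, a minimal sketch of the memory example pattern from the linked guide (a tool-calling `AgentExecutor` wrapped in `RunnableWithMessageHistory`) is shown below, with the chat model left pluggable so either provider can be passed in. This is not a verbatim copy of the guide, and the `build_agent_with_history` helper is hypothetical.

```python
# Minimal sketch of the memory example pattern from the linked migration guide.
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.tools import tool


@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


def build_agent_with_history(model):
    """Wrap a tool-calling agent in RunnableWithMessageHistory."""
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "You are a helpful assistant."),
            ("placeholder", "{chat_history}"),
            ("human", "{input}"),
            ("placeholder", "{agent_scratchpad}"),
        ]
    )
    tools = [magic_function]
    agent = create_tool_calling_agent(model, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools)

    # Single in-memory history, as in the guide's single-session example;
    # the lambda ignores the session_id and always returns the same history.
    memory = ChatMessageHistory()
    return RunnableWithMessageHistory(
        agent_executor,
        lambda session_id: memory,
        input_messages_key="input",
        history_messages_key="chat_history",
    )


config = {"configurable": {"session_id": "test-session"}}
```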