letta-ai / letta

Letta (formerly MemGPT) is a framework for creating LLM services with memory.
https://letta.com
Apache License 2.0

Keyword argument is unexpected, but it should be. #1018

Open mortilla opened 9 months ago

mortilla commented 9 months ago

Describe the bug

The model replies with JSON like this:

{
  "function": "send_message",
  "params": {
    "inner_thoughts": "Responding to the user's login message by introducing myself.",
    "message": "Hello, Chad! I'm MemGPT, your kind, thoughtful, and inquisitive companion. I'm here to chat and help you with your questions, thoughts, and ideas. Let's have a
great conversation together!"
  }
}

MemGPT replies:

{
  "status": "Failed",
  "message": "Error calling function send_message: send_message() got an unexpected keyword argument 'inner_thoughts'",
  "time": "2024-02-16 09:21:31 PM JST+0900"
}

However, the function description says:

send_message:
  description: Sends a message to the human user.
  params:
    inner_thoughts: Deep inner monologue private to you only.
    message: Message contents. All unicode (including emojis) are supported.

The function call therefore matches the declared schema and should be accepted.
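
For context, the error itself is ordinary Python behavior: if the registered function only declares a message parameter, any extra keyword argument raises a TypeError. A minimal sketch (the one-parameter signature is an assumption for illustration, not MemGPT's actual definition):

def send_message(message: str) -> None:
    # Hypothetical executor-side function that only accepts `message`.
    print(message)

params = {
    "inner_thoughts": "Responding to the user's login message.",
    "message": "Hello, Chad!",
}

# Reproduces the reported failure:
# TypeError: send_message() got an unexpected keyword argument 'inner_thoughts'
send_message(**params)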


Screenshots: N/A

Additional context: memgpt-prompt.txt

MemGPT Config: config-redacted.txt



davidkundrats commented 2 months ago

Was this ever resolved? Running into the same issue.

cpacker commented 2 months ago

@davidkundrats what model wrapper + model are you using? I can try to replicate the issue and create a patch.

Essentially, some model wrappers do prompt formatting/parsing in a way that injects inner_thoughts into the function call (as opposed to leaving it outside the function call, e.g. as part of content in an OpenAI-style message).

The model wrappers that put inner_thoughts into the function call also have to strip it back out of the function call before the function is handed off to the function executor.

So this error happens because inner_thoughts never got "popped" off the kwargs after being injected into the function call. For the llama3 wrapper, that stripping happens here: https://github.com/cpacker/MemGPT/blob/832e07d5bfd7687ccd6632ee3c911f042c657570/memgpt/local_llm/llm_chat_completion_wrappers/llama3.py#L283-L285
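
A minimal sketch of the strip-and-forward pattern cpacker describes (illustrative names, not the actual MemGPT internals):

INNER_THOUGHTS_KWARG = "inner_thoughts"

def strip_inner_thoughts(function_call: dict) -> tuple[dict, str | None]:
    # Pop the injected key out of the parsed params so the function
    # executor only receives arguments the target function declares.
    params = dict(function_call.get("params", {}))
    inner_thoughts = params.pop(INNER_THOUGHTS_KWARG, None)
    return {**function_call, "params": params}, inner_thoughts

call = {
    "function": "send_message",
    "params": {"inner_thoughts": "Greeting the user.", "message": "Hello, Chad!"},
}
cleaned, thoughts = strip_inner_thoughts(call)
# cleaned["params"] now contains only "message", so
# send_message(**cleaned["params"]) no longer raises TypeError.

A wrapper that injects inner_thoughts at prompt time but skips this post-parse step produces exactly the error in the original report.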

davidkundrats commented 2 months ago

@cpacker I had made a band-aid fix for the inner_monologue issue by doing basically what you said. Thanks for that reply; I had a hunch it was the model wrapper. For future reference, it was Azure's GPT-4 model.

While I have you: is there a way to see the uncompiled dev portal code somewhere? I'd like to make a few changes custom to my use case.