mortilla opened 9 months ago
Was this ever resolved? Running into the same issue.
@davidkundrats what model wrapper + model are you using? I can try and replicate the issue + create a patch.
Essentially some model wrappers will do prompt formatting / parsing in a way that injects `inner_thoughts` into the function call (as opposed to it being outside the function call, e.g. as part of `content` in an OpenAI-style message).
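For illustration, here's a rough sketch of the two shapes (hypothetical values, OpenAI-style field names; the exact `inner_thoughts` key name inside the arguments is an assumption):

```python
# Hypothetical example (not output from any specific model) of the two
# message shapes. Field names follow the OpenAI chat format; the weather
# tool and the inner_thoughts key name are placeholders.

# OpenAI-style: the inner monologue lives OUTSIDE the function call,
# as the assistant message's content.
openai_style = {
    "role": "assistant",
    "content": "User wants the weather; I should call the tool.",
    "function_call": {
        "name": "get_weather",
        "arguments": {"city": "Berlin"},
    },
}

# Wrapper-injected style: the inner monologue is injected INTO the
# function-call arguments, so it has to be stripped back out before the
# call reaches the function executor.
injected_style = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_weather",
        "arguments": {
            "inner_thoughts": "User wants the weather; I should call the tool.",
            "city": "Berlin",
        },
    },
}
```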
The model wrappers that put `inner_thoughts` into the function call also have to strip it back out of the function call before the function is handed off to the function executor.
So basically this error is happening because `inner_thoughts` did not get "popped" off the kwargs after being injected into the functions. To show you the code, for the `llama3` wrapper this happens here: https://github.com/cpacker/MemGPT/blob/832e07d5bfd7687ccd6632ee3c911f042c657570/memgpt/local_llm/llm_chat_completion_wrappers/llama3.py#L283-L285
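As a minimal sketch of what that "popping" step looks like (illustrative names only, not the actual wrapper code; see the linked lines for the real implementation):

```python
# Minimal sketch, NOT the actual MemGPT wrapper code: shows the idea of
# popping inner_thoughts back out of the function-call kwargs before the
# call is handed to the function executor. Key/field names are assumptions.

INNER_THOUGHTS_KWARG = "inner_thoughts"  # assumed name of the injected key

def strip_inner_thoughts(function_call: dict) -> tuple[str, dict]:
    """Return (inner_thoughts, cleaned_function_call).

    The caller can move inner_thoughts into the message content and pass
    the cleaned call on, so the executor only sees kwargs that match the
    real function signature.
    """
    args = dict(function_call.get("arguments", {}))
    # Pop the injected key; if this step is skipped, the executor raises an
    # unexpected-keyword-argument error like the one reported in this issue.
    inner_thoughts = args.pop(INNER_THOUGHTS_KWARG, "")
    cleaned = {**function_call, "arguments": args}
    return inner_thoughts, cleaned
```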
@cpacker I made a band-aid fix that covers the `inner_monologue` issue by doing basically what you said. Thanks for that reply - I had a hunch it was the model wrapper. For future reference, it was Azure's GPT-4 model.
While I have you, is there a way to see the uncompiled dev portal code somewhere? I'd like to make a few changes custom to my use case.
Describe the bug
Then the model replies with a JSON like this:

MemGPT replies back:

However, the function description says:

And therefore the function call is correct and should be accepted.
Please describe your setup
git clone
Screenshots
N/A
Additional context
memgpt-prompt.txt
MemGPT Config
Please attach your ~/.memgpt/config file or copy-paste it below: config-redacted.txt

If you're not using OpenAI, please provide additional information on your local LLM setup:
Local LLM details
If you are trying to run MemGPT with local LLMs, please provide the following information:
Model (e.g. dolphin-2.1-mistral-7b.Q6_K.gguf): miqudev/miqu-1-70b