vivienneanthony opened this issue 2 months ago
I was able to get a LLaMA-based LLM to run somewhat, but I notice similar errors: it seems several of the JSON outputs containing functions and params sent to MemGPT are malformed, and this is causing multiple problems. I tried llama.cpp and Ollama with MemGPT. I prefer MemGPT over ChatGPT, but is there an LLM that works with llama.cpp + MemGPT?
These are some of the most common errors:

```
Failed to decode valid MemGPT JSON from LLM output:\n=====\n{raw_llm_output}\n=====
Failed to parse JSON from local LLM response - error: {str(e)}
Failed to decode JSON from LLM output: "function": - error
```
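For context on why these messages appear: the traceback below shows the dolphin wrapper doing a plain dictionary lookup on the parsed model output, so the reply has to be a JSON object with a top-level `"function"` key. Here is a minimal sketch of the shape it seems to expect; the function name and params contents are illustrative assumptions, not copied from the MemGPT source:

```python
import json

# Shape the wrapper appears to require, inferred from the KeyError in the
# traceback below. The function name and params are illustrative guesses.
good_reply = json.loads("""
{
  "function": "send_message",
  "params": {"message": "Hello!"}
}
""")

# Valid JSON that omits "function" reproduces
# "Received valid JSON from LLM, but JSON was missing fields: 'function'".
bad_reply = json.loads('{"message": "Hello!"}')
try:
    bad_reply["function"]
except KeyError as e:
    print(f"missing field: {e}")  # matches the error in this issue
```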
Describe the bug
I am getting a function error when testing the connection to Ollama. To me it seems that, because of the model and the results Ollama returns, the output doesn't work with MemGPT, so it creates an error.
```
~$ memgpt run
[nltk_data] Downloading package punkt_tab to
[nltk_data]     /home/vivienne/.local/lib/python3.10/site-packages/llama_index/core/_static/nltk_cache...
[nltk_data]   Unzipping tokenizers/punkt_tab.zip.

? Would you like to select an existing agent? Yes
? Select agent: TremendousSeashell

🔁 Using existing agent TremendousSeashell

Hit enter to begin (will request first MemGPT message)

An exception occurred when running agent.step():
Traceback (most recent call last):
  File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/local_llm/llm_chat_completion_wrappers/dolphin.py", line 231, in output_to_chat_completion_response
    function_name = function_json_output["function"]
KeyError: 'function'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/local_llm/chat_completion_proxy.py", line 191, in get_chat_completion
    chat_completion_result = llm_wrapper.output_to_chat_completion_response(result)
  File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/local_llm/llm_chat_completion_wrappers/dolphin.py", line 234, in output_to_chat_completion_response
    raise LLMJSONParsingError(f"Received valid JSON from LLM, but JSON was missing fields: {str(e)}")
memgpt.errors.LLMJSONParsingError: Received valid JSON from LLM, but JSON was missing fields: 'function'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/main.py", line 459, in run_agent_loop
    new_messages, user_message, skip_next_user_input = process_agent_step(user_message, no_verify)
  File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/main.py", line 427, in process_agent_step
    new_messages, heartbeat_request, function_failed, token_warning, tokens_accumulated = memgpt_agent.step(
  File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/agent.py", line 800, in step
    raise e
  File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/agent.py", line 698, in step
    response = self._get_ai_reply(
  File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/agent.py", line 409, in _get_ai_reply
    raise e
  File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/agent.py", line 378, in _get_ai_reply
    response = create(
  File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/llm_api/llm_api_tools.py", line 223, in wrapper
    raise e
  File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/llm_api/llm_api_tools.py", line 196, in wrapper
    return func(*args, **kwargs)
  File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/llm_api/llm_api_tools.py", line 464, in create
    return get_chat_completion(
  File "/home/vivienne/.local/lib/python3.10/site-packages/memgpt/local_llm/chat_completion_proxy.py", line 194, in get_chat_completion
    raise LocalLLMError(f"Failed to parse JSON from local LLM response - error: {str(e)}")
memgpt.errors.LocalLLMError: Failed to parse JSON from local LLM response - error: Received valid JSON from LLM, but JSON was missing fields: 'function'
? Retry agent.step()? No
```
(The same pair of tracebacks then repeats verbatim on the next agent.step().)
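For anyone digging into this, the root failure in the chain above is the bare lookup at dolphin.py line 231. A hedged sketch of a more forgiving extraction follows; this is illustrative, not actual MemGPT code, and the alternate key names are guesses about what local models might emit:

```python
def extract_function_call(function_json_output: dict) -> tuple[str, dict]:
    """Illustrative replacement for the bare lookup at dolphin.py:231.

    Falls back to a couple of key names local models plausibly emit
    (guesses, not confirmed against any model) and fails with a clearer
    message than a bare KeyError.
    """
    for key in ("function", "function_call", "name"):
        if key in function_json_output:
            return function_json_output[key], function_json_output.get("params", {})
    raise ValueError(
        f"Model reply is valid JSON but has no function field: {function_json_output!r}"
    )
```

Even with a guard like this, a model that never emits the function field will keep failing; the real fix would be aligning the prompt/wrapper with the model, not relaxing the parser.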
Please describe your setup
- [ ] How did you install memgpt? `pip install pymemgpt`
- [ ] Describe your setup
  - What's your OS (Windows/MacOS/Linux)? Ubuntu 22.04 with 16 GB of RAM and an NVIDIA GTX 1660 Super; Ollama and MemGPT are installed. It's not a powerful machine, but it can run AI.
  - How are you running memgpt? (cmd.exe/Powershell/Anaconda Shell/Terminal) From the command line.
Screenshots
I know MemGPT can connect to the Ollama interface with the dolphin-mistral model.
Additional context
I tried both the mistral and airoboros wrappers. Similar error with each.
MemGPT Config
Please attach your `~/.memgpt/config` file or copy-paste it below.

```
[defaults]
preset = memgpt_chat
persona = sam_pov
human = basic

[model]
model = dolphin2.2-mistral:7b-q6_K
model_endpoint = http://localhost:11434
model_endpoint_type = ollama
model_wrapper = airoboros-l2-70b-2.1
context_window = 8192

[embedding]
embedding_endpoint_type = local
embedding_model = BAAI/bge-small-en-v1.5
embedding_dim = 384
embedding_chunk_size = 300

[archival_storage]
type = chroma
path = /home/vivienne/.memgpt/chroma

[recall_storage]
type = sqlite
path = /home/vivienne/.memgpt

[metadata_storage]
type = sqlite
path = /home/vivienne/.memgpt

[version]
memgpt_version = 0.3.25

[client]
anon_clientid = 00000000-0000-0000-0000-000000000000
```
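Given the `[model]` section above, a quick way to rule out connectivity (as opposed to formatting) problems is to hit Ollama's generate endpoint directly with the same model tag; the prompt below is just a placeholder:

```python
import requests

# Same endpoint and model tag as the [model] config above.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "dolphin2.2-mistral:7b-q6_K",
        "prompt": "Reply with one short sentence.",  # placeholder prompt
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

If this prints text but MemGPT still fails, the model is reachable and the problem is the JSON formatting the wrapper expects, which matches the tracebacks above.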
If you're not using OpenAI, please provide additional information on your local LLM setup:
Local LLM details
If you are trying to run MemGPT with local LLMs, please provide the following information:
- [ ] The exact model you're trying to use (e.g. `dolphin-2.1-mistral-7b.Q6_K.gguf`)
- [ ] The local LLM backend you are using (web UI? LM Studio?)

  I am using Ollama. I have not figured out how to add a web UI that lets me access MemGPT and Ollama chat at the same time.

- [ ] Your hardware for the local LLM backend (local computer? operating system? remote RunPod?)
```
/0                bus            Motherboard
/0/0              memory         16GiB System memory
/0/1              processor      Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
/0/100            bridge         2nd Generation Core Processor Family DRAM Controller
/0/100/1          bridge         Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port
/0/100/1/0        display        TU116 [GeForce GTX 1660 SUPER]
/0/100/1/0.1      multimedia     TU116 High Definition Audio Controller (card1)
```