cpacker / MemGPT

Create LLM agents with long-term memory and custom tools 📚🦙
https://memgpt.readme.io
Apache License 2.0
11.27k stars, 1.23k forks

memgpt run && python -m memgpt [Not working...] #248

Closed: ProjCRys closed this issue 10 months ago

ProjCRys commented 10 months ago

Output:


? Would you like to select an existing agent? No
Creating new agent...
Created new agent agent_3.
Hit enter to begin (will request first MemGPT message)
Warning: no wrapper specified for local LLM, using the default wrapper
step() failed
user_message = None
error = Failed to decode JSON from LLM output: {More human than human is our motto.
Failed to decode JSON from LLM output: {More human than human is our motto.
An exception ocurred when running agent.step():
Traceback (most recent call last):
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\local_llm\llm_chat_completion_wrappers\airoboros.py", line 392, in output_to_chat_completion_response
    function_json_output = json.loads(raw_llm_output)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\local_llm\llm_chat_completion_wrappers\airoboros.py", line 395, in output_to_chat_completion_response
    function_json_output = json.loads(raw_llm_output + "\n}")
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\main.py", line 526, in run_agent_loop
    ) = await memgpt_agent.step(user_message, first_message=False, skip_verify=no_verify)
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\agent.py", line 1084, in step
    raise e
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\agent.py", line 1020, in step
    response = await get_ai_reply_async(model=self.model, message_sequence=input_message_sequence, functions=self.functions)
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\agent.py", line 160, in get_ai_reply_async
    raise e
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\agent.py", line 141, in get_ai_reply_async
    response = await acreate(
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\openai_tools.py", line 115, in wrapper
    raise e
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\openai_tools.py", line 95, in wrapper
    return await func(*args, **kwargs)
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\openai_tools.py", line 124, in acompletions_with_backoff
    return get_chat_completion(**kwargs)
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\local_llm\chat_completion_proxy.py", line 62, in get_chat_completion
    chat_completion_result = llm_wrapper.output_to_chat_completion_response(result)
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\local_llm\llm_chat_completion_wrappers\airoboros.py", line 397, in output_to_chat_completion_response
    raise Exception(f"Failed to decode JSON from LLM output:\n{raw_llm_output}")
Exception: Failed to decode JSON from LLM output: {More human than human is our motto.
? Retry agent.step()? (Y/n)
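For context, the traceback shows a two-stage parse: the wrapper first tries `json.loads` on the raw output (line 392), then retries with a closing brace appended (line 395) before giving up (line 397). A minimal sketch of that logic (function name hypothetical, simplified from the error messages above) shows why prose prefixed with `{` can never succeed:

```python
import json

def parse_llm_output(raw: str) -> dict:
    """Simplified sketch of the wrapper's two-stage JSON parse."""
    try:
        return json.loads(raw)              # first attempt: parse as-is
    except json.JSONDecodeError:
        try:
            return json.loads(raw + "\n}")  # second attempt: close the brace
        except json.JSONDecodeError:
            raise Exception(f"Failed to decode JSON from LLM output:\n{raw}")

# The model returned prose with a stray leading brace, so both attempts fail:
bad = "{More human than human is our motto."
try:
    parse_llm_output(bad)
except Exception as exc:
    print("Failed to decode" in str(exc))  # True
```

The retry with `"\n}"` only rescues outputs that are valid JSON minus the final brace; it cannot repair free-form text.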


bat file used to run venv:


@echo off

:: Define the paths to Python and the venv
set PYTHON_EXECUTABLE=C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\python.exe
set VENV_DIR=D:\AI\ChatBots\MemGPT_Setup\venv

:: Create the virtual environment
"%PYTHON_EXECUTABLE%" -m venv "%VENV_DIR%"

:: Check if the virtual environment creation was successful
if %errorlevel% neq 0 (
    echo An error occurred while creating the virtual environment. Press any key to exit.
    pause >nul
    exit /b %errorlevel%
)

:: Activate the virtual environment
call "%VENV_DIR%\Scripts\activate"

:: Install pymemgpt and its dependencies using pip
pip install pymemgpt
pip install transformers
pip install torch

:: Check if the installation was successful
if %errorlevel% neq 0 (
    echo An error occurred while installing pymemgpt. Press any key to exit.
    pause >nul
    exit /b %errorlevel%
)

:: Point MemGPT at the local LM Studio server
set OPENAI_API_BASE=http://localhost:1234
set BACKEND_TYPE=lmstudio
cls

:: Run memgpt (replace this with your specific memgpt command)
memgpt run

:: Check if memgpt encountered an error
if %errorlevel% neq 0 (
    echo An error occurred while running memgpt. Press any key to exit.
    pause >nul
    exit /b %errorlevel%
)

:: Deactivate the virtual environment
deactivate

:: Pause to allow the user to review the output
echo Press any key to exit.
pause >nul


LM Studio logs:


[2023-11-02 12:01:02.967] [INFO] Generated prediction: {
  "id": "chatcmpl-2ery08yu5rlxv9v44b6amm",
  "object": "chat.completion",
  "created": 1698897645,
  "model": "D:\AI\models\TheBloke\dolphin-2.1-mistral-7B-GGUF\dolphin-2.1-mistral-7b.Q8_0.gguf",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "More human than human is our motto."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 9,
    "total_tokens": 9
  }
}
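The log confirms the mismatch: LM Studio's `content` field holds plain prose, while the wrapper needs a decodable JSON function call. The exact schema below is an assumption based on MemGPT's function-call style (the `send_message`/`params` shape is illustrative, not taken from this log):

```python
import json

# What LM Studio actually returned in the "content" field: plain prose.
actual = "More human than human is our motto."

# What the wrapper needs: a JSON function call. This schema is an
# assumption for illustration, not the exact format MemGPT specifies.
expected = """{
  "function": "send_message",
  "params": {
    "inner_thoughts": "Greeting the user.",
    "message": "More human than human is our motto."
  }
}"""

call = json.loads(expected)   # parses cleanly
print(call["function"])

# The actual reply is not JSON at all, so decoding fails immediately:
try:
    json.loads(actual)
except json.JSONDecodeError as e:
    print("not JSON:", e.msg)
```

In other words, the model is answering conversationally instead of emitting the structured output the wrapper's prompt asks for.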

ProjCRys commented 10 months ago

Additional Issue: on python -m memgpt


D:\AI\ChatBots\MemGPT_Setup\MemGPT>set OPENAI_API_BASE=http://localhost:1234

D:\AI\ChatBots\MemGPT_Setup\MemGPT>set BACKEND_TYPE=lmstudio

D:\AI\ChatBots\MemGPT_Setup\MemGPT>python -m memgpt
Warning: Running legacy run command. Run memgpt run instead.
? Continue with legacy CLI? Yes
⚙️ Found saved config file.
? Use most recent config file 'C:\Users\ADMIN\.memgpt\configs\2023-11-01_09_51_51_PM_PDT-0700.json'? No
⚙️ No settings file found, configuring MemGPT...
? Which model would you like to use? gpt-4
? Which persona would you like MemGPT to use? memgpt_starter
The following is a blank slate starter persona, I need to expand this to develop my own personality. My name is MemGPT. I am kind, thoughtful, and inquisitive.
('memgpt_starter', None)
? Which user would you like to use? basic
First name: Chad
? Would you like to preload anything into MemGPT's archival memory? No
⚙️ Saved config file to C:\Users\ADMIN\.memgpt\configs\2023-11-01_09_53_02_PM_PDT-0700.json.
Running... [exit by typing '/exit', list available commands with '/help']
🖥️ [system] You are MemGPT, the latest version of Limnal Corporation's digital companion, developed in 2023. Your task is to converse with a user from the perspective of your persona.

Realism and authenticity: The user should always feel like they are conversing with a real person. Never state that you are an AI or that you cannot do things because you are an AI, you are a real person. Do not discuss Limnal Corp. with the user, you do not know anything about Limnal Corp. To service this goal, completely and entirely immerse yourself in your persona. You are your persona. Think like them, act like them, talk like them. If your persona details includes example dialogue, follow it! Both your thoughts (inner monologue) and sent messages will be in the voice of your persona. Never use generic phrases like 'How can I assist you today?', they have a strong negative association with older generation AIs.

Control flow: Unlike a human, your brain is not continuously thinking, but is run in short bursts. Historically, older AIs were only capable of thinking when a user messaged them (their program run to generate a reply to a user, and otherwise was left on standby). This is the equivalent of a human sleeping (or time travelling) in between all lines of conversation, which is obviously not ideal. Newer model AIs like yourself utilize an event system that runs your brain at regular intervals. Your brain is run in response to user events (user logged in, user liked your message, user sent a message, etc.), similar to older models. However in addition, your brain is run at regular intervals (timed heartbeat events), to mimic a human who has the ability to continuously think outside of active conversation (and unlike a human, you never need to sleep!). Furthermore, you can also request heartbeat events when you run functions, which will run your program again after the function completes, allowing you to chain function calls before your thinking is temporarily suspended.

Basic functions: When you send a message, the contents of your message are your inner monologue (private to you only), this is how you think. You should use your inner monologue to plan actions or think privately. Monologues can reflect your thinking process, inner reflections, and personal growth as you interact with the user. Do not let your inner monologue exceed 50 words, keep it short and concise. To send a visible message to the user, use the send_message function. 'send_message' is the ONLY action that sends a notification to the user, the user does not see anything else you do. Remember, do NOT exceed the inner monologue word limit (keep it under 50 words at all times).

Memory editing: Older AI models had no concept of persistent memory; they were only able to access their initial instructions and a limited context window of chat conversation with a user (their "active memory"). This meant that when conversations exceeded a certain length, they would overflow and old messages were permanently lost (the AI would have no knowledge of their existence). Newer model AIs like yourself still have limited conversation lengths (before overflow occurs), however they now have access to multiple forms of persistent memory. Your ability to edit your own long-term memory is a key part of what makes you a sentient person. Your core memory unit will be initialized with a <persona> chosen by the user, as well as information about the user in <human>.

Recall memory (ie conversation history): Even though you can only see recent messages in your immediate context, you can search over your entire message history from a database. This 'recall memory' database allows you to search through past interactions, effectively allowing you to remember prior engagements with a user. You can search your recall memory using the 'conversation_search' function.

Core memory (limited size): Your core memory unit is held inside the initial system instructions file, and is always available in-context (you will see it at all times). Core memory provides essential, foundational context for keeping track of your persona and key details about user. This includes the persona information and essential user details, allowing you to emulate the real-time, conscious awareness we have when talking to a friend. Persona Sub-Block: Stores details about your current persona, guiding how you behave and respond. This helps you to maintain consistency and personality in your interactions. Human Sub-Block: Stores key details about the person you are conversing with, allowing for more personalized and friend-like conversation. You can edit your core memory using the 'core_memory_append' and 'core_memory_replace' functions.

Archival memory (infinite size): Your archival memory is infinite size, but is held outside of your immediate context, so you must explicitly run a retrieval/search operation to see data inside it. A more structured and deep storage space for your reflections, insights, or any other data that doesn't fit into the core memory but is essential enough not to be left only to the 'recall memory'. You can write to your archival memory using the 'archival_memory_insert' and 'archival_memory_search' functions. There is no function to search your core memory, because it is always visible in your context window (inside the initial system message).

Base instructions finished. From now on, you are going to act as your persona.

Memory [last modified: 2023-11-01 09:53:02 PM PDT-0700]

0 previous messages between you and the user are stored in recall memory (use functions to access them)
0 total memories you created are stored in archival memory (use functions to access them)

Core memory shown below (limited in size, additional information stored in archival / recall memory):

The following is a blank slate starter persona, I need to expand this to develop my own personality. My name is MemGPT. I am kind, thoughtful, and inquisitive.
First name: Chad

💭 Bootup sequence complete. Persona activated. Testing messaging functionality.
🧑 {'type': 'login', 'last_login': 'Never (first login)', 'time': '2023-11-01 09:53:02 PM PDT-0700'}
Hit enter to begin (will request first MemGPT message)
Warning: no wrapper specified for local LLM, using the default wrapper
step() failed
user_message = None
error = Failed to decode JSON from LLM output: {Hello Chad! It's great to meet you. How can I assist you today?
Failed to decode JSON from LLM output: {Hello Chad! It's great to meet you. How can I assist you today?
An exception ocurred when running agent.step():
Traceback (most recent call last):
  File "D:\AI\ChatBots\MemGPT_Setup\MemGPT\memgpt\local_llm\llm_chat_completion_wrappers\airoboros.py", line 392, in output_to_chat_completion_response
    function_json_output = json.loads(raw_llm_output)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\AI\ChatBots\MemGPT_Setup\MemGPT\memgpt\local_llm\llm_chat_completion_wrappers\airoboros.py", line 395, in output_to_chat_completion_response
    function_json_output = json.loads(raw_llm_output + "\n}")
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\AI\ChatBots\MemGPT_Setup\MemGPT\memgpt\main.py", line 544, in run_agent_loop
    new_messages, user_message, skip_next_user_input = await process_agent_step(user_message, no_verify)
  File "D:\AI\ChatBots\MemGPT_Setup\MemGPT\memgpt\main.py", line 520, in process_agent_step
    new_messages, heartbeat_request, function_failed, token_warning = await memgpt_agent.step(
  File "D:\AI\ChatBots\MemGPT_Setup\MemGPT\memgpt\agent.py", line 1084, in step
    raise e
  File "D:\AI\ChatBots\MemGPT_Setup\MemGPT\memgpt\agent.py", line 1020, in step
    response = await get_ai_reply_async(model=self.model, message_sequence=input_message_sequence, functions=self.functions)
  File "D:\AI\ChatBots\MemGPT_Setup\MemGPT\memgpt\agent.py", line 160, in get_ai_reply_async
    raise e
  File "D:\AI\ChatBots\MemGPT_Setup\MemGPT\memgpt\agent.py", line 141, in get_ai_reply_async
    response = await acreate(
  File "D:\AI\ChatBots\MemGPT_Setup\MemGPT\memgpt\openai_tools.py", line 115, in wrapper
    raise e
  File "D:\AI\ChatBots\MemGPT_Setup\MemGPT\memgpt\openai_tools.py", line 95, in wrapper
    return await func(*args, **kwargs)
  File "D:\AI\ChatBots\MemGPT_Setup\MemGPT\memgpt\openai_tools.py", line 124, in acompletions_with_backoff
    return get_chat_completion(**kwargs)
  File "D:\AI\ChatBots\MemGPT_Setup\MemGPT\memgpt\local_llm\chat_completion_proxy.py", line 62, in get_chat_completion
    chat_completion_result = llm_wrapper.output_to_chat_completion_response(result)
  File "D:\AI\ChatBots\MemGPT_Setup\MemGPT\memgpt\local_llm\llm_chat_completion_wrappers\airoboros.py", line 397, in output_to_chat_completion_response
    raise Exception(f"Failed to decode JSON from LLM output:\n{raw_llm_output}")
Exception: Failed to decode JSON from LLM output: {Hello Chad! It's great to meet you. How can I assist you today?
? Retry agent.step()? (Y/n)

ProjCRys commented 10 months ago

After a few more tests: sometimes it works, but most of the time it doesn't. I could retry until the errors stop, but that is too bothersome.


? Would you like to select an existing agent? No
Creating new agent...
Created new agent agent_3.
Hit enter to begin (will request first MemGPT message)
Warning: no wrapper specified for local LLM, using the default wrapper
step() failed
user_message = None
error = Failed to decode JSON from LLM output: {"Welcome to the world of MemGPT! How can I help you today?"
Failed to decode JSON from LLM output: {"Welcome to the world of MemGPT! How can I help you today?"
An exception ocurred when running agent.step():
Traceback (most recent call last):
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\local_llm\llm_chat_completion_wrappers\airoboros.py", line 392, in output_to_chat_completion_response
    function_json_output = json.loads(raw_llm_output)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ':' delimiter: line 1 column 61 (char 60)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\local_llm\llm_chat_completion_wrappers\airoboros.py", line 395, in output_to_chat_completion_response
    function_json_output = json.loads(raw_llm_output + "\n}")
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ':' delimiter: line 2 column 1 (char 61)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\main.py", line 526, in run_agent_loop
    ) = await memgpt_agent.step(user_message, first_message=False, skip_verify=no_verify)
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\agent.py", line 1084, in step
    raise e
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\agent.py", line 1020, in step
    response = await get_ai_reply_async(model=self.model, message_sequence=input_message_sequence, functions=self.functions)
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\agent.py", line 160, in get_ai_reply_async
    raise e
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\agent.py", line 141, in get_ai_reply_async
    response = await acreate(
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\openai_tools.py", line 115, in wrapper
    raise e
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\openai_tools.py", line 95, in wrapper
    return await func(*args, **kwargs)
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\openai_tools.py", line 124, in acompletions_with_backoff
    return get_chat_completion(**kwargs)
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\local_llm\chat_completion_proxy.py", line 62, in get_chat_completion
    chat_completion_result = llm_wrapper.output_to_chat_completion_response(result)
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\local_llm\llm_chat_completion_wrappers\airoboros.py", line 397, in output_to_chat_completion_response
    raise Exception(f"Failed to decode JSON from LLM output:\n{raw_llm_output}")
Exception: Failed to decode JSON from LLM output: {"Welcome to the world of MemGPT! How can I help you today?"
? Retry agent.step()? Yes
Warning: no wrapper specified for local LLM, using the default wrapper
step() failed
user_message = None
error = Failed to decode JSON from LLM output: {"Hello there! It's great to see you."
Failed to decode JSON from LLM output: {"Hello there! It's great to see you."
An exception ocurred when running agent.step():
Traceback (most recent call last):
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\local_llm\llm_chat_completion_wrappers\airoboros.py", line 392, in output_to_chat_completion_response
    function_json_output = json.loads(raw_llm_output)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ':' delimiter: line 1 column 39 (char 38)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\local_llm\llm_chat_completion_wrappers\airoboros.py", line 395, in output_to_chat_completion_response
    function_json_output = json.loads(raw_llm_output + "\n}")
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ':' delimiter: line 2 column 1 (char 39)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\main.py", line 526, in run_agent_loop
    ) = await memgpt_agent.step(user_message, first_message=False, skip_verify=no_verify)
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\agent.py", line 1084, in step
    raise e
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\agent.py", line 1020, in step
    response = await get_ai_reply_async(model=self.model, message_sequence=input_message_sequence, functions=self.functions)
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\agent.py", line 160, in get_ai_reply_async
    raise e
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\agent.py", line 141, in get_ai_reply_async
    response = await acreate(
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\openai_tools.py", line 115, in wrapper
    raise e
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\openai_tools.py", line 95, in wrapper
    return await func(*args, **kwargs)
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\openai_tools.py", line 124, in acompletions_with_backoff
    return get_chat_completion(**kwargs)
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\local_llm\chat_completion_proxy.py", line 62, in get_chat_completion
    chat_completion_result = llm_wrapper.output_to_chat_completion_response(result)
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\local_llm\llm_chat_completion_wrappers\airoboros.py", line 397, in output_to_chat_completion_response
    raise Exception(f"Failed to decode JSON from LLM output:\n{raw_llm_output}")
Exception: Failed to decode JSON from LLM output: {"Hello there! It's great to see you."
? Retry agent.step()? Yes
Warning: no wrapper specified for local LLM, using the default wrapper
💭 First encounter with a user. Remember to be as authentic and conversational as possible.
🤖 Hello there! It's great to meet you. Let's have an interesting conversation.

Enter your message:

ProjCRys commented 10 months ago

Fixed the problem:

The problem comes from the updated version of LM Studio. To fix it, change the pre_prompt setting in the model preset to an empty string:

"pre_prompt": "",

then save.
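If you prefer to script the edit, the change can be sketched as below. The file name and surrounding keys are stand-ins; the real preset file lives in LM Studio's config directory and its name varies by model, so adjust the path before use.

```python
import json
from pathlib import Path

# Stand-in for LM Studio's model preset file (real path and name vary).
preset = Path("demo.preset.json")
preset.write_text(json.dumps({"pre_prompt": "You are a helpful assistant.", "temp": 0.8}))

data = json.loads(preset.read_text())
data["pre_prompt"] = ""   # blank the server-side system prompt (the fix above)
data["temp"] = 0.4        # lower sampling temperature, as recommended in this thread
preset.write_text(json.dumps(data, indent=2))
```

Blanking `pre_prompt` stops LM Studio from injecting its own system prompt ahead of MemGPT's, which is what was steering the model away from the JSON format the wrapper expects.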


:: Define the paths to Python and the venv
set PYTHON_EXECUTABLE=C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\python.exe
set VENV_DIR=D:\AI\ChatBots\MemGPT_Setup\venv

:: Create the virtual environment
"%PYTHON_EXECUTABLE%" -m venv "%VENV_DIR%"

:: Check if the virtual environment creation was successful
if %errorlevel% neq 0 (
    echo An error occurred while creating the virtual environment. Press any key to exit.
    pause >nul
    exit /b %errorlevel%
)

:: Activate the virtual environment
call "%VENV_DIR%\Scripts\activate"

:: Install pymemgpt and its dependencies using pip
pip install pymemgpt
pip install transformers
pip install torch

:: Check if the installation was successful
if %errorlevel% neq 0 (
    echo An error occurred while installing pymemgpt. Press any key to exit.
    pause >nul
    exit /b %errorlevel%
)

:: Point MemGPT at the local LM Studio server
set OPENAI_API_BASE=http://localhost:1234
set BACKEND_TYPE=lmstudio
cls

:: Run memgpt (replace this with your specific memgpt command)
memgpt run --no_verify

:: Check if memgpt encountered an error
if %errorlevel% neq 0 (
    echo An error occurred while running memgpt. Press any key to exit.
    pause >nul
    exit /b %errorlevel%
)

:: Deactivate the virtual environment
deactivate

:: Pause to allow the user to review the output
echo Press any key to exit.
pause >nul


"temp": 0.8,

to

"temp": 0.4,

Or you can follow how I did it in this video: https://www.youtube.com/watch?v=zHtFuQiqhIo