cpacker / MemGPT

Create LLM agents with long-term memory and custom tools 📚🦙
https://memgpt.readme.io
Apache License 2.0
11.38k stars 1.24k forks

Cannot save agent conversations. #279

Closed. ProjCRys closed this issue 10 months ago.

ProjCRys commented 10 months ago

OS: Windows 10


File I ran (creates a venv, auto-installs, and runs memgpt):

@echo off

:: Define the paths to Python and the venv
set PYTHON_EXECUTABLE=C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\python.exe
set VENV_DIR=D:\AI\ChatBots\MemGPT_Setup\venv

:: Create the virtual environment
"%PYTHON_EXECUTABLE%" -m venv "%VENV_DIR%"

:: Check if the virtual environment creation was successful
if %errorlevel% neq 0 (
    echo An error occurred while creating the virtual environment. Press any key to exit.
    pause >nul
    exit /b %errorlevel%
)

:: Activate the virtual environment
call "%VENV_DIR%\Scripts\activate"

:: Install pymemgpt using pip
pip install pymemgpt
pip install transformers
pip install torch

:: Check if the installation was successful
if %errorlevel% neq 0 (
    echo An error occurred while installing pymemgpt. Press any key to exit.
    pause >nul
    exit /b %errorlevel%
)

set OPENAI_API_BASE=http://localhost:1234
set BACKEND_TYPE=lmstudio
cls

:: Run memgpt (replace this with your specific memgpt command)
memgpt run --no_verify

:: Check if memgpt encountered an error
if %errorlevel% neq 0 (
    echo An error occurred while running memgpt. Press any key to exit.
    pause >nul
    exit /b %errorlevel%
)

:: Deactivate the virtual environment
deactivate

:: Pause to allow the user to review the output
echo Press any key to exit.
pause >nul
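The two `set` lines hand the local-LLM settings to MemGPT through environment variables, which on the Python side are just `os.environ` lookups. A minimal illustrative sketch (not MemGPT's actual code; the fallback defaults here are assumptions):

```python
import os

# Illustrative: how a Python program picks up the backend settings
# that the .bat file exports with `set` before launching it.
api_base = os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1")
backend = os.environ.get("BACKEND_TYPE", "openai")
print(backend, api_base)
```

Note that `set` in cmd.exe only affects the current console session, so the variables must be exported in the same script (or shell) that runs memgpt.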


Output:


? Would you like to select an existing agent? No
Creating new agent...
Created new agent agent_4.
Hit enter to begin (will request first MemGPT message)
Warning: no wrapper specified for local LLM, using the default wrapper
💭 User logged in. Updating core memory.
🤖 Hello CRys! I'm Nana, how are you today?

Enter your message: Hello, I'm doing fine. Thank you very much
Warning: no wrapper specified for local LLM, using the default wrapper
💭 User responded to greeting.
🤖 I am thrilled to hear that, CRys! What do you want me to call you instead of 'you'?

Enter your message: Just simple call me CRys. Nice and simple.
Warning: no wrapper specified for local LLM, using the default wrapper
💭 CRys requested to be called by their name.
🤖 It's a pleasure to meet you, CRys! I hope we can have a wonderful time together!

Enter your message: /save
Traceback (most recent call last):
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\cli\cli.py", line 157, in run
    loop.run_until_complete(run_agent_loop(memgpt_agent, first, no_verify, config))
  File "C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 646, in run_until_complete
    return future.result()
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\main.py", line 440, in run_agent_loop
    memgpt_agent.save()
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\agent.py", line 359, in save
    self.persistence_manager.save(os.path.join(self.config.save_persistence_manager_dir(), filename))
  File "D:\AI\ChatBots\MemGPT_Setup\venv\lib\site-packages\memgpt\persistence_manager.py", line 133, in save
    pickle.dump(self, fh, protocol=pickle.HIGHEST_PROTOCOL)
TypeError: 'NoneType' object is not callable

(Key locals from the rich traceback, trimmed: agent_name = 'agent_4'; filename = '2023-11-03_05_00_25_AM_PDT-0700.persistence.pickle'; config = MemGPTConfig(model_endpoint='http://localhost:1234', model='gpt-4', embedding_model='openai', archival_storage_type='local', persistence_manager_type=None, persistence_manager_save_file=None, ...).)

An error occurred while running memgpt. Press any key to exit.
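For context on what the failing frame is doing: persistence_manager.py:130-136 temporarily sets `self.archival_memory = None` because that attribute cannot be pickled, calls `pickle.dump(self, ...)`, then re-loads it. The same "exclude the unpicklable attribute" idea can be sketched with the standard pickle `__getstate__` hook instead of mutating the object. This is an illustrative pattern with a hypothetical `StateManager` class, not MemGPT's actual code:

```python
import io
import pickle


class StateManager:
    """Illustrative stand-in for a state manager that holds one
    attribute pickle cannot serialize (a lambda, here)."""

    def __init__(self):
        self.archival_memory = lambda query: []  # unpicklable stand-in for the retriever
        self.messages = ["hello"]

    def __getstate__(self):
        # Drop the unpicklable attribute from the pickled state instead of
        # temporarily setting it to None on the live object.
        state = self.__dict__.copy()
        state["archival_memory"] = None
        return state


mgr = StateManager()
buf = io.BytesIO()
pickle.dump(mgr, buf, protocol=pickle.HIGHEST_PROTOCOL)  # works: lambda excluded
restored = pickle.loads(buf.getvalue())
print(restored.messages)         # ['hello']
print(restored.archival_memory)  # None
```

With `__getstate__`, the live object keeps its `archival_memory` intact during the save, so there is no window in which the manager is in a half-valid state.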

ProjCRys commented 10 months ago

Fixed it:

@echo off
setlocal

REM Define the paths
set VENV_PATH=D:\AI\ChatBots\MemGPT_Setup\venv
set MEMGPT_PATH=D:\AI\ChatBots\MemGPT_Setup\MemGPT
set PYTHON_PATH=C:\Users\ADMIN\AppData\Local\Programs\Python\Python310\python.exe

REM Create the virtual environment
"%PYTHON_PATH%" -m venv "%VENV_PATH%"

if errorlevel 1 (
    echo Error: Unable to create the virtual environment. Press any key to exit.
    pause >nul
    exit /b
)

REM Activate the virtual environment
call "%VENV_PATH%\Scripts\activate"

if errorlevel 1 (
    echo Error: Unable to activate the virtual environment. Press any key to exit.
    pause >nul
    exit /b
)

REM Install required dependencies from the correct directory
pip install -r "%MEMGPT_PATH%\requirements-local.txt"
set OPENAI_API_BASE=http://localhost:1234
set BACKEND_TYPE=lmstudio

if errorlevel 1 (
    echo Error: Failed to install dependencies. Press any key to exit.
    pause >nul
    exit /b
)

REM Run MemGPT from the cloned repo
python MemGPT/main.py

if errorlevel 1 (
    echo Error: 'python MemGPT/main.py' encountered an error. Press any key to exit.
    pause >nul
    exit /b
)

REM Deactivate the virtual environment
deactivate

REM Wait for the user to press any key before exiting
pause >nul

epsil0nzero commented 10 months ago

Hi @ProjCRys. I experience the same error (TypeError: 'NoneType' object is not callable) when I use the /save command. I looked into your script and it seems I've done everything correctly. Can you explain how you got rid of the error?

image

ProjCRys commented 10 months ago

I pip installed requirements-local.txt and that somehow worked for me. I then ran the bat file to create a venv so that it wouldn't mess up my computer.

> Hi @ProjCRys. I experience the same error (TypeError: 'NoneType' object is not callable) when I use the /save command. I looked into your script and it seems I've done everything correctly. Can you explain how you got rid of the error?