cpacker / MemGPT

Create LLM agents with long-term memory and custom tools 📚🦙
https://memgpt.readme.io
Apache License 2.0

Quickstart fails every way #955

Closed: cleesmith closed this issue 5 months ago

cleesmith commented 5 months ago

Describe the bug

It does not work

Please describe your setup

Screenshots

... in the Terminal:

memgpt run

? Would you like to select an existing agent? Yes
? Select agent: UnderstandingLighthouse

🔁 Using existing agent UnderstandingLighthouse

Hit enter to begin (will request first MemGPT message)

An exception occurred when running agent.step():

```
Traceback (most recent call last):
  File "/Users/cleesmith/anaconda3/envs/memgpt/lib/python3.11/site-packages/memgpt/main.py", line 356, in run_agent_loop
    new_messages, user_message, skip_next_user_input = process_agent_step(user_message, no_verify)
  File "/Users/cleesmith/anaconda3/envs/memgpt/lib/python3.11/site-packages/memgpt/main.py", line 332, in process_agent_step
    new_messages, heartbeat_request, function_failed, token_warning, tokens_accumulated = memgpt_agent.step(
  File "/Users/cleesmith/anaconda3/envs/memgpt/lib/python3.11/site-packages/memgpt/agent.py", line 676, in step
    raise e
  File "/Users/cleesmith/anaconda3/envs/memgpt/lib/python3.11/site-packages/memgpt/agent.py", line 596, in step
    response = self._get_ai_reply(
  File "/Users/cleesmith/anaconda3/envs/memgpt/lib/python3.11/site-packages/memgpt/agent.py", line 362, in _get_ai_reply
    raise e
  File "/Users/cleesmith/anaconda3/envs/memgpt/lib/python3.11/site-packages/memgpt/agent.py", line 342, in _get_ai_reply
    response = create(
  File "/Users/cleesmith/anaconda3/envs/memgpt/lib/python3.11/site-packages/memgpt/llm_api_tools.py", line 373, in wrapper
    raise e
  File "/Users/cleesmith/anaconda3/envs/memgpt/lib/python3.11/site-packages/memgpt/llm_api_tools.py", line 346, in wrapper
    return func(*args, **kwargs)
  File "/Users/cleesmith/anaconda3/envs/memgpt/lib/python3.11/site-packages/memgpt/llm_api_tools.py", line 465, in create
    return get_chat_completion(
  File "/Users/cleesmith/anaconda3/envs/memgpt/lib/python3.11/site-packages/memgpt/local_llm/chat_completion_proxy.py", line 159, in get_chat_completion
    result, usage = get_vllm_completion(endpoint, auth_type, auth_key, model, prompt, context_window, user)
  File "/Users/cleesmith/anaconda3/envs/memgpt/lib/python3.11/site-packages/memgpt/local_llm/vllm/api.py", line 46, in get_vllm_completion
    raise Exception(
Exception: API call got non-200 response code (code=521, msg=<Cloudflare error page, summarized below>) for address: https://api.memgpt.ai/v1/completions. Make sure that the vLLM server is running and reachable at https://api.memgpt.ai/v1/completions.
```

The `msg` payload was Cloudflare's HTML error page for api.memgpt.ai (2024-02-04 16:29:12 UTC): "521: Web server is down", Error code 521. The connection diagnostic showed Browser (Newark): Working, Cloudflare: Working, Host (api.memgpt.ai): Error, i.e. Cloudflare was reachable but the origin server was not returning a connection. Cloudflare's standard advice: visitors should try again in a few minutes; the site owner should contact their hosting provider.

? Retry agent.step()? No

Enter your message:

Cancelled by user

Finished.


It always seems to go to https://api.memgpt.ai/v1/completions (the mothership?) regardless of whether "configure" is set to openai or local. Why? It never makes a good impression when the simple quickstart does not work.
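
For reference, one way to check which endpoint the CLI will actually use is to read the saved config directly. A minimal sketch, assuming the settings written by `memgpt configure` live in an INI-style file at `~/.memgpt/config` with a `[model]` section; that path and the key names are assumptions, not confirmed in this thread:

```python
# Sketch: print the model endpoint MemGPT will use.
# ASSUMPTIONS: config is an INI file at ~/.memgpt/config with a
# [model] section containing model_endpoint_type / model_endpoint.
import configparser
from pathlib import Path

config_path = Path.home() / ".memgpt" / "config"
parser = configparser.ConfigParser()
parser.read(config_path)

if parser.has_section("model"):
    print("endpoint type:", parser.get("model", "model_endpoint_type", fallback="<unset>"))
    print("endpoint:     ", parser.get("model", "model_endpoint", fallback="<unset>"))
else:
    print(f"no [model] section found in {config_path}")
```

If the endpoint still shows api.memgpt.ai after configuring a local backend, re-running `memgpt configure` (or removing the stale config file) would be the first thing to try.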

cpacker commented 5 months ago

Hey @cleesmith, thanks for opening the issue. Our free endpoint was down earlier today but is back online (https://status.memgpt.ai/status/all). Can you confirm whether it's working now?

In the future we can try to revise the error messaging to be a little more communicative. E.g., in this case it might have helped if we returned/printed an error like:

Error: can't reach the MemGPT hosted endpoint. The MemGPT hosted endpoint may be offline - you can check its current status at https://status.memgpt.ai
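
A rough sketch of what that friendlier check could look like (not MemGPT's actual code): probe the hosted endpoint up front and translate connection failures or 5xx responses, including Cloudflare's 521, into the message above. The function name and the use of `requests` here are illustrative assumptions:

```python
# Illustrative sketch only -- not MemGPT's implementation.
# Probes the hosted endpoint and raises a human-readable error.
import requests

HOSTED_ENDPOINT = "https://api.memgpt.ai/v1/completions"
STATUS_PAGE = "https://status.memgpt.ai"

def check_hosted_endpoint(url: str = HOSTED_ENDPOINT) -> None:
    try:
        resp = requests.get(url, timeout=5)
    except requests.ConnectionError as e:
        raise RuntimeError(
            f"Error: can't reach the MemGPT hosted endpoint at {url}. "
            f"It may be offline -- check its current status at {STATUS_PAGE}"
        ) from e
    if resp.status_code >= 500:
        # 521 is Cloudflare's "origin web server is down" code.
        raise RuntimeError(
            f"Error: MemGPT hosted endpoint returned HTTP {resp.status_code}. "
            f"It may be offline -- check its current status at {STATUS_PAGE}"
        )
```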

cleesmith commented 5 months ago

@cpacker I really like the idea of "memory" along with analyzing/chatting with documents, but for me it would have to be totally local: no OpenAI and no endpoints (unless local). Any leak, opening, or remote endpoint is a point of failure ... "was down". I got the impression that MemGPT could run fully locally. Or perhaps I'm a rare, overly private use case. Thanks.

sarahwooders commented 5 months ago

@cleesmith see the docs for instructions on how to run MemGPT with local models: https://memgpt.readme.io/docs/local_llm

As @cpacker mentioned, `memgpt quickstart` connects to our hosted endpoint (which went down for about a day), since local LLMs require more configuration.
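
For those going the local route, a quick sanity check before pointing `memgpt configure` at a local server is to hit the same OpenAI-compatible completions route the traceback above was calling. A minimal sketch; the localhost URL and model name are placeholders for your own vLLM (or other OpenAI-compatible) setup:

```python
# Sketch: verify a local OpenAI-compatible server (e.g. vLLM) is
# reachable before configuring MemGPT to use it.
# PLACEHOLDERS: the port and model name depend on your local setup.
import requests

LOCAL_ENDPOINT = "http://localhost:8000"  # vLLM's default serve port

payload = {
    "model": "your-model-name",  # the model your local server loaded
    "prompt": "Hello",
    "max_tokens": 8,
}
resp = requests.post(f"{LOCAL_ENDPOINT}/v1/completions", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```

If that request succeeds, MemGPT's vLLM backend should be able to reach the same address.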

Going to close this issue for now, but let us know if you have other issues/questions!