cpacker / MemGPT

Create LLM agents with long-term memory and custom tools šŸ“ššŸ¦™
https://memgpt.readme.io
Apache License 2.0

Using openai as a backend failing to auth due to null api_key passed to openai_chat_completions_request() #832

Closed bs7280 closed 5 months ago

bs7280 commented 5 months ago

Auth error trying to make the initial call to OpenAI. First time trying to use OpenAI as the LLM backend. It appears the API key is not being sent.

I set a breakpoint before line 191 in openai_tools.py (right before the POST request) and noticed that the auth token value is None:

headers = {'Content-Type': 'application/json', 'Authorization': 'Bearer None'}

As a temporary solution, I added this to the start of the openai_chat_completions_request(url, api_key, data) method, and it works:

import os
api_key = os.environ.get("OPENAI_KEY", api_key)

Please describe your setup

Screenshots See logs below

Config:

memgpt configure
? Select LLM inference provider: openai
? Enter your OpenAI API key (starts with 'sk-', see https://platform.openai.com/api-keys): sk-<REDACTED>
? Override default endpoint: https://api.openai.com/v1
? Select default model (recommended: gpt-4): gpt-3.5-turbo-16k
? Select embedding provider: openai
? Select default preset: memgpt_chat
? Select default persona: sam_simple_pov_gpt35
? Select default human: basic
? Select storage backend for archival data: local

Error message:

  File "/Users/USERNAME/code/py_memgpt_helloworld/venv/lib/python3.11/site-packages/memgpt/openai_tools.py", line 334, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/USERNAME/code/py_memgpt_helloworld/venv/lib/python3.11/site-packages/memgpt/openai_tools.py", line 384, in create
    return openai_chat_completions_request(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/USERNAME/code/py_memgpt_helloworld/venv/lib/python3.11/site-packages/memgpt/openai_tools.py", line 201, in openai_chat_completions_request
    raise http_err
  File "/Users/USERNAME/code/py_memgpt_helloworld/venv/lib/python3.11/site-packages/memgpt/openai_tools.py", line 193, in openai_chat_completions_request
    response.raise_for_status()  # Raises HTTPError for 4XX/5XX status
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/USERNAME/code/py_memgpt_helloworld/venv/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://api.openai.com/v1/chat/completions

If you're not using OpenAI, please provide additional information on your local LLM setup:

Edit: removed details about LocalLLMs

cpacker commented 5 months ago

This is an unfortunate bug in 0.2.11 where the key doesn't get set during configuration.

You can set it manually by opening the ~/.memgpt/config file (you can open the folder containing this file with memgpt folder). Then, add these lines, including the chunk that says [openai], to the file:

[model]
model = gpt-4
model_endpoint = https://api.openai.com/v1
model_endpoint_type = openai
context_window = 8192

[openai]
key = sk-....

[embedding]
embedding_endpoint_type = openai
embedding_endpoint = https://api.openai.com/v1
embedding_dim = 1536
embedding_chunk_size = 300
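
Since ~/.memgpt/config is an INI-style file, you can also patch in the key programmatically with the standard library's configparser. This is just a sketch of that approach (the helper name set_openai_key is my own, not a MemGPT API):

```python
import configparser
import os

def set_openai_key(config_path, api_key):
    """Add or update the [openai] key entry in an INI-style config file."""
    config = configparser.ConfigParser()
    config.read(config_path)  # silently skips a missing file
    if not config.has_section("openai"):
        config.add_section("openai")
    config.set("openai", "key", api_key)
    with open(config_path, "w") as f:
        config.write(f)

# Example usage (adjust the path for your setup):
# set_openai_key(os.path.expanduser("~/.memgpt/config"), "sk-...")
```

configparser writes entries as `key = value`, which matches the format of the existing sections, and it preserves the other sections it read in.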

You may also be able to set it properly with memgpt quickstart --backend openai (but I'm not completely sure).

This will be fixed in the 0.2.12 update.