I am trying to run MemGPT with the 4K-context "lizpreciatior/lzlv_70b_fp16_hf" model to take advantage of MemGPT's effectively unlimited context, but I'm running into issues.
My code below works fine on its own, but I don't know how much of the MemGPT code I would have to modify to make it work.
I tried various combinations in the openai_tools.py file, but there seem to be too many changes needed, and I keep getting this error:
An exception occurred when running agent.step():
Traceback (most recent call last):
  File "/data/localLLM/.venv/lib/python3.10/site-packages/memgpt/main.py", line 263, in run_agent_loop
    new_messages, user_message, skip_next_user_input = process_agent_step(user_message, no_verify)
  File "/data/localLLM/.venv/lib/python3.10/site-packages/memgpt/main.py", line 239, in process_agent_step
    new_messages, heartbeat_request, function_failed, token_warning = memgpt_agent.step(
  File "/data/localLLM/.venv/lib/python3.10/site-packages/memgpt/agent.py", line 674, in step
    raise e
  File "/data/localLLM/.venv/lib/python3.10/site-packages/memgpt/agent.py", line 594, in step
    response = self.get_ai_reply(
  File "/data/localLLM/.venv/lib/python3.10/site-packages/memgpt/agent.py", line 794, in get_ai_reply
    raise e
  File "/data/localLLM/.venv/lib/python3.10/site-packages/memgpt/agent.py", line 777, in get_ai_reply
    response = create(
  File "/data/localLLM/.venv/lib/python3.10/site-packages/memgpt/openai_tools.py", line 291, in wrapper
    return func(*args, **kwargs)
  File "/data/localLLM/.venv/lib/python3.10/site-packages/memgpt/openai_tools.py", line 342, in create
    return openai_chat_completions_request(
  File "/data/localLLM/.venv/lib/python3.10/site-packages/memgpt/openai_tools.py", line 117, in openai_chat_completions_request
    raise http_err
  File "/data/localLLM/.venv/lib/python3.10/site-packages/memgpt/openai_tools.py", line 105, in openai_chat_completions_request
    response.raise_for_status()  # Raises HTTPError for 4XX/5XX status
  File "/data/localLLM/.venv/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 422 Client Error: Unprocessable Entity for url: https://api.deepinfra.com/v1/openai/chat/completions
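A 422 from an OpenAI-compatible endpoint usually means the request body contains fields the backend rejects. One plausible culprit here: MemGPT sends OpenAI function-calling fields (`functions`, `function_call`) in its payload, which some OpenAI-compatible providers don't accept. A minimal diagnostic sketch (the helper name and field list are my own, not from MemGPT; this only helps confirm whether those fields trigger the error, since MemGPT itself depends on function calling):

```python
def strip_function_fields(payload):
    """Return a copy of a chat-completions request payload without the
    OpenAI function-calling fields, which some OpenAI-compatible
    backends reject with a 422 Unprocessable Entity.

    Hypothetical diagnostic helper: resend the stripped payload by hand
    (e.g. with requests.post and the response body printed) to see
    whether the 422 goes away.
    """
    unsupported = {"functions", "function_call", "tools", "tool_choice"}
    return {k: v for k, v in payload.items() if k not in unsupported}
```

When debugging a 422 it also helps to print `response.text` before `raise_for_status()` fires, since the server's error body usually names the offending field.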
Here is the working code:
import openai

stream = True  # or False

# Point the OpenAI client at our endpoint
openai.api_key = "xxxxxxxxxxxxxxxxxxx"
openai.api_base = "https://api.deepinfra.com/v1/openai"

MODEL_DI = "lizpreciatior/lzlv_70b_fp16_hf"

prompt = '''NEVER SAY THAT YOU ARE AN AI.'''

# Initialize an empty list to store the conversation history
conversation_history = []
conversation_history.append({"role": "system", "content": prompt})

while True:
    user_input = input('You: ')
    # Add the user's message to the conversation history
    conversation_history.append({"role": "user", "content": user_input})
    chat_completion = openai.ChatCompletion.create(
        model=MODEL_DI,
        messages=conversation_history,
        stream=False,
        max_tokens=200,
    )
    # Extract the response and add it to the conversation history
    response = chat_completion['choices'][0]['message']['content']
    tokens_used = chat_completion['usage']['total_tokens']
    conversation_history.append({"role": "assistant", "content": response})
    print(response)
    print(f"Tokens used: {tokens_used}\n")
    print('----------------------------')
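Note that the loop above is exactly why MemGPT matters with a 4K-context model: `conversation_history` grows without bound, and eventually the request will exceed the context window. A rough sketch of manual truncation under a token budget (the function name and the ~4-characters-per-token estimate are my own assumptions; MemGPT proper uses a real tokenizer and summarizes evicted messages into archival memory instead of dropping them):

```python
def truncate_history(history, max_tokens=4096, reserve=512):
    """Keep the system prompt plus the most recent messages that fit
    within an approximate token budget.

    Hypothetical sketch: token counts are estimated at ~4 chars/token,
    and `reserve` leaves room for the model's reply (max_tokens=200 in
    the loop above). MemGPT instead summarizes evicted turns rather
    than silently discarding them.
    """
    def est_tokens(msg):
        return max(1, len(msg["content"]) // 4)

    system = [m for m in history if m["role"] == "system"]
    rest = [m for m in history if m["role"] != "system"]

    budget = max_tokens - reserve - sum(est_tokens(m) for m in system)
    kept = []
    for msg in reversed(rest):  # walk newest-to-oldest
        cost = est_tokens(msg)
        if budget - cost < 0:
            break
        budget -= cost
        kept.append(msg)

    return system + list(reversed(kept))  # restore chronological order
```

Calling `conversation_history = truncate_history(conversation_history)` before each `ChatCompletion.create` would keep the naive loop under the 4K window, at the cost of forgetting old turns.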
Seems MemGPT is a dead project... Thankfully I haven't written much code, so not many cycles wasted here. Feel free to close this topic (which I'm sure will be done in another minute or so).