I'm encountering a `400 Bad Request` error when mem0 calls the OpenAI Embeddings API. The error message indicates that the `input` parameter is invalid. Below is the relevant log line and traceback:

```
ERROR:root:Error adding memory: Error code: 400 - {'error': {'message': "'$.input' is invalid. Please check the API reference: https://platform.openai.com/docs/api-reference.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
Traceback (most recent call last):
  File "/Users/xxxxx/xxxx/xxxx/chat.py", line 73, in handle_query
    self.memory.add(
  File "/usr/local/lib/python3.12/site-packages/mem0/memory/main.py", line 159, in add
    function_result = function_to_call(**function_args)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/mem0/memory/main.py", line 464, in _create_memory_tool
    embeddings = self.embedding_model.embed(data)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/mem0/embeddings/openai.py", line 32, in embed
    self.client.embeddings.create(input=[text], model=self.config.model)
  File "/usr/local/lib/python3.12/site-packages/openai/resources/embeddings.py", line 114, in create
    return self._post(
           ^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1266, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 942, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1046, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "'$.input' is invalid. Please check the API reference: https://platform.openai.com/docs/api-reference.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
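For context: the `'$.input' is invalid` message from the Embeddings API usually means the value passed as `input` was empty, `None`, or not a plain string, so the `data` reaching `self.embedding_model.embed(data)` above was likely unusable. A minimal sketch of the kind of guard that would catch this before the request is sent (hypothetical helper, not part of mem0's API):

```python
def safe_embed_input(text):
    """Validate text before sending it as the Embeddings API `input`.

    Hypothetical helper for illustration: returns a one-element list
    (matching mem0's `input=[text]` call) when the text is usable,
    or None when the API would reject it with a 400.
    """
    if not isinstance(text, str):
        return None  # e.g. None or a dict slipped through
    text = text.strip()
    if not text:
        return None  # empty strings are also rejected
    return [text]
```

Logging the value returned here (or the raw `data` in `_create_memory_tool`) would confirm what mem0 is actually trying to embed.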
This is what I'm doing in the code, and this is the command I run:

```shell
python3 chat.py "how old am i?" "you are a support agent" "you reply well" 1
```
```python
def handle_query(self, system_prompt, query, user_id=None):
    """
    Handle a customer query and store the relevant information in memory.

    :param system_prompt: The system prompt to use for the AI.
    :param query: The customer query to handle.
    :param user_id: Optional user ID to associate with the memory.
    """
    try:
        previous_memories = self.search_memories(query, user_id)
        prompt = query
        if previous_memories:
            prompt = f"User input: {query}\nPrevious memories: {previous_memories}"
        # Start a chat completion request to the AI
        response = self.client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": prompt},
            ],
        )
        # Attempt to store the query in memory
        try:
            self.memory.add(
                query, user_id=user_id, metadata={"app_id": self.app_id}
            )
        except Exception as mem_error:
            logging.error(f"Error adding memory: {str(mem_error)}", exc_info=True)
            # Continue execution even if memory storage fails
        return response.choices[0].message.content
    except Exception as e:
        logging.error(f"Error in handle_query: {str(e)}", exc_info=True)
        return (
            "I'm sorry, but I encountered an error while processing your request."
        )
```
I changed the model to `gpt-4o`, and now I get an error saying that I should adjust my prompt:
```
ERROR:root:Error adding memory: Error code: 400 - {'error': {'message': "Failed to call a function. Please adjust your prompt. See 'failed_generation' for more details.", 'type': 'invalid_request_error', 'code': 'tool_use_failed', 'failed_generation': '<tool-use>{"tool_calls": [{"id": "pending", "type": "function", "function": {"name": "add_memory"}, "parameters": {}}]}</tool-use>'}}
Traceback (most recent call last):
  File "/Users/emmanuelketchabepa/Herd/whatsocial/chat.py", line 75, in handle_query
    self.memory.add(
  File "/usr/local/lib/python3.12/site-packages/mem0/memory/main.py", line 136, in add
    response = self.llm.generate_response(messages=messages, tools=tools)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/mem0/llms/groq.py", line 89, in generate_response
    response = self.client.chat.completions.create(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/groq/resources/chat/completions.py", line 289, in create
    return self._post(
           ^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/groq/_base_client.py", line 1225, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/groq/_base_client.py", line 920, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/groq/_base_client.py", line 1018, in _request
    raise self._make_status_error_from_response(err.response) from None
groq.BadRequestError: Error code: 400 - {'error': {'message': "Failed to call a function. Please adjust your prompt. See 'failed_generation' for more details.", 'type': 'invalid_request_error', 'code': 'tool_use_failed', 'failed_generation': '<tool-use>{"tool_calls": [{"id": "pending", "type": "function", "function": {"name": "add_memory"}, "parameters": {}}]}</tool-use>'}}
```
Because the memory failure is caught, the script still prints the chat response afterwards:

> I can't determine your exact age from this interaction. However, if you have a specific question or need assistance with something else, feel free to ask!