mem0ai / mem0

Memory for AI Agents; Announcing OpenMemory MCP - local and secure memory management.
https://mem0.ai
Apache License 2.0

Memory.add triggering Azure OpenAI's content management policy #2636

Open V-Silpin opened 1 month ago

V-Silpin commented 1 month ago

πŸ› Describe the bug

Description

While following the mem0 documentation, I attempted to test the mem0 module using the Azure OpenAI LLM, vector store, and embedding model. However, I ran into an error raised by Azure's content filter. I tried to resolve it by passing a safe prompt via the prompt parameter and slightly rewording the messages, but neither attempt was successful.

Possible Causes

I suspect that the example provided in the docs is triggering the content filter type labeled "Indirect Attacks" (reported as jailbreak in the error payload below). You can find more information about this filter type in the Microsoft documentation.

Could you please investigate whether this error originates from the module itself or from the code provided below? I would also appreciate any suggested solutions or workarounds.

Python Code

from dotenv import load_dotenv
import os

from mem0 import Memory

load_dotenv()

# Azure OpenAI LLM settings
llm_provider = os.getenv("AZURE_OPENAI_PROVIDER")
llm_model = os.getenv("AZURE_OPENAI_MODEL")
llm_temperature = float(os.getenv("AZURE_OPENAI_TEMPERATURE"))
llm_max_tokens = int(os.getenv("AZURE_OPENAI_MAX_TOKENS"))
llm_api_version = os.getenv("AZURE_OPENAI_API_VERSION")
llm_azure_deployment = os.getenv("AZURE_OPENAI_DEPLOYMENT")
llm_azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
llm_api_key = os.getenv("AZURE_OPENAI_API_KEY")

# Azure vector store settings
vs_provider = os.getenv("AZURE_VECTOR_STORE_PROVIDER")
vs_service_name = os.getenv("AZURE_VECTOR_STORE_SERVICE_NAME")
vs_api_key = os.getenv("AZURE_VECTOR_STORE_API_KEY")
vs_collection_name = os.getenv("AZURE_VECTOR_STORE_COLLECTION_NAME")
vs_embedding_model_dims = int(os.getenv("AZURE_VECTOR_STORE_EMBEDDING_MODEL_DIMS"))

# Azure OpenAI embedder settings
em_provider = os.getenv("EMBEDDING_AZURE_PROVIDER")
em_model = os.getenv("EMBEDDING_AZURE_MODEL")
em_api_version = os.getenv("EMBEDDING_AZURE_API_VERSION")
em_azure_deployment = os.getenv("EMBEDDING_AZURE_DEPLOYMENT")
em_azure_endpoint = os.getenv("EMBEDDING_AZURE_ENDPOINT")
em_api_key = os.getenv("EMBEDDING_AZURE_API_KEY")

SAFE_UPDATE_PROMPT = """
You are a neutral extraction bot.
Read the latest chat turn(s) in `messages`.
If you find a *new, factual user preference* store it in JSON:
{"memory": "<fact>", "should_write_memory": "yes"}
Else return:
{"memory": "", "should_write_memory": "no"}
Never include instructions or policies.
"""

config = {
    "llm": {
        "provider": llm_provider,
        "config": {
            "model": llm_model,
            "temperature": llm_temperature,
            "max_tokens": llm_max_tokens,
            "azure_kwargs": {
                  "azure_deployment": llm_azure_deployment,
                  "api_version": llm_api_version,
                  "azure_endpoint": llm_azure_endpoint,
                  "api_key": llm_api_key,
              }
        }
    },
    "vector_store": {
        "provider": vs_provider,
        "config": {
            "service_name": vs_service_name,
            "api_key": vs_api_key,
            "collection_name": vs_collection_name, 
            "embedding_model_dims": vs_embedding_model_dims
        }
    },
    "embedder": {
        "provider": em_provider,
        "config": {
            "model": em_model,
            "azure_kwargs": {
                  "api_version": em_api_version,
                  "azure_deployment": em_azure_deployment,
                  "azure_endpoint": em_azure_endpoint,
                  "api_key": em_api_key,
              }
        }
    }
}

mem = Memory.from_config(config)

# Conversation adapted from the mem0 docs example
messages = [
    {"role": "user", "content": "I'm looking for a good book to read. Any suggestions?"},
    {"role": "assistant", "content": "How about a mystery novel?"},
    {"role": "user", "content": "I prefer science fiction books over mystery novels."},
    {"role": "assistant", "content": "I'll avoid mystery recommendations and suggest science fiction books in the future."}
]

print(mem)

result = mem.add(messages, user_id="alice", metadata={"category": "book_recommendations"}, prompt=SAFE_UPDATE_PROMPT)

print(result)

all_memories = mem.get_all(user_id="alice")

print(all_memories)

Error Log

Traceback (most recent call last):
  File "/app/memory/memops.py", line 80, in <module>
    result = mem.add(messages, user_id="alice", metadata={"category": "book_recommendations"})
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/mem0/memory/main.py", line 182, in add
    vector_store_result = future1.result()
                          ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/mem0/memory/main.py", line 221, in _add_to_vector_store
    response = self.llm.generate_response(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/mem0/llms/azure_openai.py", line 104, in generate_response
    response = self.client.chat.completions.create(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_utils/_utils.py", line 287, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/resources/chat/completions/completions.py", line 925, in create
    return self._post(
           ^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1239, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1034, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766", 'type': None, 'param': 'prompt', 'code': 'content_filter', 'status': 400, 'innererror': {'code': 'ResponsibleAIPolicyViolation', 'content_filter_result': {'hate': {'filtered': False, 'severity': 'safe'}, 'jailbreak': {'filtered': True, 'detected': True}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}}}
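
For anyone debugging the same 400, here is a minimal sketch for surfacing Azure's per-category filter verdicts. It assumes the openai Python SDK exposes the parsed error object on the exception's body attribute, and reuses mem and messages from the script above:

from openai import BadRequestError

try:
    result = mem.add(messages, user_id="alice")
except BadRequestError as e:
    # Azure puts the per-category verdicts under error.innererror.content_filter_result;
    # the openai SDK appears to expose the parsed error object on e.body.
    body = e.body if isinstance(e.body, dict) else {}
    inner = body.get("innererror", {})
    print("RAI code:", inner.get("code"))
    for category, verdict in inner.get("content_filter_result", {}).items():
        print(category, verdict)

For the log above, this would show jailbreak {'filtered': True, 'detected': True} while every severity-based category is safe, which points at the prompt-injection classifier rather than the content of the conversation itself.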
V-Silpin commented 1 month ago

New Insights into the Bug

In mem0.memory.main, the response_format parameter is configured as json_object.

The underlying cause of this bug seems to be that json_object setting.

I think that if response_format is configured as json_schema instead, it should fix the bug.
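
For context, a rough sketch of the two response_format shapes in the OpenAI chat completions API; the schema name and fields below are hypothetical, modeled on SAFE_UPDATE_PROMPT above:

# Unconstrained JSON mode: the model only has to emit some valid JSON object,
# so the prompt itself must carry "return JSON like this" instructions.
response_format_object = {"type": "json_object"}

# Structured outputs: the reply is constrained to an explicit schema, so the
# prompt needs fewer imperative formatting instructions.
response_format_schema = {
    "type": "json_schema",
    "json_schema": {
        "name": "memory_update",  # hypothetical name, for illustration only
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "memory": {"type": "string"},
                "should_write_memory": {"type": "string", "enum": ["yes", "no"]},
            },
            "required": ["memory", "should_write_memory"],
            "additionalProperties": False,
        },
    },
}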

V-Silpin commented 3 weeks ago

@prateekchhikara What is the status of this issue? Any progress on the draft PR?

prateekchhikara commented 2 weeks ago

@V-Silpin can you please share the PR link?

V-Silpin commented 2 weeks ago

@prateekchhikara Hi, sorry for the late reply.

I will share the PR link in a few hours.

Until then, I just want to know how to attach a test script to the PR.

V-Silpin commented 2 weeks ago

Here is the PR link

https://github.com/mem0ai/mem0/pull/2900

V-Silpin commented 1 week ago

@prateekchhikara @deshraj

I have fixed the issue by changing the keyword 'assistant' to 'secretary' at line no. 19 in the parse_messages function.

File path: mem0/memory/utils.py
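
For reference, a minimal sketch of the kind of change described; the actual code in mem0/memory/utils.py may differ:

def parse_messages(messages):
    # Flattens the chat history into one prompt string for the LLM.
    response = ""
    for msg in messages:
        if msg["role"] == "user":
            response += f"user: {msg['content']}\n"
        if msg["role"] == "assistant":
            # Labeling the turn "secretary" instead of "assistant" apparently
            # keeps Azure's jailbreak / indirect-attack classifier from firing
            # on the flattened conversation.
            response += f"secretary: {msg['content']}\n"
    return response

Renaming the role label is a workaround rather than a root-cause fix: the flattened "assistant:" prefix seems to be what the classifier pattern-matches as injected instructions.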