mem0ai / mem0

The Memory layer for your AI apps
https://mem0.ai
Apache License 2.0

Fixing the fact extraction prompt #2037

Open PranavPuranik opened 5 days ago

PranavPuranik commented 5 days ago

Description

This is a prompt-engineering change.

I think each model should have its own prompt. We can use inheritance and override the one or two pieces that need to differ per model, as sketched below.

Please test this with other models. I removed what I felt was duplicated.

Fixes #1971 #2047
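
To make the inheritance idea concrete, here is a minimal sketch of how per-model prompts could be structured. The class and attribute names (FactExtractionPrompt, Llama31FactExtractionPrompt, instructions, examples) are hypothetical illustrations, not existing mem0 identifiers:

# Hypothetical sketch of per-model prompts via inheritance: a base class
# holds the shared prompt text, and each model subclass overrides only the
# pieces that need to differ.

class FactExtractionPrompt:
    """Base fact-extraction prompt shared across models."""

    instructions = "Extract distinct personal facts from the conversation."
    examples = 'Input: "I\'m visiting Paris" -> Output: {"facts": ["Visiting Paris"]}'

    def render(self) -> str:
        return f"{self.instructions}\n\n{self.examples}"


class Llama31FactExtractionPrompt(FactExtractionPrompt):
    """Tweak only the instructions for llama3.1; the examples are inherited."""

    instructions = (
        "Extract distinct personal facts from the conversation. "
        'Respond with ONLY a JSON object containing a "facts" list.'
    )

Smaller models like llama3.1 often need stricter output-format instructions than larger ones, which is exactly the kind of per-model tweak this structure isolates.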


How Has This Been Tested?

Tested with the following configuration:


from mem0 import Memory

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",
            "temperature": 0,
            "max_tokens": 8000,
            "ollama_base_url": "http://localhost:11434",  # Ensure this URL is correct
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            # Alternatively, you can use "snowflake-arctic-embed:latest"
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test",
            "embedding_model_dims": 768,
        }
    },
    "version": "v1.1",
}

# Initialize Memory with the configuration
m = Memory.from_config(config)

# Add memories
m.add("I'm visiting Paris", user_id="john")
m.add("I like pizza", user_id="john")

# Retrieve the stored memories for this user
memories = m.get_all(user_id="john")
print(memories)
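
For reference, a custom fact-extraction prompt can also be supplied through the config. Note the key name below is an assumption on my part: it is custom_prompt in some mem0 releases and custom_fact_extraction_prompt in later ones, so check the version you have installed:

# Assumption: the config key is "custom_prompt" in some mem0 releases and
# "custom_fact_extraction_prompt" in later ones; verify before relying on it.
custom_config = {
    **config,
    "custom_prompt": 'Extract personal facts and return JSON like {"facts": ["..."]}.',
}

m_custom = Memory.from_config(custom_config)
m_custom.add("I love hiking on weekends", user_id="john")
print(m_custom.get_all(user_id="john"))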

