PranavPuranik opened 5 days ago
This PR is prompt engineering. I think each model should have its own prompt: we can use inheritance and override only the one or two parts that need tweaking per model. Please test this with other models. I removed what I felt was duplicated.
Fixes #1971, fixes #2047
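As a rough illustration of the inheritance idea (class and attribute names below are hypothetical, not mem0's actual API): a base class holds the default prompt pieces, and a model-specific subclass overrides only the part that needs tweaking.

```python
# Hypothetical sketch of per-model prompts via inheritance.
# Names (BasePrompts, Llama31Prompts, etc.) are illustrative only.

class BasePrompts:
    fact_extraction = "Extract facts from the conversation below and return them as JSON."
    output_format = 'Return: {"facts": ["..."]}'

    def build(self) -> str:
        # Compose the full prompt from the (possibly overridden) pieces.
        return f"{self.fact_extraction}\n{self.output_format}"


class Llama31Prompts(BasePrompts):
    # Smaller local models often need stricter formatting instructions,
    # so only the output-format piece is overridden here.
    output_format = 'Return ONLY valid JSON, no prose: {"facts": ["..."]}'


print(Llama31Prompts().build())
```

Each provider-specific model class would then pick up its own prompt variant without duplicating the shared text.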
Tested on:

```python
from mem0 import Memory

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",
            "temperature": 0,
            "max_tokens": 8000,
            "ollama_base_url": "http://localhost:11434",  # Ensure this URL is correct
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",  # Alternatively, "snowflake-arctic-embed:latest"
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test",
            "embedding_model_dims": 768,
        },
    },
    "version": "v1.1",
}

# Initialize Memory with the configuration
m = Memory.from_config(config)

# Add memories
m.add("I'm visiting Paris", user_id="john")
m.add("I like pizza", user_id="john")

# Retrieve memories
memories = m.get_all()
print(memories)
```