Memory lets your AI applications learn from each user interaction. It lets them become more effective as they adapt to users' personal tastes and even learn from prior mistakes. This template shows you how to build and deploy a long-term memory service that you can connect to from any LangGraph agent so it can manage user-scoped memories.
Create a .env
file.
cp .env.example .env
Set the required API keys in your .env
file.
The default value for model is shown below:
model: anthropic/claude-3-5-sonnet-20240620
Follow the instructions below to get set up, or pick one of the additional options.
To use Anthropic's chat models, add your API key to the .env file:
ANTHROPIC_API_KEY=your-api-key
To use OpenAI's chat models, add your API key to the .env file:
OPENAI_API_KEY=your-api-key
Open this template in LangGraph studio to get started and navigate to the chatbot
graph.
If you want to deploy to the cloud, follow these instructions to deploy this repository to LangGraph Cloud and use Studio in your browser.
Try chatting with the bot! It will try to save memories locally (on your desktop) based on what you tell it. For instance, if you say "Hi, I'm Will and I like to hike.", it will treat that as content worth remembering.
If you pause the conversation for ~10-20 seconds, the long-term-memory graph will start. You can click the "Memories" button at the top of your studio (if you've updated your app to a recent version) to see what's been inferred.
Create a new thread using the +
icon and chat with the bot again.
The bot should have access to the memories you've saved, and will use them to personalize its responses.
An effective memory service should address some key questions: what should each memory contain, how should memories be updated as new information arrives, and when should the service process new memories? The "correct" answers to these questions can be application-specific. We'll address these challenges below and explain how this template lets you flexibly configure what memories are managed and how, to keep your bot's memory on-topic and up-to-date. First, we'll talk about how you configure "what each memory should contain" using memory schemas.
Our memory service uses debouncing to store information efficiently. Instead of processing memories every time the user messages your chat bot, which could be costly and redundant, we delay updates.
Here's how debouncing works in this template:
- Each time the chatbot responds, it schedules a delayed run of the memory graph via the after_seconds parameter.
- If the user sends another message before that delay elapses, the pending run is superseded, so memories are only processed after a period of inactivity, which likely signals the end of a conversation segment.
- This balances timely memory formation with computational efficiency, avoiding unnecessary processing during rapid exchanges.
Debouncing allows us to maintain up-to-date memories without overwhelming our system or incurring excessive costs.
See this in the code here: chatbot/graph.py.
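Conceptually, the chatbot expresses this debounce by scheduling a delayed run of the memory graph and letting newer runs supersede older ones. Here is a minimal sketch using the LangGraph SDK; the graph name ("memory_graph"), the 15-second delay, and the config keys are assumptions for illustration, so see chatbot/graph.py for the actual wiring:

```python
from langgraph_sdk import get_client


async def schedule_memory_run(thread_id: str, user_id: str, messages: list) -> None:
    """Sketch: debounce memory formation by delaying the run and replacing stale ones."""
    client = get_client()  # the same LangGraph API that serves the chatbot
    await client.runs.create(
        thread_id,
        "memory_graph",  # assumed name of the graph that processes memories
        input={"messages": messages},
        config={"configurable": {"user_id": user_id}},
        after_seconds=15,  # wait for a lull in the conversation before running
        multitask_strategy="rollback",  # a newer scheduled run replaces this one
    )
```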
Next we need to tell our system what information to track. Memory schemas tell the service the "shape" of individual memories and how to update them. You can define any custom memory schema by providing memory_types as configuration. Let's review the two default schemas provided with the template to get a better sense of what they are doing.
The first schema is the User
profile schema, copied below:
{
"name": "User",
"description": "Update this document to maintain up-to-date information about the user in the conversation.",
"update_mode": "patch",
"parameters": {
"type": "object",
"properties": {
"user_name": {
"type": "string",
"description": "The user's preferred name"
},
"age": {
"type": "integer",
"description": "The user's age"
},
"interests": {
"type": "array",
"items": { "type": "string" },
"description": "A list of the user's interests"
},
"home": {
"type": "string",
"description": "Description of the user's home town/neighborhood, etc."
},
"occupation": {
"type": "string",
"description": "The user's current occupation or profession"
},
"conversation_preferences": {
"type": "array",
"items": { "type": "string" },
"description": "A list of the user's preferred conversation styles, pronouns, topics they want to avoid, etc."
}
}
}
}
The schema has a name and description, as well as JSON schema parameters that are all passed to an LLM. The LLM infers the values for the schema based on the conversations you send to the memory service.
The schema also has an update_mode parameter that defines how the service should update its memory when new information is provided. The patch update_mode instructs the graph to always maintain a single JSON object representing this user. We'll describe this in more detail in the patch updates section below.
The second memory schema we provide is the Note schema, shown below:
{
"name": "Note",
"description": "Save notable memories the user has shared with you for later recall.",
"update_mode": "insert",
"parameters": {
"type": "object",
"properties": {
"context": {
"type": "string",
"description": "The situation or circumstance where this memory may be relevant. Include any caveats or conditions that contextualize the memory. For example, if a user shares a preference, note if it only applies in certain situations (e.g., 'only at work'). Add any other relevant 'meta' details that help fully understand when and how to use this memory."
},
"content": {
"type": "string",
"description": "The specific information, preference, or event being remembered."
}
},
"required": ["context", "content"]
}
}
Just like the previous example, this schema has a name, description, and parameters. Notice that the update_mode this time is "insert". This instructs the LLM in the memory service to insert new memories into the list or update existing ones. The number of memories for this update_mode is unbounded, since the model can continue to store new notes any time something interesting shows up in the conversation. Each time the service runs, the model can generate multiple memories: some that update or re-contextualize existing ones, and some that document new information. Note that these memory schemas tend to have fewer parameters and are usually most effective if you include a field that lets the service provide contextual information (so that if your bot fetches this memory later, it isn't taken out of context).
To wrap up this section: memory_schemas
provide a name, description, and parameters that the LLM populates to store in the database. The update_mode
controls whether new information should always overwrite an existing memory or whether it should insert new memories (while optionally updating existing ones).
These schemas are fully customizable! Try extending the above and seeing how it updates memory formation in the studio by passing in via configuration (or defining in an assistant).
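For example, here's a rough sketch of passing custom memory_types to the memory graph programmatically when creating an assistant with the LangGraph SDK. The graph name ("memory_graph"), the deployment URL, and the "memory_types" configurable key are assumptions based on this template's defaults, so check langgraph.json and the graph's configuration schema before relying on them:

```python
from langgraph_sdk import get_client

# A trimmed-down custom "User" schema; any JSON-schema-style memory type works here.
custom_user_schema = {
    "name": "User",
    "description": "Maintain up-to-date information about the user in the conversation.",
    "update_mode": "patch",
    "parameters": {
        "type": "object",
        "properties": {
            "user_name": {"type": "string", "description": "The user's preferred name"},
            "favorite_locations": {
                "type": "array",
                "items": {"type": "string"},
                "description": "A list of the user's favorite places or locations",
            },
        },
    },
}


async def create_memory_assistant():
    client = get_client(url="http://localhost:2024")  # your deployment URL
    # The memory graph reads custom schemas from the "memory_types" configurable key.
    return await client.assistants.create(
        graph_id="memory_graph",
        config={"configurable": {"memory_types": [custom_user_schema]}},
    )
```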
In the previous section we showed how memory schemas define how memories should be updated with new information over time. Let's now turn our attention to how that new information is handled. Each update_mode uses tool calling in slightly different ways. We will use the trustcall library, which we created as a simple interface for generating and continuously updating JSON documents, to handle all of the cases below:
The "patch" update_mode
defines a memory management strategy that repeatedly updates a single JSON document. When new information is provided, the model generates "patches" - small updates to extend, delete, or replace content in the current memory document. This "patch" update_mode
offers three key benefits:
By defining specific parameters in the schema, we deliberately choose what information is relevant to track, excluding other potentially distracting information. This approach biases the service to focus on what we deem important for our specific application.
The memory update process works as follows:
- If no memory exists, trustcall prompts the model to populate the document from scratch.
- If a memory already exists, trustcall prompts the model to generate JSON patches via its PatchDoc tool, changing only the fields that need updating.
Applying updates as JSON patches is particularly effective for large, complicated schemas, where LLMs might otherwise forget or omit previously stored details when regenerating information from scratch.
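As a rough sketch of that flow, assuming trustcall's create_extractor API and a Pydantic rendering of the User schema (the actual graph builds its extractor from whatever memory_types you configure):

```python
from pydantic import BaseModel, Field
from langchain.chat_models import init_chat_model
from trustcall import create_extractor


class User(BaseModel):
    """Maintain up-to-date information about the user in the conversation."""

    user_name: str | None = Field(None, description="The user's preferred name")
    interests: list[str] = Field(default_factory=list, description="The user's interests")


llm = init_chat_model("anthropic:claude-3-5-sonnet-20240620")
extractor = create_extractor(llm, tools=[User], tool_choice="User")

# First run: no existing memory, so the model populates the document from scratch.
result = extractor.invoke({"messages": [("user", "Hi, I'm Will and I like to hike.")]})

# Later runs: pass the existing document so the model emits JSON patches rather
# than regenerating the whole profile.
existing = {"User": result["responses"][0].model_dump()}
updated = extractor.invoke(
    {"messages": [("user", "I just moved to Boulder.")], "existing": existing}
)
```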
The "insert" update_mode
lets you manage a growing collection of memories or notes, rather than a single, continuously updated document. This approach is particularly useful for tracking multiple, distinct pieces of information that accumulate over time, such as user preferences, important events, or contextual details that may be relevant in future interactions.
When handling memory creation and updates with the "insert" mode, the process works as follows:
- When no memories exist: the model extracts new notes directly from the conversation (it can create several in a single pass).
- When memories exist for the user: the existing notes are provided to the model in context, so it can update or re-contextualize them and insert new ones as needed.
This approach allows for flexible memory management, enabling both updates to existing memories and the creation of new ones as needed. The frequency of updates vs. inserts depends on the LLM you use, the schema descriptions you provide, and how you prompt the model in context. We encourage you to look at the LangSmith traces the memory graph generates and develop evaluations to strike the right balance of precision and recall.
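Under the hood, this maps to trustcall's support for inserting new documents alongside patching existing ones. Here's a hedged sketch; the enable_inserts flag and the (id, schema name, document) format for existing memories follow trustcall's documented usage, but the memory graph's exact plumbing may differ:

```python
from pydantic import BaseModel, Field
from langchain.chat_models import init_chat_model
from trustcall import create_extractor


class Note(BaseModel):
    """Save notable memories the user has shared with you for later recall."""

    context: str = Field(description="The situation where this memory is relevant")
    content: str = Field(description="The specific information being remembered")


llm = init_chat_model("anthropic:claude-3-5-sonnet-20240620")
# enable_inserts lets the model add brand-new Note documents in addition to
# patching the existing ones supplied via "existing".
extractor = create_extractor(llm, tools=[Note], enable_inserts=True)

existing = [
    # (doc_id, schema_name, document) tuples for memories already in the store
    ("0", "Note", {"context": "Weekend plans", "content": "Will likes to hike."}),
]
result = extractor.invoke(
    {
        "messages": [("user", "My sister Sarah is visiting next month.")],
        "existing": existing,
    }
)
# result["responses"] holds both updated and newly inserted Note objects.
```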
All these memories need to go somewhere reliable. All LangGraph deployments come with a built-in memory storage layer that you can use to persist information across conversations.
You can learn more about Storage in LangGraph here.
In our case, we are saving all memories namespaced by user_id
and by the memory schema you provide. That way you can easily search for memories for a given user and of a particular type. This diagram shows how these pieces fit together:
The studio uses the LangGraph API as its backend and exposes graph endpoints for all the graphs defined in your langgraph.json file.
"graphs": {
"chatbot": "./src/chatbot/graph.py:graph",
"memory_graph": "./src/memory_graph/graph.py:graph"
},
You can interact with your server and storage using the studio UI or the LangGraph SDK.
from langgraph_sdk import get_client

client = get_client(url="http:...")  # your server
# Memories are namespaced by user ID and memory schema name; the exact tuple below
# is illustrative - check the memory graph's code for the namespace it actually uses.
namespace = ("memories", "<user_id>", "User")
items = await client.store.search_items(namespace)
The separation of concerns between the application logic (the chatbot) and the memory logic (the memory graph) has a few advantages:
(1) minimal overhead by removing memory creation logic from the hotpath of the application (e.g., no latency cost for memory creation)
(2) memory creation logic is handled in a background job, separate from the chatbot, with scheduling to avoid duplicate processing
(3) memory graph can be updated and / or hosted (as a service) independently of the application (chatbot)
Here is a schematic of the interaction pattern:
Memory management can be challenging to get right. To make sure your memory_types suit your application's needs, we recommend starting from an evaluation set and adding to it over time as you find and address common errors in your service.
We have provided a few example evaluation cases in the test file here. As you can see, the metrics themselves don't have to be terribly complicated, especially not at the outset.
We use LangSmith's @unit decorator to sync all the evaluations to LangSmith so you can better optimize your system and identify the root cause of any issues that may arise.
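For instance, a unit-style check might look like the sketch below. Note that run_memory_graph is a hypothetical helper that invokes the memory graph over a canned conversation and returns the extracted memories; adapt it to however your tests actually drive the graph:

```python
from langsmith import unit


@unit
def test_remembers_user_name():
    # Hypothetical helper: runs the memory graph on a short conversation and
    # returns the extracted memory documents for the given user.
    memories = run_memory_graph(
        [("user", "Hi, I'm Will and I like to hike.")],
        user_id="test-user",
    )
    # The metric can stay simple at the outset: did the service capture the name at all?
    assert any("will" in str(memory).lower() for memory in memories)
```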
Customize memory_types: This memory graph supports the two update_modes described above (patch and insert), which dictate how memories will be managed. For example, try extending the default User schema with a favorite_locations field:
:[
{
"name": "User",
"description": "Update this document to maintain up-to-date information about the user in the conversation.",
"update_mode": "patch",
"parameters": {
"type": "object",
"properties": {
"user_name": {
"type": "string",
"description": "The user's preferred name"
},
"age": {
"type": "integer",
"description": "The user's age"
},
"interests": {
"type": "array",
"items": { "type": "string" },
"description": "A list of the user's interests"
},
"home": {
"type": "string",
"description": "Description of the user's home town/neighborhood, etc."
},
"occupation": {
"type": "string",
"description": "The user's current occupation or profession"
},
"conversation_preferences": {
"type": "array",
"items": { "type": "string" },
"description": "A list of the user's preferred conversation styles, pronouns, topics they want to avoid, etc."
},
"favorite_locations": {
"type": "array",
"items": { "type": "string" },
"description": "A list of the user's favorite places or locations"
}
}
}
}
]
If you paste the above in the "Memory Types" configuration in the Studio UI and continue the chat, new memories will be extracted to follow the updated schema.
You can modify existing schemas or provide new ones via configuration to customize the memory structures extracted by the memory graph. Here's how it works:
[
{
"name": "Person",
"description": "Track general information about people the user knows.",
"update_mode": "insert",
"parameters": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "The name of the person."
},
"relationship": {
"type": "string",
"description": "The relationship between the user and this person (e.g., friend, family, colleague)."
},
"notes": {
"type": "string",
"description": "General notes about this person, including how they met, user's feelings, and recent interactions."
}
},
"required": ["name"]
}
}
]
Since you've given this memory schema a new name, the memory service will save its memories within a new namespace and won't overwrite any previous ones.
You can modify schemas with an insertion update_mode in the same way as schemas with a patch update_mode. Define the structure, name it descriptively, set "update_mode" to "insert", and include a concise description. Parameters should have appropriate data types and descriptions. Consider adding constraints for data quality.
We'd also encourage you to extend this template by adding additional memory types! "Patch" and "insert" are incredibly powerful already, but you could also extend the logic to add more reflection over related memories to build stronger associations between the saved content. Make the code your own!