To add memory to index.as_query_engine() in VectorStoreIndex, you can customize the storage context with a specific vector store. Here is an example using the DeepLakeVectorStore:
from llama_index.core import (
    VectorStoreIndex,
    SimpleDirectoryReader,
    StorageContext,
)
from llama_index.vector_stores.deeplake import DeepLakeVectorStore

# Construct vector store and customize storage context
storage_context = StorageContext.from_defaults(
    vector_store=DeepLakeVectorStore(dataset_path="<dataset_path>")
)

# Load documents and build index
documents = SimpleDirectoryReader("../paul_graham_essay/data").load_data()
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)

# Query index
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
This example demonstrates how to integrate a custom vector store into the VectorStoreIndex and use it for querying [1].

Additionally, the llama_index.core.memory module includes several memory components such as ChatMemoryBuffer, ChatSummaryMemoryBuffer, SimpleComposableMemory, and VectorMemory, which can be used to integrate memory functionality within the LlamaIndex framework [2][3].
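For example, a ChatMemoryBuffer can store and replay conversation history on its own (a minimal sketch; the token_limit value and the messages are purely illustrative):

from llama_index.core.llms import ChatMessage
from llama_index.core.memory import ChatMemoryBuffer

# Buffer that keeps roughly the most recent 3000 tokens of conversation
memory = ChatMemoryBuffer.from_defaults(token_limit=3000)

# Record a user/assistant exchange
memory.put(ChatMessage(role="user", content="What did the author do growing up?"))
memory.put(ChatMessage(role="assistant", content="He wrote short stories and programmed."))

# Retrieve the buffered history, trimmed to the token limit
history = memory.get()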
@USTAADCOM This is supported in many ways.

We have many chat engines, as well as agents: https://docs.llamaindex.ai/en/stable/module_guides/deploying/chat_engines/usage_pattern/#available-chat-modes

I would recommend something like:

chat_engine = index.as_chat_engine(
    chat_mode="condense_plus_context",
    system_prompt="You are a helpful assistant that answers questions about XYZ.",
)

But do check out the docs for this.
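For instance, the same recommendation with an explicit memory buffer attached (a minimal sketch: the token_limit value and the questions are placeholders, and it assumes as_chat_engine forwards the memory keyword to the underlying chat engine, as the linked docs describe):

from llama_index.core.memory import ChatMemoryBuffer

# Buffer that carries the conversation across turns
memory = ChatMemoryBuffer.from_defaults(token_limit=3000)

# Assumes as_chat_engine passes memory through to the chat engine
chat_engine = index.as_chat_engine(
    chat_mode="condense_plus_context",
    memory=memory,
    system_prompt="You are a helpful assistant that answers questions about XYZ.",
)

# Follow-up questions are condensed against the stored chat history
response = chat_engine.chat("What did the author do growing up?")
followup = chat_engine.chat("What did he do after that?")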
Feature Description
I have tried different RAG techniques, but VectorStoreIndex with a query engine provides very good results. The problem I am now facing is how to pass memory to the query engine. If anyone has found a solution, please share your experience.
Reason
No response
Value of Feature
No response