Ninlawat-Puhu opened 2 months ago
To store and retrieve your `index_policy` and `index_leave` indexes in Redis, including using an `ExactMatchFilter` for `index_leave`, you'll need to integrate Redis functionality into your `VectorStoreIndex` instances. This involves using the `RedisVectorStore` class from the LlamaIndex framework. Here's a step-by-step guide:

1. Create `RedisVectorStore` instances for both your indexes. Specify the index name, Redis URL, and any other relevant configurations.

```python
from llama_index.legacy.vector_stores.redis import RedisVectorStore

redis_vector_store_policy = RedisVectorStore(
    index_name="policy_index",
    redis_url="redis://localhost:6379/",
    overwrite=True,
)

redis_vector_store_leave = RedisVectorStore(
    index_name="leave_index",
    redis_url="redis://localhost:6379/",
    overwrite=True,
    metadata_fields=["index_leave"],
)
```
2. Create `BaseNode` objects and add them to Redis using the `add` method of your `RedisVectorStore` instances. You'll need to subclass `BaseNode` to fit your data structure.

```python
from llama_index.core.schema import BaseNode

class CustomNode(BaseNode):
    # Implement the abstract methods and properties required by BaseNode here
    ...

# Adapt the constructor arguments to your own data structure
nodes_policy = [CustomNode(index_policy=data) for data in base_nodes + objects]
nodes_leave = [CustomNode(index_leave=data) for data in list_of_documents_leave]

redis_vector_store_policy.add(nodes_policy)
redis_vector_store_leave.add(nodes_leave)
```
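Whatever node class you end up with, what matters for the exact-match filtering later is that each node carries the filter key in its metadata. Here is a plain-Python sketch of that shape (field names and values are illustrative, no LlamaIndex dependency):

```python
# Minimal stand-in for a node: text plus a metadata dict.
# The "index_leave" metadata key is what an exact-match filter
# would match against; the values here are illustrative.

def make_node(text, leave_type):
    return {"text": text, "metadata": {"index_leave": leave_type}}

nodes_leave = [
    make_node("Annual leave accrues monthly.", "annual"),
    make_node("Sick leave requires a medical certificate.", "sick"),
]
```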
3. Query your indexes using the `query` method of the `RedisVectorStore` class. For exact matches, especially for `index_leave`, you'll need to construct a `VectorStoreQuery` object with appropriate `MetadataFilters`.

```python
from llama_index.core.vector_stores.types import (
    MetadataFilter,
    MetadataFilters,
    VectorStoreQuery,
)

metadata_filters = MetadataFilters(
    filters=[
        MetadataFilter(key="index_leave", value="desired_leave_value", operator="eq")
    ]
)

query_embedding = [0.1, 0.2, 0.3]  # use your actual query embedding
query = VectorStoreQuery(
    query_embedding=query_embedding,
    filters=metadata_filters,
    similarity_top_k=10,
)
result = redis_vector_store_leave.query(query)
```
This approach allows you to persist your indexes in Redis and query them efficiently, including applying filters for exact matches. Make sure to adjust the placeholders and example values to match your actual data and requirements.
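Conceptually, an exact-match metadata filter keeps only candidates whose metadata value equals the target exactly, before ranking the survivors by similarity. A plain-Python sketch of that behavior (illustrative data, no Redis or LlamaIndex required):

```python
# Sketch of exact-match metadata filtering: drop every candidate whose
# metadata value differs from the target, then rank survivors by score.

def exact_match_filter(candidates, key, value):
    """Return only candidates whose metadata[key] equals value exactly."""
    return [c for c in candidates if c.get("metadata", {}).get(key) == value]

candidates = [
    {"id": "a", "metadata": {"index_leave": "annual"}, "score": 0.91},
    {"id": "b", "metadata": {"index_leave": "sick"}, "score": 0.88},
    {"id": "c", "metadata": {"index_leave": "annual"}, "score": 0.75},
]

matches = exact_match_filter(candidates, "index_leave", "annual")
top = sorted(matches, key=lambda c: c["score"], reverse=True)
```

Note that "b" is excluded despite its high similarity score: the filter is applied before ranking, which is exactly why it is useful for column-style constraints.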
To continue the conversation, mention @dosu.
I got an error:

```
Deprecated kwarg, index_name, found upon initialization. RedisVectorStore now requires an IndexSchema object
```
@logan-markewich Can you help me with this case? If you need more information, don't hesitate to ask.
@Ninlawat-Puhu You can use the redis vector store https://docs.llamaindex.ai/en/stable/examples/vector_stores/RedisIndexDemo/?h=redisve
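Following the linked demo, the newer `RedisVectorStore` API takes an `IndexSchema` instead of the deprecated `index_name` kwarg. Below is a sketch along those lines; the field list, prefix, and embedding dimension are assumptions for this example, and the redisvl/LlamaIndex calls are shown in comments because they need the packages installed and a running Redis:

```python
# Index schema expressed as a plain dict. "index_leave" is declared as a
# tag field so it can be exact-match filtered; the dims value (1536) is an
# assumption and must match your embedding model's output size.
leave_schema_dict = {
    "index": {"name": "leave_index", "prefix": "doc"},
    "fields": [
        {"type": "tag", "name": "id"},
        {"type": "tag", "name": "doc_id"},
        {"type": "text", "name": "text"},
        {"type": "tag", "name": "index_leave"},
        {
            "type": "vector",
            "name": "vector",
            "attrs": {"dims": 1536, "algorithm": "hnsw", "distance_metric": "cosine"},
        },
    ],
}

# With llama-index-vector-stores-redis and redisvl installed, and Redis running:
# from redisvl.schema import IndexSchema
# from llama_index.vector_stores.redis import RedisVectorStore
#
# schema = IndexSchema.from_dict(leave_schema_dict)
# vector_store = RedisVectorStore(schema=schema, redis_url="redis://localhost:6379")
```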
Question Validation
Question
I have two indexes that keep different data, as in the code below:

```python
index_policy = VectorStoreIndex(nodes=base_nodes + objects)
index_leave = VectorStoreIndex.from_documents(list_of_documents_leave)

query_engine_leave = index_leave.as_query_engine(similarity_top_k=2, filters=filters, streaming=True)
query_engine_policy = index_policy.as_query_engine(similarity_top_k=6, streaming=True)
```
But I want to save those indexes (`index_policy`, `index_leave`) in a Redis database to persist the data, and I want to load the indexes from Redis instead of from local paths. Moreover, with `index_leave` I also use an `ExactMatchFilter` to filter information in some columns.
Can you recommend how to develop the code for my requirement?