Open saidineshpola opened 2 months ago
For your scenario with very limited and frequently changing data, you can indeed bypass a VectorStoreIndex in RAG by using an alternative such as the KnowledgeGraphRAGQueryEngine. It queries a Knowledge Graph dynamically, without the overhead of maintaining a vector index, which makes it well suited to small or frequently updated datasets.
To implement this, build (or reuse) a Knowledge Graph and query it with the KnowledgeGraphRAGQueryEngine. Responses are generated from whatever is currently in the graph, so no pre-built vector index is needed.
For more detailed guidance and examples, see the LlamaIndex documentation and the example code in the repository, which show how to use Knowledge Graphs for efficient retrieval in scenarios like yours.
To continue the conversation, mention @dosu.
Question Validation
Question
I just want to know whether we can call the query engine in RAG without a VectorStoreIndex, as I have very limited data for retrieval and it will be changed at every inference call:

```python
all_sentences = ['Document1 data', 'Document2 data']
documents = [Document(text=sent) for sent in all_sentences]
self.index = VectorStoreIndex.from_documents(
    documents, service_context=self.service_context)
# Query the index with the prompt
```