run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai

[Question]: How do I customize the search workflow #12917

Open dtsgx126 opened 2 months ago

dtsgx126 commented 2 months ago

Question Validation

Question

I have used llama_index to build a vector database and a Neo4j knowledge graph from a local knowledge base, and the query quality is very good. However, responses are slow. In the overall flow, OpenAI first parses the original question into intermediate data (A); A is then sent to the vector store and Neo4j for retrieval; the retrieved context is summarized into (B) and fed back to OpenAI for the final answer. I want to inspect and debug A and B. Also, if I already have an A or a B, how can I feed it directly into llama_index? Is there an expert who can help me? Thank you.
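For context, here is a minimal sketch (my own illustration, not the exact pipeline above) of how the two stages can be separated in llama_index so the intermediate data is exposed: a retriever returns the nodes that would be summarized, and a response synthesizer accepts nodes directly, so pre-computed context can be fed straight in. The `index` variable and the placeholder strings are assumptions.

```python
# Illustrative sketch: split the query engine into retriever + synthesizer
# so intermediate retrieval results can be inspected or supplied manually.
# Assumes an `index` already built, as in the snippet further below.
from llama_index.core.response_synthesizers import (
    ResponseMode,
    get_response_synthesizer,
)
from llama_index.core.schema import NodeWithScore, TextNode

# Stage 1: retrieval -- these nodes are the context the engine would summarize.
retriever = index.as_retriever(similarity_top_k=10)
nodes = retriever.retrieve("original question")

# Stage 2: synthesis -- feed nodes (retrieved or hand-built) to the LLM.
synthesizer = get_response_synthesizer(response_mode=ResponseMode.TREE_SUMMARIZE)
response = synthesizer.synthesize("original question", nodes=nodes)

# If the context already exists as plain text (the "B" case), wrap it in
# nodes manually and synthesize directly, skipping retrieval entirely:
manual_nodes = [NodeWithScore(node=TextNode(text="precomputed context"), score=1.0)]
response = synthesizer.synthesize("original question", nodes=manual_nodes)
```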

dosubot[bot] commented 2 months ago

Hey there, @dtsgx126! 👋 I'm here to lend a hand while you're navigating through the cosmos of LlamaIndex. Think of me as your friendly neighborhood bot, ready to tackle bugs, answer your burning questions, and help you contribute more effectively. While we wait for a human maintainer to join us, I'll be diving into your issue to provide you with a detailed response soon. Stay tuned!

dtsgx126 commented 2 months ago

My parameters and how I use llama_index are as follows:

```python
# Imports for llama-index >= 0.10.
from llama_index.core import StorageContext, load_index_from_storage
from llama_index.graph_stores.neo4j import Neo4jGraphStore

# Connection details (username, password, url, database) defined elsewhere.
graph_store = Neo4jGraphStore(
    username=username,
    password=password,
    url=url,
    database=database,
)

storage_context = StorageContext.from_defaults(
    graph_store=graph_store, persist_dir="./neo4j_storage"
)

index = load_index_from_storage(
    storage_context=storage_context,
    max_triplets_per_chunk=3,
    include_embeddings=True,
    verbose=True,
)

query_engine = index.as_query_engine(
    include_text=True,
    response_mode="tree_summarize",
    embedding_mode="hybrid",
    similarity_top_k=10,
    explore_global_knowledge=True,
)

query_str = "some questions"
response = query_engine.query(query_str)
```
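One possible way to see exactly what is sent to OpenAI at each step is the built-in debug callback handler. The sketch below is illustrative rather than part of the original report, and assumes llama-index >= 0.10:

```python
# Illustrative sketch: capture every LLM call (the question-parsing step "A"
# and the summarization step "B") with LlamaDebugHandler.
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler

# Set the callback manager *before* building the index/query engine so that
# newly constructed components pick it up.
llama_debug = LlamaDebugHandler(print_trace_on_end=True)
Settings.callback_manager = CallbackManager([llama_debug])

# ... build index and query_engine as above, then:
response = query_engine.query(query_str)

# Each pair holds the prompt sent to the LLM and the completion it returned.
for start_event, end_event in llama_debug.get_llm_inputs_outputs():
    print(start_event.payload)  # prompt/messages sent to OpenAI
    print(end_event.payload)    # the model's response
```

From these payloads you can recover both A (the parsed query sent for retrieval) and B (the summarized context prompt for the final answer).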