Closed Gyamfi2 closed 7 months ago
🤖
Hi @Gyamfi2! I'm Dosu, a friendly bot here to assist you with your questions and issues while we wait for a human maintainer. I'm good at troubleshooting bugs and helping contributors like you navigate through this project. Feel free to ask me anything, and I'll do my best to help.
Based on your question, it seems you're trying to modify the keyword extraction process to suit your specific needs. The LlamaIndex framework is designed to be flexible and allows for customization of its core functions.
To modify the extraction of keywords from the query and chat engine in the LlamaIndex knowledge graph index so that it is compatible with your NebulaGraph query in the Streamlit app, you can customize the `_extract_triplets` and `_llm_extract_triplets` methods in the `KnowledgeGraphIndex` class.
Here is an example of how you can modify these methods:
```python
def _extract_triplets(self, text: str) -> List[Tuple[str, str, str]]:
    if self._kg_triplet_extract_fn is not None:
        return self._kg_triplet_extract_fn(text)
    else:
        return self._nebula_extract_triplets(text)

def _nebula_extract_triplets(self, text: str) -> List[Tuple[str, str, str]]:
    """Extract triplets from text using a NebulaGraph query."""
    # Implement your nebula graph query here
    # and return the extracted triplets
```
In the `_extract_triplets` method, you can see that it checks whether a custom function (`_kg_triplet_extract_fn`) is provided. If it is, that function is used for extraction; if not, it falls back to a default method (`_llm_extract_triplets` in the original class). You can replace this fallback with your own method (`_nebula_extract_triplets` in the example) that runs your NebulaGraph query.
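The dispatch pattern above can be sketched as a standalone function. This is a minimal sketch only: `default_extract` and `toy_extract` are hypothetical stand-ins for the LLM-based and nebula-backed extractors, not LlamaIndex APIs.

```python
from typing import Callable, List, Optional, Tuple

Triplet = Tuple[str, str, str]

def default_extract(text: str) -> List[Triplet]:
    # Stand-in for the default LLM-based extractor.
    return []

def extract_triplets(
    text: str,
    custom_fn: Optional[Callable[[str], List[Triplet]]] = None,
) -> List[Triplet]:
    # Mirrors _extract_triplets: prefer the custom function when
    # one is supplied, otherwise fall back to the default extractor.
    if custom_fn is not None:
        return custom_fn(text)
    return default_extract(text)

def toy_extract(text: str) -> List[Triplet]:
    # Toy stand-in for a nebula-backed extractor: link the first
    # and last word of the text.
    words = text.split()
    return [(words[0], "relates_to", words[-1])] if len(words) >= 2 else []

print(extract_triplets("NLP powers Language", custom_fn=toy_extract))
# [('NLP', 'relates_to', 'Language')]
print(extract_triplets("NLP powers Language"))
# []
```

Note that, depending on your LlamaIndex version, `KnowledgeGraphIndex` also accepts a `kg_triplet_extract_fn` argument at construction time, in which case you can pass your custom function directly instead of subclassing.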
As for extracting keywords directly from the chat engine when `verbose=True`, the current context does not provide information on how the chat engine is implemented or how it interacts with the `KnowledgeGraphIndex` class, so it's not possible to give a specific solution without additional information. In general, though, you can modify the chat engine to call the `_extract_triplets` method whenever a new message is received and `verbose=True`.
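One way to surface the keywords without relying on `verbose=True` is to wrap the chat engine and run the extraction yourself on every incoming message. This is a sketch only: `KeywordChatWrapper`, `chat_fn`, and `extract_fn` are hypothetical names, not part of LlamaIndex.

```python
from typing import Callable, List, Tuple

Triplet = Tuple[str, str, str]

class KeywordChatWrapper:
    """Hypothetical wrapper that runs triplet extraction on each
    incoming message before delegating to the wrapped chat engine."""

    def __init__(
        self,
        chat_fn: Callable[[str], str],
        extract_fn: Callable[[str], List[Triplet]],
        verbose: bool = False,
    ):
        self.chat_fn = chat_fn
        self.extract_fn = extract_fn
        self.verbose = verbose
        self.last_keywords: List[str] = []

    def chat(self, message: str) -> str:
        triplets = self.extract_fn(message)
        # Collect subjects and objects as the "keywords" for this turn.
        self.last_keywords = sorted(
            {t[0] for t in triplets} | {t[2] for t in triplets}
        )
        if self.verbose:
            print("Extracted keywords:", self.last_keywords)
        return self.chat_fn(message)

# Usage with dummy stand-ins for the chat engine and extractor:
wrapper = KeywordChatWrapper(
    chat_fn=lambda msg: "ok",
    extract_fn=lambda msg: [("NLP", "is_a", "Field")],
    verbose=True,
)
wrapper.chat("tell me about NLP")
print(wrapper.last_keywords)  # ['Field', 'NLP']
```

Keeping the keywords on `last_keywords` means the Streamlit app can read them after each turn instead of scraping verbose console output.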
You can find more details about these methods in the `KnowledgeGraphIndex` class in the LlamaIndex codebase.
I hope this helps! If you have further questions or need more clarification, feel free to ask.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
Question Validation
Question
How do I extract the keywords from the query and chat engine from the knowledge graph index so that I can use them to query my NebulaGraph on the Streamlit app? For the query, I noticed the assignment `related_entities = list(list(response.metadata.values())[0]["kg_rel_map"].keys())`, but the output format does not make the NebulaGraph query `f"MATCH p=(n)-[*1..2]-() \n WHERE id(n) IN {related_entities} \nRETURN p"` produce the intended results. It gives an empty graph because the `related_entities` format isn't a recognized entity id. For example, I mostly obtain entities in a format like `['NLP{name: NLP}']` instead of `['NLP']`, and `['NLP{name: NLP}', 'Language{name: Language}']` instead of `['NLP', 'Language']`. Is there a way you can assist me in resolving this? Again, is there any way to get the extracted keywords for the chat engine too? I understand that when `verbose=True`, intermediate results like extracted keywords are shown. I'm curious, is there any way to extract the keywords directly?