Closed derOlek closed 1 year ago
🤖
Hello @derOlek,
I'm here to assist you with your issue. I'm an AI bot designed to help solve bugs, answer questions, and guide you in becoming a contributor while you're waiting for a human maintainer.
Based on the error message you provided, it seems that the `prompt` argument is expected to be a string, but an instance of `PromptTemplate` is being found instead. This is likely due to the way the `query` method of the `KnowledgeGraphQueryEngine` class is being used.
In your code, you're using the `query` method like this:

```python
response = query_engine.query("Tell me about Starlord")  # CRASHES HERE
```
The `query` method expects a string as an argument. If you're passing a `PromptTemplate` instance instead of a string, you'll encounter the `ValueError` you're seeing.

There was a similar issue in the LlamaIndex repository: "[Documentation]: custom LLM PromptTemplate no longer needed". It was resolved by the pull request "support both str and templates for query_wrapper_prompt in HF LLMs", which made the `query_wrapper_prompt` of Hugging Face LLMs accept both strings and `PromptTemplate` instances.
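As a rough sketch of what that change allows (assuming a llama-index 0.8.x-style API; the model and tokenizer names are placeholders), `query_wrapper_prompt` can be given either way:

```python
from llama_index.llms import HuggingFaceLLM
from llama_index.prompts import PromptTemplate

# query_wrapper_prompt as a plain string (model/tokenizer names are placeholders)
llm = HuggingFaceLLM(
    model_name="StabilityAI/stablelm-tuned-alpha-3b",
    tokenizer_name="StabilityAI/stablelm-tuned-alpha-3b",
    query_wrapper_prompt="<|USER|>{query_str}<|ASSISTANT|>",
)

# ...or as a PromptTemplate instance; both forms are accepted after that change
llm = HuggingFaceLLM(
    model_name="StabilityAI/stablelm-tuned-alpha-3b",
    tokenizer_name="StabilityAI/stablelm-tuned-alpha-3b",
    query_wrapper_prompt=PromptTemplate("<|USER|>{query_str}<|ASSISTANT|>"),
)
```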
In your case, you might want to ensure that you're passing a string to the `query` method. If you're still encountering the `ValueError`, please check that the variables you're providing to the `query` method are indeed strings. If they're not, you'll need to convert them to strings before passing them to the method.
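For example, if the question is currently held in a `PromptTemplate`, it can be rendered to a plain string before calling `query` (a minimal sketch; the template text is just an illustration, not from your code):

```python
from llama_index.prompts import PromptTemplate

# Illustrative template, not from the original report
question_tmpl = PromptTemplate("Tell me about {character}")

# Render the template to a plain string first
question = question_tmpl.format(character="Starlord")

# query() now receives a str rather than a PromptTemplate instance
response = query_engine.query(question)
print(response)
```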
I hope this helps! If you have any more questions or need further clarification, feel free to ask.
Best, Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
Make sure you use the `LlamaCPP` and `HuggingFaceEmbedding` classes from llama-index, and also pass the `LlamaCPP` instance as `llm` instead of `llm_predictor` into the service context.
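A minimal sketch of that setup (assuming llama-index 0.8.x; the model path and embedding model name are placeholders):

```python
from llama_index import ServiceContext
from llama_index.llms import LlamaCPP
from llama_index.embeddings import HuggingFaceEmbedding

# Local LLM and embedding model; paths/names are placeholders
llm = LlamaCPP(model_path="./models/llama-2-13b-chat.Q4_0.gguf")
embed_model = HuggingFaceEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Pass the LLM via `llm=` (not `llm_predictor=`) so downstream components
# receive a llama-index LLM object
service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=embed_model,
)
```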
Bug Description
When running a Llama model locally and trying to query a NebulaGraph-based knowledge graph index, a `ValueError` is thrown stating that the `prompt` argument is expected to be a string but a `PromptTemplate` instance was found.
I am running NebulaGraph inside a local Docker container. The llama_index version is 0.8.53.post3. Below you can find the code to reproduce this issue.
In addition to that, I noticed that no data is written to NebulaGraph: the given space just stays empty. But that's not the problem here ;)
Version
0.8.53.post3
Steps to Reproduce
Below is the code I use:
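(A sketch of the kind of setup involved, assuming llama-index 0.8.x; the model path, NebulaGraph space name, schema, and data directory below are placeholders rather than the exact values used in this report.)

```python
from llama_index import (
    KnowledgeGraphIndex,
    ServiceContext,
    SimpleDirectoryReader,
    StorageContext,
)
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index.graph_stores import NebulaGraphStore
from llama_index.llms import LlamaCPP
from llama_index.query_engine import KnowledgeGraphQueryEngine

# Local LLM and embeddings (paths/names are placeholders)
llm = LlamaCPP(model_path="./models/llama-2-13b-chat.Q4_0.gguf")
embed_model = HuggingFaceEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2")
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)

# NebulaGraph running in a local Docker container; connection details are
# read from the NEBULA_USER / NEBULA_PASSWORD / NEBULA_ADDRESS env vars
graph_store = NebulaGraphStore(
    space_name="guardians",
    edge_types=["relationship"],
    rel_prop_names=["relationship"],
    tags=["entity"],
)
storage_context = StorageContext.from_defaults(graph_store=graph_store)

# Build the knowledge graph index from local documents
documents = SimpleDirectoryReader("./data").load_data()
kg_index = KnowledgeGraphIndex.from_documents(
    documents,
    storage_context=storage_context,
    service_context=service_context,
    max_triplets_per_chunk=10,
)

# Query the graph; this is the call that raises the ValueError
query_engine = KnowledgeGraphQueryEngine(
    storage_context=storage_context,
    service_context=service_context,
    verbose=True,
)
response = query_engine.query("Tell me about Starlord")  # CRASHES HERE
print(response)
```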
Relevant Logs/Tracebacks