kiasar closed this issue 10 months ago
🤖
Hello @kiasar! I'm Dosu, a friendly bot here to assist you with your LlamaIndex related queries, bug resolutions, and contributions. While you're waiting for a human maintainer, I'll be helping out. I'm currently reviewing the issue you've raised and will provide a comprehensive response shortly. Thanks for your patience!
🤖
Thank you for providing the detailed information. Based on the context provided, it seems like you're encountering an "Empty Response" error when running the official document code on LlamaIndex v0.8.65. This could be due to a few reasons:
The file you're trying to read is indeed being read, but it might not contain any text that the specific reader (PDFReader, DocxReader, or HWPReader) can extract. This could happen if the file is an image-based PDF or a DOCX file with images only. You can check the content of your .txt file to ensure it contains extractable text.
There might be an issue with the encoding of the file. If the file's encoding doesn't match the encoding expected by the reader, the reader might not be able to extract any text, resulting in an "Empty Response". You can check the encoding of your .txt file and ensure it matches the expected encoding.
There might be an issue with the specific reader you're using. If there's a bug in the reader, it might not be able to extract text correctly, even if the file is being read and contains extractable text. You can try using a different reader to see if the issue persists.
If you're using the HWPReader, it's possible that the file you're trying to read isn't a valid HWP file. The HWPReader checks if the file is a valid HWP file by looking for specific sections in the file. If these sections aren't found, the reader raises an exception. You can check if your .txt file is a valid HWP file.
You can refer to the llama_index/readers/file/docs_reader.py for more information on how the readers work.
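The encoding possibility mentioned above is quick to rule out. As a rough stdlib-only sketch (the file name and fallback encoding are just illustrative, not anything LlamaIndex does internally), you can try decoding the .txt file as UTF-8 and fall back to a more permissive encoding if that fails:

```python
def read_text_safely(path, fallback="latin-1"):
    """Try UTF-8 first; fall back to another encoding if decoding fails."""
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except UnicodeDecodeError:
        # The file isn't valid UTF-8; retry with a permissive fallback
        # (latin-1 maps every byte, so this never raises).
        with open(path, encoding=fallback) as f:
            return f.read()

# Usage: write a latin-1 encoded file and read it back.
with open("sample.txt", "wb") as f:
    f.write("café".encode("latin-1"))

print(read_text_safely("sample.txt"))  # café
```

If this prints garbage or raises, the reader is likely seeing the same problem when it ingests the file.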
In terms of the query_engine.query() function: if it doesn't find a match for the query in the provided documents, it raises a ValueError with the message "No source nodes passed evaluation." This is different from returning an empty list. You can refer to llama_index/query_engine/graph_query_engine.py and llama_index/query_engine/retry_source_query_engine.py for more information on how query_engine.query() works.
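The behaviour described above can be sketched in plain Python. This is an illustration of the pattern, not LlamaIndex's actual internals (the function name and the 0.5 threshold are made up): retrieved nodes are filtered by an evaluation step, and a ValueError is raised when none pass, rather than an empty list being returned:

```python
def query_with_evaluation(nodes, threshold=0.5):
    """Filter retrieved (text, score) pairs; raise instead of returning []."""
    passing = [text for text, score in nodes if score >= threshold]
    if not passing:
        # Mirrors the "No source nodes passed evaluation." behaviour
        # described for the retry/source query engines.
        raise ValueError("No source nodes passed evaluation.")
    return passing

print(query_with_evaluation([("Max is a cat.", 0.9), ("Unrelated.", 0.1)]))
# ['Max is a cat.']
```

The practical upshot: an "Empty Response" and a "No source nodes passed evaluation" error are different failure modes, and distinguishing them narrows down where the pipeline is breaking.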
If you're still encountering the issue after checking these possibilities, could you please provide more specific information about the error and the context in which it's occurring? This will help in providing a more specific solution.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@kiasar try lowering the context window a bit. "Empty Response" with open-source LLMs is usually because the token counting broke.
Tbh that example code is terrible, I should really replace it with something simpler.
If you want to use a model from huggingface, we have a few demo notebooks here https://docs.llamaindex.ai/en/stable/module_guides/models/llms.html#open-source-llms
Thanks, it finally worked with context_window = 350. Making it less than 300 or greater than 400 both result in errors.
I'll try the link you've provided. Thanks.
I hit this using LlamaIndex against SageMaker serving a Llama 2 model. context_window = 350 really helped, but that is unusable: querying becomes slow and the output is small.
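The symptom in the comments above (a window that only works in a narrow range) comes down to token budgeting: the prompt template, the retrieved context, and the tokens reserved for the answer must all fit inside context_window, and if the window is set above the model's true limit the counter over-fills the prompt. A rough illustration of the arithmetic (the function and the numbers are made up for this sketch):

```python
def available_context_tokens(context_window, prompt_overhead, num_output):
    """Tokens left for retrieved text after reserving prompt and output."""
    budget = context_window - prompt_overhead - num_output
    if budget <= 0:
        # Window too small: no room for any retrieved context, so the
        # query either fails or comes back as an empty response.
        raise ValueError("context_window too small for prompt + output")
    return budget

# With a 350-token window, ~100 tokens of prompt template, and 200
# reserved for the answer, only 50 tokens remain for retrieved context,
# which is why answers come out short.
print(available_context_tokens(350, 100, 200))  # 50
```

This is why shrinking context_window "fixes" the error but makes output tiny: the fix is really to make the configured window agree with the model's actual limit, not to shrink it arbitrarily.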
Bug Description
I'm trying to run the code from your official documentation, but I get "Empty Response".
I have only one .txt file, containing "I have a cat named Max. It is in the park.", and here is my code:
It returns "Empty Response".
P.S. I'm sure that it is able to read the .txt file, because this is the output of
print(documents)
Version
0.8.65
Steps to Reproduce
Just run the code.
Relevant Logs/Tracebacks
No response