Hey there, @fbpo23! I'm here to help you with any issues you're facing. I can assist with bug fixes, answer questions, and guide you on becoming a contributor. Let's work together to resolve the problem you've encountered.
The issue you're encountering with the `SynthesizeEndEvent` and `QueryEndEvent` wiping out the response in LlamaIndex is indeed puzzling, especially since the `NullEventHandler` is supposed to be a no-op. Here are a few steps and considerations to help you debug and potentially resolve this issue:

1. **Ensure Proper Event Handling:** The `NullEventHandler` should not affect the response, but make sure that no other handlers are registered for these events.
2. **Check for Side Effects:** Even though the `NullEventHandler` does nothing, ensure that there are no side effects or state changes happening elsewhere in the code that might be triggered by these events.
3. **Debugging Steps:** Add logging around the `SynthesizeEndEvent` and `QueryEndEvent` to capture the state of the response before and after these events. This might help pinpoint where the response is being altered.
4. **Review Event Dispatching Logic:** Verify how the dispatcher resolves and invokes the handlers for these events.
5. **Alternative Approach:** Temporarily comment out the `SynthesizeEndEvent` and `QueryEndEvent` dispatches if they are not critical for your use case. However, this is not a long-term solution.

Here is a snippet of how you might add more detailed logging:
```python
# In llama_index.core.base.base_query_engine.py
# Around lines 53-55
print(f"Response before QueryEndEvent: {response}")
# Existing code for QueryEndEvent
print(f"Response after QueryEndEvent: {response}")

# In llama_index.core.response_synthesizers.base.py
# Around lines 255-260
print(f"Response before SynthesizeEndEvent: {response}")
# Existing code for SynthesizeEndEvent
print(f"Response after SynthesizeEndEvent: {response}")
```
Additionally, ensure that the `output_cls` parameter is correctly defined and that the response conforms to the expected structure of the `ResponseModel` class.
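For reference, a minimal sketch of what a correctly defined `output_cls` might look like (the `ResponseModel` fields and the `index` variable here are assumptions, not taken from your code, and as the replies below note, importing from `pydantic.v1` instead is what ultimately resolves this issue):

```python
from pydantic import BaseModel  # the fix suggested below swaps this for pydantic.v1

class ResponseModel(BaseModel):
    # assumed fields -- replace with your actual schema
    answer: str

query_engine = index.as_query_engine(output_cls=ResponseModel)
```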
If the issue persists, consider creating a minimal reproducible example and sharing it with the maintainers for further investigation. This will help them understand the context and provide a more targeted solution.
To continue talking to Dosu, mention @dosu.
@fbpo23 try `from pydantic.v1 import BaseModel` instead
That worked! Thanks a lot. I'm curious though: why would the response come back empty after that event if it has no logic inside it?
I have no idea -- I think it's related to how we internally use the pydantic v1 bridge, and there's some weird pydantic magic happening when classes from v1 and v2 touch each other
Hoping to move to v2 soon 🙏🏻
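For anyone hitting the same thing, here is a minimal sketch of the kind of v1/v2 mismatch the bridge can run into. The exact internal check is an assumption; the point is only that a v2 model instance does not satisfy v1 `isinstance` checks, so code written against the v1 bridge can silently skip it:

```python
from pydantic import BaseModel as V2BaseModel
from pydantic.v1 import BaseModel as V1BaseModel

class Answer(V2BaseModel):
    text: str

# Library code written against the v1 bridge sees a v2 instance as "not a model":
print(isinstance(Answer(text="hi"), V1BaseModel))  # False -- check silently fails
print(isinstance(Answer(text="hi"), V2BaseModel))  # True
```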
I see. Thanks for the help!
Bug Description
I am trying to get output from LlamaIndex's query engine, but the output is coming out empty. I dove into LlamaIndex's source code to find the problem and found that the response is being correctly retrieved from the LLM as JSON, but is getting wiped out both by the `SynthesizeEndEvent` and by the `QueryEndEvent` (the latter was verified by commenting out the `SynthesizeEndEvent` so that the response reaches the `QueryEndEvent` intact). If I comment out both of these events, the output is correctly parsed as the `output_cls` I defined on the query engine.

I am using AzureOpenAI as the LLM and a Hugging Face embedding using the `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2` transformer.

To debug further, I went into the dispatcher code and printed out the name of the event being triggered and the handler handling it, in hopes of finding out which part of the code was causing this. I also printed out the response before and after those events; the output showed a `NullEventHandler` handling both events.

Investigating this `NullEventHandler`, it seemed to me that it was a template event handler with no logic in it, so it couldn't really affect the response. But somehow it was.

Which led me to write this issue. Any suggestions?
Version
llama-index-agent-openai==0.2.7
llama-index-core==0.10.43
llama-index-embeddings-huggingface==0.2.1
llama-index-llms-azure-openai==0.1.8
llama-index-llms-openai==0.1.22
llama-index-readers-file==0.1.23
llama-index-vector-stores-postgres==0.1.7
Steps to Reproduce
Put some `.txt` files with information for RAG in the `data` folder and run the following code:
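(The original code block did not survive in this thread; the sketch below reconstructs it from the details mentioned: AzureOpenAI as the LLM, the Hugging Face embedding model above, and a pydantic `output_cls`. The `ResponseModel` fields and the Azure credentials are placeholders.)

```python
from pydantic import BaseModel  # importing from pydantic.v1 instead is the suggested fix

from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.azure_openai import AzureOpenAI

class ResponseModel(BaseModel):
    # placeholder schema for the structured answer
    answer: str

Settings.llm = AzureOpenAI(
    engine="my-deployment",  # placeholder Azure deployment name
    api_key="...",           # placeholder credentials
    azure_endpoint="https://example.openai.azure.com/",
    api_version="2024-02-01",
)
Settings.embed_model = HuggingFaceEmbedding(
    model_name="sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
)

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(output_cls=ResponseModel)

print(query_engine.query("What does the document say?"))
```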
With this code, the output comes out empty. However, if I comment out lines 53-55 of `llama_index.core.base.base_query_engine.py` (`QueryEndEvent`) and lines 255-260 of `llama_index.core.response_synthesizers.base.py` (`SynthesizeEndEvent`), the output comes out filled as it is supposed to.

Relevant Logs/Tracebacks
No response