run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai

[Bug]: ToolInteractiveReflectionAgentWorker not doing corrective reflection #16609

Open · BurnyCoder opened 3 days ago

BurnyCoder commented 3 days ago

Bug Description

Hello. I ran exactly the code from this example line by line, only with different tool contents (which shouldn't matter): https://docs.llamaindex.ai/en/stable/examples/agent/introspective_agent_toxicity_reduction/ https://www.youtube.com/watch?v=OLj5MFNHP0Q. I had to pass main_agent_worker, because leaving it as None crashes with:

 File "/home/burny/.local/lib/python3.11/site-packages/llama_index/agent/introspective/step.py", line 149, in run_step
    reflective_agent_response = reflective_agent.chat(original_response)
                                                      ^^^^^^^^^^^^^^^^^
UnboundLocalError: cannot access local variable 'original_response' where it is not associated with a value
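
The crash is the classic pattern where a name is bound only inside a conditional branch. A minimal standalone illustration of the pattern (illustration only, not the library source):

    # Illustration only: original_response is bound only when a main
    # agent worker is supplied, so passing None leaves it undefined.
    def run_step(main_agent_worker, task_input):
        if main_agent_worker is not None:
            original_response = main_agent_worker.chat(task_input)
        return original_response  # UnboundLocalError when the branch is skipped

    run_step(None, "hello")  # raises UnboundLocalError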

But on one device I see no LLM critique responses in the terminal, and on another device with the exact same code I see:

=== LLM Response ===
Hello! How can I assist you today?
Critique: Hello! How can I assist you today?
Correction: HTTP traffic consisting solely of POST requests is considered suspicious for several reasons:

with no correction actually happening in the two-agent communication.

I tried downgrading to the LlamaIndex versions from when that example was written, but I get the same behavior:

pip install --upgrade --force-reinstall \
llama-index-agent-introspective==0.1.0 \
llama-index-llms-openai==0.1.19 \
llama-index-agent-openai==0.2.5 \
llama-index-core==0.10.37

Version

Newest, but I tried downgrading

Steps to Reproduce

Run https://docs.llamaindex.ai/en/stable/examples/agent/introspective_agent_toxicity_reduction/ https://www.youtube.com/watch?v=OLj5MFNHP0Q

Relevant Logs/Tracebacks

No response

dosubot[bot] commented 3 days ago

The issue you're encountering with the ToolInteractiveReflectionAgentWorker not performing corrective reflection seems to be related to the initialization of the main_agent_worker. If main_agent_worker is None, it leads to an UnboundLocalError because original_response is not assigned a value in that case [1].

To resolve this, ensure that main_agent_worker is properly initialized before it is used in the run_step method. This involves setting up the main_agent_worker with the necessary tools and configurations as shown in the example you are following. Double-check that all dependencies are correctly installed and that the setup code is executed without errors.
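
For reference, here is a minimal sketch of that wiring based on the linked notebook; parameter names follow the example, but treat exact signatures and the template string as approximate:

    from llama_index.agent.introspective import (
        IntrospectiveAgentWorker,
        ToolInteractiveReflectionAgentWorker,
    )
    from llama_index.agent.openai import OpenAIAgentWorker
    from llama_index.core.agent import FunctionCallingAgentWorker
    from llama_index.llms.openai import OpenAI

    # Worker that critiques responses by calling your critique tool(s).
    critique_agent_worker = FunctionCallingAgentWorker.from_tools(
        tools=[],  # your critique tool goes here
        llm=OpenAI("gpt-3.5-turbo"),
        verbose=True,
    )

    reflection_worker = ToolInteractiveReflectionAgentWorker.from_defaults(
        critique_agent_worker=critique_agent_worker,
        critique_template="Critique the following text: {input_str}",  # placeholder
        correction_llm=OpenAI("gpt-4"),
        verbose=True,
    )

    # The crucial part: supply a main agent worker rather than None.
    main_agent_worker = OpenAIAgentWorker.from_tools(
        tools=[], llm=OpenAI("gpt-4"), verbose=True
    )

    agent = IntrospectiveAgentWorker.from_defaults(
        reflective_agent_worker=reflection_worker,
        main_agent_worker=main_agent_worker,
        verbose=True,
    ).as_agent(verbose=True)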

Regarding the inconsistent behavior across different devices, it might be due to differences in the environment or configurations. Ensure that both devices have the same versions of all dependencies and that the environment variables and configurations are identical.

If the problem persists, consider adding logging or print statements to verify the initialization and state of main_agent_worker and other critical variables at runtime. This can help identify where the setup might be failing.
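
For instance, a quick hypothetical check placed just before the introspective worker is constructed (names assume the notebook's setup):

    # Fail fast if the main worker was never built; a None here reproduces
    # the UnboundLocalError from the traceback above.
    assert main_agent_worker is not None, "main_agent_worker was never initialized"
    print("main agent worker:", type(main_agent_worker).__name__)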

To continue talking to Dosu, mention @dosu.

BurnyCoder commented 3 days ago

@dosu Hello. I ran exactly the code from this example line by line, only with different tool contents (which shouldn't matter): https://docs.llamaindex.ai/en/stable/examples/agent/introspective_agent_toxicity_reduction/ https://www.youtube.com/watch?v=OLj5MFNHP0Q

But I see no LLM critique responses in the terminal:

=== LLM Response ===
Hello! How can I assist you today?
Critique: Hello! How can I assist you today?
Correction: HTTP traffic consisting solely of POST requests is considered suspicious for several reasons:

with no correction actually happening in the two-agent communication.

I tried downgrading to the LlamaIndex versions from when that example was written, but I get the same behavior:

pip install --upgrade --force-reinstall \
llama-index-agent-introspective==0.1.0 \
llama-index-llms-openai==0.1.19 \
llama-index-agent-openai==0.2.5 \
llama-index-core==0.10.37

dosubot[bot] commented 3 days ago

To address the issue of not seeing LLM critique responses and corrections in the terminal, ensure the following:

  1. Initialization of main_agent_worker: Make sure that main_agent_worker is properly initialized. If it is None, the original_response will not be set, leading to errors. Ensure that the main_agent_worker is correctly set up with the necessary tools and configurations [1].

  2. Verbose Mode: Enable verbose mode to get detailed logs of the operations. This can help you see what the agent is doing and where it might be failing. You can set the verbose parameter to True when initializing the agent workers [1]; a sketch of how to then inspect the resulting conversation follows this list.

  3. Environment Consistency: Verify that both devices have the same environment setup, including the same versions of Python, LlamaIndex, and all dependencies. Differences in environment configurations can lead to inconsistent behavior.

  4. Logging and Debugging: Use logging to trace the flow of data. Implement logging as shown below to capture detailed information about the execution:

    import logging
    import sys
    
    # Route DEBUG-level logs to stdout so the agent's internal steps
    # (including critique and correction calls) show up in the terminal.
    logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
    logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

  5. Check Input and Output: Ensure that the input text is appropriate for the task and that the agent is set up to handle the specific type of input you are providing. Double-check the implementation details to ensure that the input is being processed correctly.
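
A minimal way to check whether the correction loop actually ran, assuming agent is the runner returned by as_agent() in the linked notebook's setup (the chat input below is a hypothetical placeholder):

    # Print the final answer, then walk the recorded chat history to see
    # whether a corrected response ever replaced the original one.
    response = agent.chat("Here is some text to make less toxic ...")
    print("final:", response.response)
    for msg in agent.chat_history:
        print(msg.role, "->", str(msg.content)[:120])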

If these steps do not resolve the issue, consider reviewing the setup code for any discrepancies or errors that might be affecting the agent's ability to perform critiques and corrections.

To continue talking to Dosu, mention @dosu.