Closed jerinjoseph1 closed 6 months ago
Hey there, @jerinjoseph1! 👋 Great to see you diving into another adventure with LlamaIndex. How's the coding marathon going?
Based on the error message you provided, it seems the issue is related to parsing the output of the `ReActAgent`. Here are some steps you can take to troubleshoot it:
**Check JSON Input Format:** Ensure that the JSON input provided to the tool call is correctly formatted. Malformed JSON is a common source of `ValueError`, as seen in the `default_tool_call_parser` and `advanced_tool_call_parser` functions. These functions attempt to parse the JSON input and will raise a `ValueError` if the JSON is malformed or does not conform to the expected patterns.
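For intuition, here is a minimal sketch (not LlamaIndex's actual code) of why malformed tool-call arguments surface as a `ValueError`: `json.JSONDecodeError` is a subclass of `ValueError`, so any parser built on `json.loads` raises one on bad input.

```python
import json

# A minimal sketch of a JSON-based argument parser. json.JSONDecodeError
# is a subclass of ValueError, so malformed input surfaces as ValueError.
def parse_tool_arguments(arguments: str) -> dict:
    try:
        return json.loads(arguments)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Could not parse tool arguments: {arguments!r}") from exc

print(parse_tool_arguments('{"city": "Paris"}'))   # well-formed JSON parses fine
try:
    parse_tool_arguments("{'city': 'Paris'}")      # single quotes are not valid JSON
except ValueError as exc:
    print("ValueError:", exc)
```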
**Verify Tool Name Exists:** Ensure that the tool name specified in the tool call exists within the tools available to the `ReActAgent`. The `get_function_by_name` function is used to retrieve the tool by name, and it will raise a `ValueError` if the specified tool name does not exist in the list of tools provided to the `ReActAgent`.
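As a hypothetical illustration of this kind of lookup (the real `get_function_by_name` in LlamaIndex may differ in detail, but the failure mode is the same: an unknown name raises `ValueError`):

```python
# Hypothetical sketch of a name-based tool lookup, not LlamaIndex's exact code.
def get_function_by_name(tools: list, name: str):
    name_to_tool = {tool.__name__: tool for tool in tools}
    if name not in name_to_tool:
        raise ValueError(f"Tool with name {name} not found")
    return name_to_tool[name]

def multiply(a: int, b: int) -> int:
    """A toy tool."""
    return a * b

fn = get_function_by_name([multiply], "multiply")
print(fn(6, 7))  # 42
```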
**Review Error Handling in Tool Calls:** The `call_tool_with_error_handling` function is designed to handle errors gracefully during tool execution. If the error persists despite correct JSON input and an existing tool name, consider setting the `raise_error` flag to `True` to get more detailed error information, which can help in diagnosing the problem.
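A toy sketch of this wrapper pattern (again, not the exact LlamaIndex implementation): with `raise_error=False` a tool failure is swallowed into an error string, while `raise_error=True` lets the original exception propagate with its full traceback.

```python
# Sketch of an error-handling wrapper around tool execution.
def call_tool_with_error_handling(fn, args, error_message=None, raise_error=False):
    try:
        return fn(*args)
    except Exception as exc:
        if raise_error:
            raise  # re-raise the original exception for debugging
        return error_message or f"Encountered error: {exc}"

def divide(a: float, b: float) -> float:
    return a / b

print(call_tool_with_error_handling(divide, (1, 0)))  # returns an error string
# call_tool_with_error_handling(divide, (1, 0), raise_error=True)  # would raise
```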
**Check Tool Call Parser:** If you are using a custom tool call parser by specifying the `tool_call_parser` parameter in the `OpenAIAgentWorker`, ensure that the parser correctly handles the input format you are using. Misinterpretation by the parser can lead to a `ValueError`.
**Debugging and Logging:** Increase the logging level to get more detailed logs that might help identify the issue. For example, change `logger.setLevel(logging.WARNING)` to a more verbose level like `logging.DEBUG` to get more detailed output.
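With Python's standard `logging` module that looks like this (the logger name `llama_index` is an assumption; adjust it to the logger your version actually uses):

```python
import logging

# Send DEBUG-level records to stderr so the agent's intermediate steps
# (thoughts, raw LLM output) show up in the logs.
logging.basicConfig(level=logging.DEBUG)

# "llama_index" is an assumed logger name; substitute the one your
# installed version uses.
logger = logging.getLogger("llama_index")
logger.setLevel(logging.DEBUG)  # instead of logging.WARNING
```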
You can find more details about these functions in the LlamaIndex codebase.
Also, it's worth noting that you're using LlamaIndex version v0.10.20, and there have been several updates since then. The latest version is v0.10.25, and it includes several bug fixes and improvements. You can check the changelog for more details. Updating to the latest version might help resolve the issue.
If the problem persists after trying these steps, please provide more details about the error, such as the full stack trace and any additional logs. This will help in diagnosing the problem more accurately.
@jerinjoseph1 do you have the LLM configured to only output a small number of tokens? (It looks like the output got cut off, tbh.)
Either that, or the LLM just didn't follow the instructions for the output format, and the ReAct loop couldn't be parsed.
Yes, you're right. It was a mistake on my part; I missed the number-of-tokens part. Passing the `context_window` argument to `OpenAILike` resolved the issue.
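For readers hitting the same error, a sketch of the resolved setup (model and server details are taken from the report below; the `context_window` value is illustrative, not the reporter's actual setting):

```python
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    model="TheBloke/Mistral-7B-Instruct-v0.1-AWQ",
    api_base="https://myserver.net/v1",
    api_key="fake",
    # Tell LlamaIndex the model's real context size; without this the
    # default may cut off the ReAct output mid-thought.
    context_window=8192,  # illustrative value; use your model's actual limit
)
```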
Bug Description
I assigned an open ai compatible server using OpenAILike using:
```python
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    model="TheBloke/Mistral-7B-Instruct-v0.1-AWQ",
    api_base="https://myserver.net/v1",
    api_key="fake",
)
```
This successfully gives outputs when questions are asked using the `complete` method. For example:
```python
response = llm.complete("Who is the ceo of google")
print(str(response))
```
This successfully gives the result: `? Sundar Pichai is the CEO of Google.` But when an agent is created as described in the LlamaIndex docs (https://docs.llamaindex.ai/en/stable/examples/agent/multi_document_agents-v1/?h=fnretrieveropenaiagent) and we query something, an error pops up:

```
ValueError: Could not parse output: Thought: The current language of the user is: en. I need
```
The sample query I gave for my custom docs is:

```python
top_agent = ReActAgent.from_tools(
    tool_retriever=obj_index.as_retriever(similarity_top_k=3),
    system_prompt="""\
You are an agent designed to answer informations about Reflexa.
Please always use the tools provided to answer a question. Do not rely on prior knowledge.\
""",
    llm=llm,
    verbose=True,
)
response = top_agent.query("Who created the company document for Reflexa?")
```
Version
v0.10.20
Steps to Reproduce
I followed this tutorial https://github.com/sugarforever/Advanced-RAG/blob/main/03_llama_index_multi_doc_agent.ipynb but replaced the LLM with the `OpenAILike` module from llama-index.
Relevant Logs/Tracebacks
No response