run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Question]: OpenAILike when querying with ReActAgent gives ValueError: Could not parse output: Thought: The current language of the user is: en. I need #12460

Closed jerinjoseph1 closed 6 months ago

jerinjoseph1 commented 6 months ago

Bug Description

I assigned an OpenAI-compatible server using OpenAILike:

```python
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    model="TheBloke/Mistral-7B-Instruct-v0.1-AWQ",
    api_base="https://myserver.net/v1",
    api_key="fake",
)
```

This successfully gives outputs when questions are asked via the complete method. For example:

```python
response = llm.complete("Who is the ceo of google")
print(str(response))
```

This correctly prints: Sundar Pichai is the CEO of Google.

But when an agent is created as described in the LlamaIndex docs (https://docs.llamaindex.ai/en/stable/examples/agent/multi_document_agents-v1/?h=fnretrieveropenaiagent) and I query something, this error pops up: ValueError: Could not parse output: Thought: The current language of the user is: en. I need

The sample query I ran against my custom docs:

```python
top_agent = ReActAgent.from_tools(
    tool_retriever=obj_index.as_retriever(similarity_top_k=3),
    system_prompt="""\
You are an agent designed to answer questions about Reflexa.
Please always use the tools provided to answer a question. Do not rely on prior knowledge.\
""",
    llm=llm,
    verbose=True,
)
response = top_agent.query("Who created the company document for Reflexa?")
```

[Screenshot 2024-04-01 at 4:07:05 PM showing the traceback]

Version

v0.10.20

Steps to Reproduce

I followed this tutorial https://github.com/sugarforever/Advanced-RAG/blob/main/03_llama_index_multi_doc_agent.ipynb but replaced the LLM with the OpenAILike module from llama-index.

Relevant Logs/Tracebacks

No response

dosubot[bot] commented 6 months ago

Hey there, @jerinjoseph1! 👋 Great to see you diving into another adventure with LlamaIndex. How's the coding marathon going?

Based on the error message you provided, it seems like the issue might be related to the parsing of the output from the ReActAgent. Here are some steps you can take to troubleshoot this issue:

  1. Check JSON Input Format: Ensure that the JSON input provided to the tool call is correctly formatted. This can be a common source of ValueError as seen in the default_tool_call_parser and advanced_tool_call_parser functions. These functions attempt to parse the JSON input and will raise a ValueError if the JSON is malformed or if it does not conform to expected patterns.

  2. Verify Tool Name Exists: Ensure that the tool name specified in the tool call exists within the tools available to the ReActAgent. The get_function_by_name function is used to retrieve the tool by name, and it will raise a ValueError if the specified tool name does not exist in the list of tools provided to the ReActAgent.

  3. Review Error Handling in Tool Calls: The call_tool_with_error_handling function is designed to handle errors gracefully during tool execution. If the error persists despite correct JSON input and an existing tool name, consider setting the raise_error flag to True to get more detailed error information, which can help in diagnosing the problem.

  4. Check Tool Call Parser: If you are using a custom tool call parser by specifying the tool_call_parser parameter in the OpenAIAgentWorker, ensure that the parser correctly handles the input format you are using. Misinterpretation by the parser can lead to ValueError.

  5. Debugging and Logging: Increase the logging level to get more detailed logs that might help identify the issue. You can change logger.setLevel(logging.WARNING) to a more verbose level such as logger.setLevel(logging.DEBUG) to get more detailed output.
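As a minimal sketch of step 5 (using only the standard library `logging` module; the `llama_index` logger name is an assumption based on the package name), you can surface the raw LLM output before the ReAct parser raises its ValueError:

```python
import logging
import sys

# Route all log records to stdout so the raw LLM response is visible
# before the ReAct output parser fails on it.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.getLogger("llama_index").setLevel(logging.DEBUG)
```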

You can find more details about these functions in the LlamaIndex codebase.

Also, it's worth noting that you're using LlamaIndex version v0.10.20, and there have been several updates since then. The latest version is v0.10.25, and it includes several bug fixes and improvements. You can check the changelog for more details. Updating to the latest version might help resolve the issue.

If the problem persists after trying these steps, please provide more details about the error, such as the full stack trace and any additional logs. This will help in diagnosing the problem more accurately.

Sources

#### About Dosu This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
logan-markewich commented 6 months ago

@jerinjoseph1 do you have the LLM configured to only output a small number of tokens? (It looks like the output got cut off, tbh.)

Either that, or the LLM just didn't follow the instructions for the output format, and the ReAct loop output couldn't be parsed.

jerinjoseph1 commented 6 months ago

Yes, you're right. It was a mistake on my part; I missed the token-limit part. Passing the context_window argument to OpenAILike resolved the issue.
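For anyone hitting the same error, here is a sketch of the fix (the `context_window` and `max_tokens` values are illustrative assumptions; set them to match your deployed model's actual limits):

```python
from llama_index.llms.openai_like import OpenAILike

# context_window tells LlamaIndex how many tokens the model supports;
# without it, a too-small default can truncate the ReAct "Thought" text
# mid-sentence, producing the "Could not parse output" ValueError.
llm = OpenAILike(
    model="TheBloke/Mistral-7B-Instruct-v0.1-AWQ",
    api_base="https://myserver.net/v1",
    api_key="fake",
    context_window=8192,  # assumed value; use your model's real limit
    max_tokens=512,       # cap on generated completion length
    is_chat_model=True,
)
```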