Closed ramakrse closed 3 weeks ago
š¤
Hey @ramakrse! Great to see you diving into another challenge with LlamaIndex. What's cooking this time?
Based on the error logs you've provided, it seems like the tools
argument in the chat
method of the llama_index/agent/openai/step.py
file is causing the issue. The tools
argument is a list of BaseTool
instances used to perform various operations within the LlamaIndex framework. If the function name provided does not exist in the tools
list, a ValueError
is raised, which could be interpreted as a 404 error in a web context.
To fix the error, ensure that the tool name provided exists in the tools
list. Here is the relevant code:
def get_function_by_name(tools: List[BaseTool], name: str) -> BaseTool:
"""Get function by name."""
name_to_tool = {tool.metadata.name: tool for tool in tools}
if name not in name_to_tool:
raise ValueError(f"Tool with name {name} not found")
return name_to_tool[name]
You can find this in the llama-index-legacy/llama_index/legacy/agent/openai/step.py file.
Additionally, there are a few similar issues in the LlamaIndex repository that have been resolved. You might find the discussions and solutions helpful:
These issues suggest using the correct Azure API version, the latest version of llama-index, and ensuring that your AzureOpenAI endpoint is set up correctly.
Unfortunately, I couldn't find any changes made in the LlamaIndex codebase between version 0.10.6 and the latest version 0.10.12 that could potentially resolve the openai.NotFoundError: Error code: 404
error. It might be worth trying to update to the latest version to see if the issue persists.
If you continue to experience this issue, could you provide more information about your setup? Specifically, the tool names you're using and how you're setting up your tools
list could be helpful for further troubleshooting.
With azure openai, the minimum api version you need is 2023-07-01-preview
Microsoft has been quite slow to adapt to openais tools api
@logan-markewich for your quick response.
I am using version 2023-12-01-preview
and it throws error
@dosu-bot, I am using simple high level api index.as_chat_engine
not agent with tools.
š¤
Based on the context provided, it seems like you're encountering an openai.NotFoundError: Error code: 404
error due to an unrecognized request argument 'tools'. This error typically occurs when the requested resource is not found. In this case, it seems like the OpenAI model "gpt-4" you're trying to use is not available or not found.
As of the current OpenAI API version 2023-12-01-preview
, there is no "gpt-4" model available. The available models are "gpt-3", "davinci", "curie", "babbage", and "ada".
To fix this issue, you should use an available model. Here's how you can modify your code:
from llama_index import OpenAI, SimpleDirectoryReader, VectorStoreIndex
llm = OpenAI(model="gpt-3") # Use an available model
data = SimpleDirectoryReader(input_dir="./data/paul_graham/").load_data()
index = VectorStoreIndex.from_documents(data)
chat_engine = index.as_chat_engine(chat_mode="best", llm=llm, verbose=True)
response = chat_engine.chat("What are the first programs Paul Graham tried writing?")
print(response)
Please replace "gpt-3" with the model you have access to and try again. You can refer to the chat_engine_best.ipynb example for more details.
Regarding the changes between LlamaIndex library version 0.10.6 and the latest version v0.10.12, I wasn't able to find specific changes that could potentially resolve the openai.NotFoundError: Error code: 404
error. However, it's generally recommended to use the latest version of the library to benefit from the latest features and bug fixes.
If you continue to experience issues, please provide more details about your setup and the exact error message you're seeing. This will help in diagnosing the problem more accurately.
@ramakrse are you using gpt-4? I've seen the base gpt-4 version throw this on azure, need to use a preview version of gpt-4 I think
@logan-markewich, I am using gpt-35-turbo
. I found one mistake that while setting llm using AzureOpenAI, there is one parameter engine
missed. I fixed that.. Now it throws he below error ValueError: Tool with name functions not found
File "/Users/localuser/Documents/AnacondaProjects/genAI2/azure_llm/maintenance_chat_assistance_V2.py", line 260, in main
response = chat_engine.chat(prompt)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/core/callbacks/utils.py", line 41, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/core/agent/runner/base.py", line 575, in chat
chat_response = self._chat(
^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/core/agent/runner/base.py", line 520, in _chat
cur_step_output = self._run_step(
^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/core/agent/runner/base.py", line 372, in _run_step
cur_step_output = self.agent_worker.run_step(step, task, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/core/callbacks/utils.py", line 41, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/agent/openai/step.py", line 572, in run_step
return self._run_step(
^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/agent/openai/step.py", line 469, in _run_step
self._call_function(
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/agent/openai/step.py", line 356, in _call_function
EventPayload.TOOL: get_function_by_name(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/agent/openai/step.py", line 50, in get_function_by_name
raise ValueError(f"Tool with name {name} not found")
ValueError: Tool with name functions not found
@dosu-bot, I tried with the chat_engine_best.ipynb example.
I have setup the azure openai Settings as per documentation. During the process, I have figured that llm, we need to use engine
. After that, I get error 404 goes away. Now, new error ValueError: Tool with name functions not found
File "/Users/localuser/Documents/AnacondaProjects/genAI2/azure_llm/paul.py", line 104, in main
response = chat_engine.chat("What are the first programs Paul Graham tried writing?")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/core/callbacks/utils.py", line 41, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/core/agent/runner/base.py", line 575, in chat
chat_response = self._chat(
^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/core/agent/runner/base.py", line 520, in _chat
cur_step_output = self._run_step(
^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/core/agent/runner/base.py", line 372, in _run_step
cur_step_output = self.agent_worker.run_step(step, task, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/core/callbacks/utils.py", line 41, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/agent/openai/step.py", line 572, in run_step
return self._run_step(
^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/agent/openai/step.py", line 469, in _run_step
self._call_function(
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/agent/openai/step.py", line 356, in _call_function
EventPayload.TOOL: get_function_by_name(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/genAI2/lib/python3.11/site-packages/llama_index/agent/openai/step.py", line 50, in get_function_by_name
raise ValueError(f"Tool with name {name} not found")
š¤
Based on the error traceback you provided, it seems like the get_function_by_name
function is trying to find a tool named "functions" but it's not finding any, hence the ValueError: Tool with name functions not found
.
This could be due to a few reasons:
tools
list.To resolve this issue, you need to ensure that the tool with the name "functions" exists in the tools
list. Here's how you can do it:
Check the tools
list to see if the tool with the name "functions" exists. You can print out the tools
list to see all the available tools and their names.
If the tool exists, make sure that the name you're providing matches exactly with the name of the tool in the tools
list. Remember that Python is case-sensitive.
Here's a sample code snippet to print out the tools
list:
for tool in tools:
print(tool.metadata.name)
This will print out the names of all the tools in the tools
list. You can then check if the tool you're trying to access exists and if the name you're providing matches exactly with the name of the tool.
In addition, I found two similar issues in the LlamaIndex repository that might be helpful:
The first issue suggests modifying the code to include a check that ensures the tool exists before trying to call it. The second issue doesn't provide a specific solution for the error, but it might be worth checking out the discussion for any insights.
I hope this helps! If you have any other questions or need further clarification, feel free to ask.
@dosu-bot, I am using index.as_chat_engine method. This method does call for setting up tools. your recommendation is not sutiable.
š¤
Based on the error traceback you provided, it seems like the OpenAIAgentWorker
is trying to call a tool named "functions" that it can't find. This is likely causing the ValueError: Tool with name functions not found
you're encountering.
The get_function_by_name
function, which is raising this error, is designed to retrieve a specific tool from a list of tools by its name. If the tool is not found in the list, it raises a ValueError
. This suggests that the tool named "functions" is not included in the list of tools you're providing to the OpenAIAgentWorker
instance, or there's a typo or case-sensitivity issue with the tool name.
There are a few potential solutions to this issue based on similar solved issues in the LlamaIndex repository:
Modify the code to include a check that ensures the tool exists before trying to call it. This will prevent the agent from crashing when it tries to call a non-existent tool. You can refer to the solution provided in the issue #7170 for more details.
Ensure you're using the correct Azure API version. Update the api_version
to "2023-07-01-preview" or the latest version supported by your Azure deployment. Verify that your AzureOpenAI endpoint is set up correctly and that it accepts the tools
and tool_choice
arguments. You can refer to the solution provided in the issue #9195 for more details.
Openai updated their API. The correct kwarg is tool_choice="query_engine_tool"
. You can refer to the solution provided in the issue #9315 for more details.
Update LlamaIndex to the latest version if you haven't already. The latest version of LlamaIndex is v0.10.12.
If none of these solutions work, please provide more specific information about how you're initializing and using the OpenAIAgentWorker
and the tools you're providing. This will help us better understand the issue and provide a more precise solution.
@dosu-bot. Any work around available? If not give the steps of low-level APIs to set up the chat engine for Azure Openai using 0.10.6 instead of index.as_chat_engine high level API
@dosu-bot. Any work around available? If not give the steps of low-level APIs to set up the chat engine for Azure Openai using 0.10.6 instead of index.as_chat_engine high level API
š¤
Based on the issues you've encountered and the context provided, it seems like the error you're experiencing is due to the tool named "functions" not being recognized. This could be due to a misconfiguration or a typo in the tool's name.
There are two similar solved issues in the LlamaIndex repository that might be helpful:
[Bug]: Agent sometimes tries to use tools that do not exist and then crashes suggests adding a check to ensure that the tool exists before trying to call it. This involves catching the ValueError
and skipping the tool call if the tool does not exist, to prevent a crash. Another user suggested telling the OpenAI agent to stop prepending "functions." to the actual tool name in the system prompt, which resolved a similar issue for them.
[Bug]: #2129 was closed as resolved but I am still having the same error even with a completely new install and container environment, and others are reporting the same error on discord discusses a similar error and might provide some additional context or solutions.
As for setting up the chat engine using the low-level APIs, you can follow the pattern demonstrated in the test_from_openai_messages_function_calling_azure
function. This function shows how to manually convert messages from the Azure OpenAI format to the LlamaIndex format using fixtures that simulate the Azure OpenAI message structure and the corresponding LlamaIndex chat messages.
Here's a simplified example:
# Assuming azure_openai_message_dicts_with_function_calling is a list of Azure OpenAI messages
azure_openai_messages = azure_openai_message_dicts_with_function_calling()
# Convert Azure OpenAI messages to LlamaIndex chat messages
llama_index_chat_messages = from_openai_messages(azure_openai_messages)
# Use the converted LlamaIndex chat messages in your chat engine
# This step depends on your chat engine's API and how it processes messages
process_chat_messages(llama_index_chat_messages)
Unfortunately, I couldn't find any specific changes between LlamaIndex version 0.10.6 and the latest version v0.10.13.post1 that could potentially resolve the ValueError: Tool with name functions not found
error. It's possible that the answer may be available elsewhere or I could have missed it.
I hope this helps! If you have any more questions or need further clarification, feel free to ask.
@dosu-bot. high-level api index.as_chat_engine()
is failing, please give low-level API for chat for azure openai model using settings.
š¤
Based on the context provided, it seems like you're encountering an issue with the high-level API index.as_chat_engine()
. As a workaround, you can use the low-level APIs to set up the chat engine for the Azure OpenAI model in LlamaIndex version 0.10.6. Here's how you can do it:
AzureOpenAIMultiModal
class by providing the necessary parameters, including the engine
parameter which corresponds to the custom name you chose for your deployment when you deployed a model on Azure OpenAI. Ensure you have set the required environment variables (OPENAI_API_VERSION
, AZURE_OPENAI_ENDPOINT
, and AZURE_OPENAI_API_KEY
).from llama_index.legacy.multi_modal_llms.azure_openai import AzureOpenAIMultiModal
# Initialize the AzureOpenAIMultiModal with necessary parameters
azure_openai = AzureOpenAIMultiModal(
engine="your_engine_name", # Replace with your engine name
azure_endpoint="https://YOUR_RESOURCE_NAME.openai.azure.com/", # Replace with your Azure endpoint
use_azure_ad=True # Set to True if using Microsoft Entra ID for token authentication
)
Authentication: The class handles authentication automatically using the provided use_azure_ad
flag. If use_azure_ad
is set to True
, it will use Microsoft Entra ID (formerly Azure AD) for token authentication. The _get_credential_kwargs
method prepares the necessary keyword arguments for authentication, including refreshing the Azure AD token if needed.
Making Requests: To make requests to the Azure OpenAI API, you can use the low-level httpx.Client
directly with the credentials and endpoints configured in the AzureOpenAIMultiModal
instance. You will need to manually construct the request payload according to the Azure OpenAI API documentation and use the http_client
attribute of your AzureOpenAIMultiModal
instance to send requests.
import httpx
# Example payload for a chat completion request
payload = {
"model": "your_model_name", # e.g., "gpt-3.5-turbo"
"prompt": "Your chat prompt here",
"max_tokens": 150,
# Add other necessary parameters according to the Azure OpenAI API documentation
}
# Use the http_client from the AzureOpenAIMultiModal instance to make the request
response = azure_openai.http_client.post(
url=f"{azure_openai.azure_endpoint}completions", # Adjust the URL path if necessary
headers={"Authorization": f"Bearer {azure_openai.api_key}"},
json=payload
)
# Parse the response
if response.status_code == 200:
data = response.json()
print(data) # Or handle the response data as needed
else:
print(f"Request failed: {response.text}")
This approach allows you to manually handle chat requests and responses using the Azure OpenAI API, bypassing the high-level API issues you encountered. Remember to replace placeholders like "your_engine_name"
, "https://YOUR_RESOURCE_NAME.openai.azure.com/"
, and "your_model_name"
with your actual deployment details.
As for your question about the role of the index.as_chat_engine()
high-level API in the LlamaIndex codebase, it's not explicitly mentioned within the provided context. However, it can be inferred that the role of a high-level API resembling index.as_chat_engine()
would be to facilitate the creation or interaction with a chat engine instance, specifically the ContextChatEngine
class in this case. The ContextChatEngine
class is designed to use a retriever to fetch context, set this context in the system prompt, and then utilize a Large Language Model (LLM) to generate a response, aiming for a fluid chat experience. This involves retrieving relevant information based on the user's message, processing this information, and then generating a response that is contextually aware, leveraging the capabilities of the underlying LLM and the retrieved context to provide meaningful interactions.
You can find more details in the llama-index-core/llama_index/core/chat_engine/context.py file.
I hope this helps! If you have any more questions or need further clarification, feel free to ask.
@dosu-bot. Why are you recommending AzureOpenAIMultiModal? It does not use index that I have... Please relook your response
š¤
I apologize for the confusion. It seems there might have been a misunderstanding. As the maintainer of LlamaIndex, I strive to provide accurate and relevant recommendations. Could you please provide more details about your specific requirements and the context in which AzureOpenAIMultiModal was recommended? This will help me better understand your needs and provide you with the most appropriate guidance.
@dosu-bot, OK.
Sharing the below code
from llama_index import OpenAI, SimpleDirectoryReader, VectorStoreIndex
llm = OpenAI(model="gpt-3") # Use an available model
data = SimpleDirectoryReader(input_dir="./data/paul_graham/").load_data()
index = VectorStoreIndex.from_documents(data)
chat_engine = index.as_chat_engine(chat_mode="best", llm=llm, verbose=True)
response = chat_engine.chat("What are the first programs Paul Graham tried writing?")
print(response)
It is failing and raise ValueError(f"Tool with name {name} not found").
I am looking for work around to get low-level APIs instead of index.as_chat_engine()
š¤
Based on the context provided, it seems like you're encountering a ValueError: Tool with name functions not found
error when trying to set up a chat engine using the index.as_chat_engine()
method in LlamaIndex version 0.10.6. As a workaround, you can use the low-level APIs to directly instantiate one of the chat engine classes provided in the llama_index.core.chat_engine
module.
Here's an example of how you can set up the SimpleChatEngine
:
from llama_index.core.chat_engine.simple import SimpleChatEngine
# Instantiate the SimpleChatEngine
chat_engine = SimpleChatEngine()
# Use the chat engine as needed
# For example, to process a message:
response = chat_engine.process_message("Your message here")
print(response)
You can replace SimpleChatEngine
with any of the other available chat engines (CondenseQuestionChatEngine
, ContextChatEngine
, CondensePlusContextChatEngine
) depending on your specific requirements.
This approach bypasses the need to use a higher-level API that might be causing the ValueError: Tool with name functions not found
error, by directly utilizing the classes defined for chat engine functionality.
Unfortunately, I couldn't find specific documentation detailing the differences between the four chat engine classes (SimpleChatEngine
, CondenseQuestionChatEngine
, ContextChatEngine
, CondensePlusContextChatEngine
) in the llama_index.core.chat_engine
module. It's possible that the differences lie in how each class processes and responds to messages, but without more information, it's difficult to say for certain.
If you encounter any issues or need further clarification, please provide more details so I can assist you better.
Bug Description
I am using version 0.10.6, using
Settings
as per documentation instead ofServiceContext
with Azure OpenAI. it is working well forindex.as_query_engine
.However, when I changed to chate engine, as below
Version
0.10.6
Steps to Reproduce
follows the steps of this documentation page
Relevant Logs/Tracbacks