run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Bug]: Error: NotFoundError - Resource not found while implementing llama index with Azure OpenAI for fine-tuning #12685

Closed: loyal812 closed this issue 4 months ago

loyal812 commented 7 months ago

Bug Description

While implementing LlamaIndex with Azure OpenAI for fine-tuning, the following error occurred:

Retrying llama_index.llms.openai.base.OpenAI._chat in 0.6394267984578837 seconds as it raised NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}.

Version

0.10.22

Steps to Reproduce

Code snippet:

# Perform query to retrieve the contexts and answers for the generated questions
from llama_index.llms.openai import OpenAI  # import implied by the retry log above

self.llm = OpenAI(
    api_base=self.azure_openai_endpoint,
    api_key=self.azure_openai_api_key,
    api_version=self.azure_openai_api_version,
    model=self.model,
    temperature=self.temperature
)

query_engine = index.as_query_engine(similarity_top_k=2, llm=self.llm)

Additional Information: The error is a NotFoundError with HTTP status 404, indicating that the requested resource was not found.

Problem Statement: The error occurs while executing the query against Azure OpenAI; the message indicates that the requested resource cannot be found.

Solution Required: Please provide guidance on how to resolve this issue as I am unsure how to proceed.

Relevant Logs/Tracebacks

No response

dosubot[bot] commented 7 months ago

Hey there, @loyal812! Great to see you diving into LlamaIndex again. Hope you're doing well!

The NotFoundError you're encountering suggests an issue with accessing the required resource on Azure OpenAI. This can happen for several reasons, such as incorrect API credentials, an invalid model name, or the model being unavailable. Here's how you can address this:

  1. Verify API Credentials: Ensure your API key and endpoint URL are correctly set. Mistakes here are common sources of 404 errors.

  2. Check Model Name: Confirm the model name you're using is valid and available in Azure OpenAI. An incorrect or discontinued model name could lead to this error.

  3. API Version: Make sure the API version you're using supports the operations you're attempting. Incompatibilities here can cause unexpected errors.

  4. Network and Service Status: Temporary network issues or Azure OpenAI service disruptions could also be the cause. Check Azure's service status for any known issues.

  5. Correct API Endpoints: A 404 error can occur if the endpoint URL is incorrect or the resource doesn't exist. Ensure you're using the correct endpoints for your operations. A standalone check that exercises all of the points above is sketched just below this list.
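
If it's not obvious which of these is the culprit, one quick way to narrow it down is to call the Azure deployment directly with the `openai` Python package, bypassing LlamaIndex entirely. This is only a sketch: the endpoint, key, API version, and deployment name are placeholders to replace with your own values.

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="YOUR_AZURE_OPENAI_API_KEY",
    api_version="2023-07-01-preview",
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com/",
)

# Use the *deployment* name here, not the underlying model name.
# A 404 on this call points at the endpoint or deployment name;
# a normal reply means the problem is in the LlamaIndex configuration.
response = client.chat.completions.create(
    model="YOUR_CHAT_DEPLOYMENT_NAME",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)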

If these steps don't resolve the issue, consider contacting Azure OpenAI support for further assistance. Additionally, there was a similar issue reported in the LlamaIndex repository, where using the set_global_service_context function was suggested as a solution. You might find this approach useful:

from llama_index.core import set_global_service_context  # import path for llama-index 0.10.x
set_global_service_context(merging_context)  # merging_context: a ServiceContext configured with your Azure models

More details on this can be found in the issue thread on GitHub.
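
For context, `merging_context` in that snippet is a ServiceContext built earlier in the linked thread; it isn't defined in this issue. As a rough sketch only (placeholder deployment names and endpoint, and assuming the llama-index-llms-azure-openai and llama-index-embeddings-azure-openai packages are installed), it could look like this on 0.10.x:

from llama_index.core import ServiceContext, set_global_service_context
from llama_index.llms.azure_openai import AzureOpenAI
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding

# Both the LLM and the embedding model must point at Azure deployments;
# otherwise one of the two calls will still hit a non-Azure endpoint.
llm = AzureOpenAI(
    model="gpt-35-turbo",
    deployment_name="YOUR_CHAT_DEPLOYMENT_NAME",
    api_key="YOUR_AZURE_OPENAI_API_KEY",
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com/",
    api_version="2023-07-01-preview",
)
embed_model = AzureOpenAIEmbedding(
    model="text-embedding-ada-002",
    deployment_name="YOUR_EMBEDDING_DEPLOYMENT_NAME",
    api_key="YOUR_AZURE_OPENAI_API_KEY",
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com/",
    api_version="2023-07-01-preview",
)

merging_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
set_global_service_context(merging_context)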

Lastly, you're using LlamaIndex version 0.10.22, but the latest version is 0.10.28. Upgrading might resolve your issue, though it's advisable to check the release notes or changelog for specific changes that could be relevant to your problem.

loyal812 commented 7 months ago

@dosu-bot

I've already successfully generated training and evaluation data using Azure OpenAI, but I'm getting an error when I try to use the query engine. Could you please review this again?

dosubot[bot] commented 7 months ago

Given that you've successfully generated training and evaluation data using Azure OpenAI but are encountering issues with the query engine, it's clear the problem is more specific than initially addressed. The NotFoundError with code 404 when using the query engine suggests a more targeted issue, possibly related to how the query engine interacts with Azure OpenAI's resources or an incorrect configuration specific to the query engine setup.

Here are a few targeted steps to troubleshoot and potentially resolve this issue:

  1. Ensure Correct API Endpoint and Model Availability: Double-check that the API endpoint and the model you're attempting to use with the query engine are correct. The error could stem from an incorrect endpoint or model name, leading to the resource not found error.

  2. Verify API Key and Configuration: Confirm that your API key is correctly configured for the query engine's use. An incorrect or improperly set API key could prevent access to Azure OpenAI's resources.

  3. Resource Path and API Version Compatibility: Make sure the path to the resource you're trying to access is correct and that the API version you're using is compatible with the operations you're attempting. This is crucial for accessing specific resources or models within Azure OpenAI.

  4. Consult Azure OpenAI Documentation or Support: If the issue persists after checking the above points, consulting the Azure OpenAI documentation or reaching out to their support might provide more insight into the specific resource access issue you're facing.

Additionally, a similar issue was addressed in the LlamaIndex repository where using the set_global_service_context function was suggested as a solution. This approach was specifically mentioned in discussions and issues related to resource access errors with Azure OpenAI. Here's a quick snippet for reference:

from llama_index.core import set_global_service_context  # import path for llama-index 0.10.x
set_global_service_context(merging_context)  # merging_context: a ServiceContext configured with your Azure models

This solution was highlighted in discussions around using the NLSQLTableQueryEngine with AzureOpenAI and might be relevant to your situation. For more detailed guidance, reviewing the specific discussions and issues on the LlamaIndex repository could provide further insights.

Please try these steps and let me know if you encounter any further issues. Providing more details about the specific Azure OpenAI resource you're trying to access with the query engine could also help in pinpointing the exact cause of the problem.
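
One more option worth noting: on llama-index 0.10.x the Settings object supersedes the global service context, so the same idea can be expressed without ServiceContext at all. This is only a sketch; `llm` and `embed_model` are assumed to be the AzureOpenAI and AzureOpenAIEmbedding instances from the earlier sketch in this thread, and `index` is the index from the original report.

from llama_index.core import Settings

# Settings is the 0.10.x replacement for the global service context: anything
# that isn't given an llm/embed_model explicitly falls back to these defaults.
Settings.llm = llm
Settings.embed_model = embed_model

# The query engine from the original snippet then uses the Azure-backed models
# for both retrieval (embeddings) and response synthesis.
query_engine = index.as_query_engine(similarity_top_k=2)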

logan-markewich commented 7 months ago

@loyal812 please double check that you've specified the correct model name that you've deployed.

Here's an example that works for me

from llama_index.llms.azure_openai import AzureOpenAI

aoai_api_key = "YOUR_AZURE_OPENAI_API_KEY"
aoai_endpoint = "YOUR_AZURE_OPENAI_ENDPOINT"
aoai_api_version = "2023-07-01-preview"

llm = AzureOpenAI(
    model="gpt-35-turbo",
    deployment_name="YOUR_AZURE_OPENAI_COMPLETION_DEPLOYMENT_NAME",
    api_key=aoai_api_key,
    azure_endpoint=aoai_endpoint,
    api_version=aoai_api_version,
)
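
As a hedged usage sketch (not part of the comment above): the llm can then be passed to the query engine exactly as in the original report, assuming `index` already exists and was built with an Azure-backed embedding model.

# `llm` is the AzureOpenAI instance from the example above; `index` is assumed
# to be an existing index whose embedding model also points at Azure.
query_engine = index.as_query_engine(similarity_top_k=2, llm=llm)
response = query_engine.query("Example question about the ingested documents")
print(response)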