Azure-Samples / contoso-chat

This sample walks through the full end-to-end process of creating a RAG application with Prompt Flow and AI Studio. It includes GPT-3.5 Turbo LLM application code, evaluations, deployment automation with the AZD CLI, GitHub Actions for evaluation and deployment, and intent mapping for multiple LLM tasks.
MIT License

Endpoint test fails due to missing API key #132

Closed phoevos closed 2 weeks ago

phoevos commented 4 weeks ago

After completing the deployment, I can see the endpoint in AI Studio. Following the instructions in the README, I proceed to test it with the following input:

{"question": "Tell me about hiking shoes", "customerId": "2", "chat_history": []}

This input produces an error, however, triggered during the instantiation of the AzureOpenAIModelConfiguration object, as seen in the logs:

"Execution failure in 'get_response': (InvalidConnectionError) AzureOpenAIModel parameters are incomplete. Please ensure azure_endpoint, api_version, and api_key are provided."
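The error indicates that the model configuration requires all three of endpoint, API version, and API key before it can be built. A minimal sketch of that kind of completeness check (a hypothetical helper, not the actual promptflow source; the environment variable names are assumptions):

```python
import os

# Settings assumed to be required by the model configuration (names hypothetical).
REQUIRED = ("AZURE_OPENAI_ENDPOINT", "OPENAI_API_VERSION", "AZURE_OPENAI_API_KEY")

def missing_openai_settings(env=os.environ):
    """Return the names of required Azure OpenAI settings that are unset or empty.

    Mimics the completeness check behind the InvalidConnectionError above.
    """
    return [name for name in REQUIRED if not env.get(name)]

# Reproduce the reported state: endpoint and version set, API key absent.
missing = missing_openai_settings({
    "AZURE_OPENAI_ENDPOINT": "https://example.openai.azure.com",
    "OPENAI_API_VERSION": "2024-02-01",
})
print(missing)  # → ['AZURE_OPENAI_API_KEY']
```

With the key missing from the environment, the check fails in exactly the way the log message describes.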

Although we do explicitly provide the version and endpoint here, we don't do the same for the API key. In fact, inspecting my local azd environment, the value of AZURE_OPENAI_API_KEY is not set. This feels like something that should be handled by the postprovision script. Digging deeper, I noticed that a relevant-looking key (is it the same one?) used to be set among others here prior to the May 2024 updates. Why was this dropped?
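A postprovision hook could fetch the key and persist it into the azd environment along the following lines. This is a sketch only; the variable names (AZURE_OPENAI_NAME, AZURE_RESOURCE_GROUP) are assumptions and may not match the ones this repo's infrastructure exports:

```shell
# Hypothetical postprovision fragment: look up the Azure OpenAI account key
# and store it in the azd environment so the flow can read it at runtime.
OPENAI_KEY=$(az cognitiveservices account keys list \
  --name "$AZURE_OPENAI_NAME" \
  --resource-group "$AZURE_RESOURCE_GROUP" \
  --query key1 --output tsv)
azd env set AZURE_OPENAI_API_KEY "$OPENAI_KEY"
```

Running `azd env get-values` afterwards should then show AZURE_OPENAI_API_KEY populated.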

I'm currently using the following package versions:

prompt_toolkit==3.0.45
promptflow==1.10.0
promptflow-azure==1.10.0
promptflow-core==1.10.0
promptflow-devkit==1.10.0
promptflow-evals==0.3.0
promptflow-tools==1.4.0
promptflow-tracing==1.10.0
cassiebreviu commented 2 weeks ago

This should be fixed now.