microsoft / sample-app-aoai-chatGPT

Sample code for a simple web chat experience through Azure OpenAI, including Azure OpenAI On Your Data.

Response with Stream enabled stops too early #979

Closed: Opitzy closed this issue 3 days ago

Opitzy commented 4 days ago

Describe the bug
I have set up my Azure OpenAI endpoint and AzureCognitiveSearch endpoint in this sample app. As soon as I get a large response with streaming enabled, the response suddenly stops and is cut off.

As soon as I deactivate streaming, the complete response appears.

To Reproduce
Steps to reproduce the behavior:

  1. Set up the sample app with an Azure OpenAI & AzureCognitiveSearch endpoint
  2. Ask a question where you expect a longer response; in my case the full response has 49 lines and 3,730 characters (a question about our documentation)
  3. With stream enabled, the response stops too early
  4. With stream disabled, you get the full response (see the sketch after this list)
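
For reference, a minimal sketch of this comparison, assuming the v1 openai Python SDK and the newer `data_sources`/`azure_search` request shape for On Your Data; every endpoint, key, index name, and deployment name below is a placeholder, not the app's actual configuration:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<azure-openai-key>",
    api_version="2024-02-15-preview",
)

# "On Your Data" request body pointing at a search index; the azure_search
# parameter shape assumes a recent preview API version.
extra_body = {
    "data_sources": [{
        "type": "azure_search",
        "parameters": {
            "endpoint": "https://<your-search>.search.windows.net",
            "index_name": "<index-name>",
            "authentication": {"type": "api_key", "key": "<search-key>"},
        },
    }]
}
messages = [{"role": "user", "content": "Explain the documentation in detail."}]

def ask(stream: bool) -> str:
    """Send the same question with streaming on or off and return the text."""
    resp = client.chat.completions.create(
        model="<deployment-name>",
        messages=messages,
        stream=stream,
        extra_body=extra_body,
    )
    if not stream:
        return resp.choices[0].message.content
    parts = []
    for chunk in resp:
        # Some chunks carry no choices or no delta content; skip those.
        if chunk.choices and chunk.choices[0].delta.content:
            parts.append(chunk.choices[0].delta.content)
    return "".join(parts)

# The reported bug: the streamed answer is far shorter than the non-streamed one.
print(len(ask(stream=False)), "chars without streaming")
print(len(ask(stream=True)), "chars with streaming")
```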

Expected behavior
Even with streaming enabled, I expect the complete answer; the stream should not stop early, including for long answers in combination with AzureCognitiveSearch.


abhahn commented 3 days ago

Hi @Opitzy , thanks for reaching out about your issue.

I am seeing a very similar issue posted here in the OpenAI forums which looks like it may apply here, since we are using the same method described in the issue. Unfortunately it has no answers at this time, but I'm wondering if you could try using the normal chat.completions.create method to see if it improves the streaming use case.

For some context, here is where we are currently making the call to chat completions. We use the with_raw_response method to access the headers returned from the service. In this case we were interested in capturing apim-request-id, which is useful for debugging issues with the service when there are exceptions; for the purposes of comparison you could provide a dummy string value for it.
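
In case it helps, here is a rough sketch of the two call shapes, assuming the v1 openai Python SDK; the client settings, deployment name, and messages are placeholders rather than the app's actual code:

```python
from openai import AzureOpenAI

# Hypothetical client setup; endpoint, key, and API version are placeholders.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<azure-openai-key>",
    api_version="2024-02-15-preview",
)
messages = [{"role": "user", "content": "Summarize the documentation."}]

# Current approach: with_raw_response exposes the HTTP headers, such as
# apim-request-id, which helps correlate failures with the service.
raw = client.chat.completions.with_raw_response.create(
    model="<deployment-name>",
    messages=messages,
    stream=True,
)
apim_request_id = raw.headers.get("apim-request-id")
stream = raw.parse()  # yields the same chunk iterator as a normal call

# Plain variant for comparison: no header access, just the stream.
stream = client.chat.completions.create(
    model="<deployment-name>",
    messages=messages,
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```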

My goal here is to determine whether the problem is in the service, the SDK, or the web app itself, so I want to rule out an SDK issue first, especially since it seems this issue may have been encountered before.

Opitzy commented 3 days ago

Hey @abhahn, thanks for the quick response.

I just tried using chat.completions.create instead of chat.completions.with_raw_response.create, but sadly that didn't change the behavior.

So I guess I agree with you that this is probably a problem with the SDK, looking at the OpenAI issue.

In my opinion we can close this for now; if I get any news that would affect this web app, I will open a new issue :)

Thanks for the fast support!