microsoft / sample-app-aoai-chatGPT

Sample code for a simple web chat experience through Azure OpenAI, including Azure OpenAI On Your Data.
MIT License

Using data source results in ignored system message #85

Closed FergusKidd closed 1 year ago

FergusKidd commented 1 year ago

The system message seems to be completely ignored as soon as a data source is attached, so the persona of the bot reverts to the default.

Simple system messages such as "you reply in only one word" work fine until a dataset is attached, at which point the response reverts to the default 'Hello! I'm here to help you with any questions you may have. Please feel free to ask anything.' no matter what the system message says.

gingergenius commented 1 year ago

I tested it and the model was aware of the system message I gave it, but did not adhere to it. The message gets passed to the model at least.

FergusKidd commented 1 year ago

With no data linked:

[Screenshot 2023-07-19 at 18:11:29 — behaviour as instructed]

With data linked:

[Screenshot 2023-07-19 at 18:11:03 — default behaviour]

The only difference is the way the API is called, with data being called through: "https://{AZURE_OPENAI_RESOURCE}.openai.azure.com/openai/deployments/{AZURE_OPENAI_MODEL}/extensions/chat/completions?api-version={AZURE_OPENAI_PREVIEW_API_VERSION}"

There must be some additional instruction text injected that overrides the system message supplied by the user. This probably comes down to a lack of visibility into what is actually being called when that endpoint is used.
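For reference, the request bodies differ as well as the URL. Below is a minimal sketch of the two call shapes, assuming the Azure Cognitive Search data source type from the "on your data" preview API; the endpoint, key, and index values are placeholders, not this repo's actual configuration:

```python
# Hedged sketch: payloads for the plain vs. extensions chat completions calls.
# All endpoint/key/index values are placeholders.

def build_plain_payload(system_message: str, user_message: str) -> dict:
    """Standard /chat/completions body: the system message is the only steering text."""
    return {
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0,
    }

def build_extensions_payload(system_message: str, user_message: str) -> dict:
    """/extensions/chat/completions body: adds a dataSources block; the service
    injects its own retrieval and grounding instructions around the prompt,
    which is why the system message stops being followed strictly."""
    payload = build_plain_payload(system_message, user_message)
    payload["dataSources"] = [
        {
            "type": "AzureCognitiveSearch",
            "parameters": {
                "endpoint": "https://<search-resource>.search.windows.net",
                "key": "<search-admin-key>",
                "indexName": "<index-name>",
            },
        }
    ]
    return payload

plain = build_plain_payload("You reply in only one word.", "What colour is the sky?")
with_data = build_extensions_payload("You reply in only one word.", "What colour is the sky?")
```

The messages array (including the system message) is identical in both cases; only the dataSources block and the URL change, so any difference in behaviour comes from what the service does server-side with that block.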

related: @gingergenius https://github.com/microsoft/sample-app-aoai-chatGPT/issues/84

sarah-widder commented 1 year ago

Hi @FergusKidd @gingergenius — when using data, the system message provides guidance to the model, but the model will not strictly adhere to every instruction. See more information in the documentation: https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/use-your-data#system-message

zafissa commented 9 months ago

I know this is closed, but I resolved this (by looking at the Azure OpenAI payload): add an additional parameter, roleInformation, to the call going to the Azure OpenAI service.
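In the preview "on your data" payload, roleInformation sits inside the data source's parameters rather than at the top level of the request. A sketch of where it goes (all values are placeholders):

```python
# Hedged sketch: adding roleInformation to the AzureCognitiveSearch data source.
# Placeholder values throughout; roleInformation carries the persona text that
# the service folds into its own server-side prompt.

payload = {
    "messages": [
        {"role": "user", "content": "What colour is the sky?"},
    ],
    "dataSources": [
        {
            "type": "AzureCognitiveSearch",
            "parameters": {
                "endpoint": "https://<search-resource>.search.windows.net",
                "key": "<search-admin-key>",
                "indexName": "<index-name>",
                # Persona/instruction text for the grounded call.
                "roleInformation": "You reply in only one word.",
            },
        }
    ],
}
```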

AdamMiltonBarker commented 2 months ago

This is crazy. We have large system prompts that define how our AI works, and everything is ignored, even when using roleInformation. For instance, we have specific instructions on the steps to take in a customer support case (which our existing system handles perfectly), and it completely ignores them and just keeps responding with the same answer. The only way we could work around it was to put all of our system prompts in the user prompt, which is a security vulnerability. Two days integrating this feature only to find out it is useless.

We then tried removing the data source from the GPT call, extracting the data beforehand directly from AI Search (semantic), and passing it ourselves, but AI Search doesn't find the right documents if we do that: even for specific questions whose answers are 100% in a document, it returns no answer along with all of the documents in the index.

This is the result of a conversation flow where the system prompt is ignored: for any follow-up question about something mentioned in the conversation, it just repeats the same answer. You can see the reasoning we implemented is working; the AI comes to the correct understanding but then repeats exactly what it said in the last message.

We have lost all of the character of our AI by integrating the service; it just repeats itself over and over.


AdamMiltonBarker commented 2 months ago

For anyone else facing the issue above, the only solution is not to use BYOD: use AI Search as a separate service, use a combination of intent classification to track the context and AI Search to retrieve the data, then send it to the LLM as normal in your system prompt. It is not possible to use BYOD for systems that rely on the system prompt, as it just gets wiped out.
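The workaround described above amounts to doing retrieval yourself and keeping the system prompt intact. A minimal sketch under those assumptions; the retrieval step is stubbed out, and every name here is illustrative rather than from the repo:

```python
# Hedged sketch of the BYOD workaround: retrieve documents separately, then
# build a normal chat call whose system prompt contains both the original
# instructions and the retrieved context. `retrieve` stands in for a real
# Azure AI Search query.

def retrieve(query: str) -> list[str]:
    """Stub for a separate Azure AI Search call (semantic or vector)."""
    return ["Support cases must be escalated after two failed resolutions."]

def build_messages(system_prompt: str, user_question: str) -> list[dict]:
    chunks = retrieve(user_question)
    context = "\n\n".join(chunks)
    # The original system prompt survives untouched; retrieved context is
    # appended to it instead of being injected by the service.
    grounded_system = (
        f"{system_prompt}\n\n"
        f"Use the following retrieved context when answering:\n{context}"
    )
    return [
        {"role": "system", "content": grounded_system},
        {"role": "user", "content": user_question},
    ]

messages = build_messages(
    "You are our support bot. Follow the case-handling steps exactly.",
    "What do I do after two failed resolutions?",
)
```

Because the call is a plain /chat/completions request, the service has no opportunity to overwrite the system prompt, at the cost of owning retrieval quality (chunking, ranking, intent tracking) yourself.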
