sidneyhori opened this issue 1 year ago
Adding: I just figured out that the system message gets ignored when I have "Limit responses to your data content" flagged ON. When I turn it OFF, the bot responds to the "what is your name" question.
Hello, I also have this impression. I tried to set the system_message so that when it doesn't know the answer it says "I don't know, contact Leonard". I'm trying to find out if it's possible to customize this message: "The requested information is not found in the retrieved data. Please try another query or topic." Apparently it is triggered in the conversation_with_data and conversation_without_data functions with `"role": "assistant", "content": ""`.
Hi @sidneyhori, the documentation has some additional info about the system message when connecting to your data: https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/use-your-data#system-message Basically, the system message/role information is guidance but the model may or may not follow it every time.
@sidneyhori and @phfontes - The "limit responses to your data" flag is intended to restrict the model from responding to questions that don't pertain to, or have answers in, the data, so it will give the generic message whenever it determines that your question is out of the domain of the data (even for questions like "what is your name"). At this time there's no way to customize that message. You could do some downstream postprocessing to replace that message in the response before it's shown to the user, though.
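The postprocessing suggestion can be sketched in a few lines. This is a minimal illustration, not part of the service: the canned text is taken from the message quoted above, while the replacement wording and the function name are assumptions you would adapt to your app.

```python
# The canned out-of-scope reply quoted earlier in this thread.
CANNED = ("The requested information is not found in the retrieved data. "
          "Please try another query or topic.")
# Your own wording (an assumption; put whatever your app should say).
CUSTOM = "I don't know. Please contact Leonard."

def postprocess(reply: str) -> str:
    """Swap the built-in out-of-scope message for a custom one
    before the response is shown to the user."""
    return reply.replace(CANNED, CUSTOM)
```

You would call this on the assistant message content in your backend, right before returning it to the frontend.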
Thank you @sarah-widder for the response. It was of great help. I'll work on getting my data more complete and also try semantic and vector search. Best, Sidney
Hi. I also want to start by saying that on the whole, this sample code and the samples in the Azure SDK for JS are just great.
The 'limit responses to your data' (i.e. `inScope`) functionality does seem to make it ignore any system message you send with the chat completion request, for example, messages telling it to limit the length of its responses (as distinct from `maxTokens`, which is not the same thing, since that just cuts it off mid-sentence). If, for example, I give it a system message of 'You are a helpful assistant. You will talk like a pirate.' with the `inScope` flag true, and ask it something not in my documents like 'what is cotton candy?', it gives me the 'requested information is not found' response, which is what I want; but if I ask it something where it does find references, it seems to completely ignore any system messages used to set it up. With the `inScope` flag off, you never get the '...not found' response, and it does hallucinate like a pirate, but if it does find reference docs it answers normally, not following the system messages.
It seems like the idea with the API Microsoft is providing is that it should be 'just like the OpenAI API', but better. System messages are an important part of manipulating the responses from the model, and that seems to be missing when certain other options are in use. Where can we see the documentation for `azureExtensionOptions: 'AzureCognitiveSearch'` detailing parameters like `inScope`, what exactly it's sending to GPT from the documents, and how to manipulate that when things don't generate the way we want? This sample code is great, thanks again, but I would like to see some actual documentation.
A side note, and this is why I wonder what exactly is going to gpt: if I send a message consisting of a single word that I know is in my index, e.g. 'automation', it will respond with '...not found', but querying the index itself certainly does retrieve some documents. I would have thought that it could give some overview of the information found in those 'citations'...
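For readers wondering what these parameters look like on the wire: a rough sketch of the request body for the extensions chat-completions endpoint, assuming the 2023-preview API shape (`dataSources` / `AzureCognitiveSearch` / `inScope` / `roleInformation` as discussed in this thread). Field names may differ in newer api-versions, and the endpoint, key, and index values are placeholders.

```python
def build_payload(user_message: str, in_scope: bool = True) -> dict:
    """Chat-completions body with an Azure Cognitive Search data source
    (preview 'extensions' API shape; verify against the current REST docs)."""
    return {
        "messages": [{"role": "user", "content": user_message}],
        "dataSources": [{
            "type": "AzureCognitiveSearch",
            "parameters": {
                "endpoint": "https://<search-resource>.search.windows.net",
                "key": "<search-admin-key>",
                "indexName": "<index-name>",
                "inScope": in_scope,  # the 'limit responses to your data' flag
                "roleInformation": "You are a helpful assistant. "
                                   "You will talk like a pirate.",
            },
        }],
    }
```

The JS SDK's `azureExtensionOptions` and the .NET SDK's extension options are, as far as I can tell, thin wrappers over this same body.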
Hi @kitfit-dave, thanks for your feedback. The system message and inScope work differently: the system message doesn't affect information retrieval but changes how the model behaves, while inScope limits information retrieval to only your data. Currently, what you described is expected behavior, though not ideal for your use case. We will add it to our backlog to track.
As for exposing parameters, we are working on it and will keep you posted through our public docs.
Please let me know if you have any other questions.
How does inScope limit the information retrieval to only our data?
Here's how I imagine it working:
I assume there's an "internal" system message that we cannot influence through the API. It gives some general guidelines to the model; in the general case it is something like "be friendly, helpful, and avoid discriminatory/offensive topics".
In case of retrieval-augmented generation that system message also says something like "these are the documents, answer the questions based on them, output references to sources in square brackets like [doc1]".
If inScope is turned on, that internal system message is extended with something like "if the answer is not in the provided documents, say 'I don't know'; don't try to imagine an answer".
Somewhere along the line the user-provided system message gets ignored.
Is my understanding correct?
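The layering hypothesized above can be made concrete with a small sketch. To be clear, this is purely illustrative of the commenter's guess: none of these strings are the actual internal prompt (which is not public), and the function is hypothetical.

```python
def compose_system_prompt(role_information: str, in_scope: bool) -> str:
    """Hypothetical layering of the service-side system prompt.
    The real wording and ordering used by Azure OpenAI On Your Data
    are not documented; this only illustrates the idea."""
    parts = [
        # General guidelines, always present.
        "Be friendly and helpful; avoid discriminatory or offensive topics.",
        # Retrieval-augmented-generation instructions.
        "Answer questions based on the retrieved documents below; "
        "cite sources in square brackets like [doc1].",
        # Where roleInformation is presumably inserted.
        role_information,
    ]
    if in_scope:
        # The extra restriction hypothesized for the inScope flag.
        parts.append("If the answer is not in the provided documents, "
                     "say you don't know; do not invent an answer.")
    return "\n".join(parts)
```

Under this model, a user-supplied system message that conflicts with the surrounding layers would be only loosely followed, which matches the behavior reported in this thread.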
Hello! Is there anything new on this?
Hi everybody! Any news on this? I can confirm: as soon as you start using AzureCognitiveSearch, the system message gets ignored. I'm using V2023_12_01_Preview and NuGet 13. Thank you for any help!
Hi @Richard-ANX are you using the 'roleInformation' data source parameter? In short, the roleInformation is inserted into a larger system message inside the Azure OpenAI On Your Data service, which includes instructions on how to use the retrieved data to answer questions, the conversation history, and the text from the retrieved documents themselves. Because it's inserted into a larger prompt, not every instruction will be strictly followed by the model.
Hi @sarah-widder! Thank you very much for your fast reply. Yesterday I read here that "we don't use roleInformation" anymore, and that in the newest NuGet the system message would work. Obviously that text put me on the wrong track, because adding roleInformation WORKS PERFECTLY! Thank you very much! Can you also take away my concern that this is a deprecated feature?
Hi @Richard-ANX - roleInformation is not deprecated, and it is recommended over passing a system message. I'm not sure exactly how the SDK is leveraging these, but I would always default to using roleInformation.
Hi @sarah-widder, I see limitations with using the roleInformation param: giving a very simple single instruction works, but we aren't able to modify much with it. Any alternative apart from this?
Hi @muskaangupta-work depending on your scenario, you can also try putting additional context around the user message before sending it in the API request from the backend. For example, you might construct a template reminding the model of the scenario/context or how you want the model to respond, and substitute in the user message each time:
"This is a conversation about generative AI. Answer the following question with a bulleted list: {user message}"
@sarah-widder I tried this option too, but sometimes it mixes up the instruction with the user ask/intent.
First of all, thank you Sarah Widder and the collaborators here. This is great.
I have been experimenting for a few weeks with the OAI Studio chat completion, and in the past few days I have experienced two things, even when using exactly the same data, system messages, model, and parameters:
1) It seems the chat is ignoring the system message. An example: with "Your name is Maxwell and you are an HR assistant." as the system message, when I ask the bot's name it says "The requested information is not found in the retrieved data. Please try another query or topic.". I have also asked it in the system message to use emojis, which it did many times, but now it has stopped.
2) Another behavior I have noticed is that the bot is now much less elaborate in its answers. It feels much "drier", and almost all the time I simply get "The requested information is not found in the retrieved data. Please try another query or topic." rather than the usual "I don't have this info XYZ, but I have ABC instead".
Hope the feedback helps. I'll keep working here and try to figure out what might be happening as well.
tks,
Sidney.