Closed raeesgillani closed 5 months ago
@raeesgillani Thanks for reporting this issue!
Based on the provided code and prompt, it looks like you don't need to use a planner. At least, I don't see any other plugins imported, so there is nothing for the AI to build a plan around and execute. If you want to implement a chat experience over your data in Azure AI Search, then instead of a planner you can invoke the same prompt directly by using `kernel.InvokePromptAsync(prompt, new KernelArguments(executionSettings))` or `kernel.InvokeAsync(function, arguments)`; here is an example:
https://github.com/microsoft/semantic-kernel/blob/8140684f241022335b67d48490857a50dc48c041/dotnet/samples/Concepts/ChatCompletion/AzureOpenAIWithData_ChatCompletion.cs#L99-L105
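The direct-invocation approach can be sketched roughly as follows. This assumes the `kernel`, `config`, and `prompt` from the repro below; the extension type and property names follow the Azure.AI.OpenAI beta SDK used at the time and may differ in your package version, and the `YourSettings:Search*` configuration keys are illustrative:

```csharp
// Hedged sketch: invoke the prompt directly, no planner involved.
// AzureSearchChatExtensionConfiguration property names follow the
// Azure.AI.OpenAI 1.0.0-beta SDK and may differ in newer versions.
var executionSettings = new OpenAIPromptExecutionSettings
{
    AzureChatExtensionsOptions = new AzureChatExtensionsOptions
    {
        Extensions =
        {
            new AzureSearchChatExtensionConfiguration
            {
                // Illustrative config keys, not from the original report
                SearchEndpoint = new Uri(config["YourSettings:SearchEndpoint"]!),
                Authentication = new OnYourDataApiKeyAuthenticationOptions(
                    config["YourSettings:SearchKey"]!),
                IndexName = config["YourSettings:SearchIndex"]!
            }
        }
    }
};

// The extensions retrieve documents from AI Search and ground the response.
var result = await kernel.InvokePromptAsync(
    prompt, new KernelArguments(executionSettings));
Console.WriteLine(result);
```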
Is it because all the AI Search parts inside Semantic Kernel are in preview? Same with the Planner?
No, even if some features are in preview or marked as experimental, they should still work. The experimental flag is an indicator that we may introduce breaking changes to this functionality if it needs to be improved.
Are there any limitations that I should be aware of with AI Search inside Semantic Kernel?
Based on your case, it should work as expected when using the kernel directly. One limitation to be aware of is that using AI Search and plugins together in the same request is currently not supported:
https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/openai/Azure.AI.OpenAI#use-your-own-data-with-azure-openai
NOTE: The concurrent use of Chat Functions and Azure Chat Extensions on a single request is not yet supported. Supplying both will result in the Chat Functions information being ignored and the operation behaving as if only the Azure Chat Extensions were provided. To address this limitation, consider separating the evaluation of Chat Functions and Azure Chat Extensions across multiple requests in your solution design.
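The workaround the note suggests, splitting the evaluation across requests, could look roughly like this. It is a sketch only: `searchSettings` (extensions configured, no functions) and `pluginSettings` (function calling enabled, no extensions) are assumed to be two separately built `OpenAIPromptExecutionSettings` instances, not SDK types:

```csharp
// Hedged sketch of the workaround: evaluate Azure Chat Extensions
// (AI Search) and chat functions/plugins in two separate requests.

// Request 1: ground the question against Azure AI Search.
// searchSettings carries AzureChatExtensionsOptions only.
var grounded = await kernel.InvokePromptAsync(
    "Answer using the retrieved documents: {{$input}}",
    new KernelArguments(searchSettings) { ["input"] = userQuestion });

// Request 2: let the model call plugins on the grounded answer.
// pluginSettings enables function calling but configures no extensions.
var final = await kernel.InvokePromptAsync(
    "Use the available functions to act on this: {{$grounded}}",
    new KernelArguments(pluginSettings) { ["grounded"] = grounded.ToString() });
```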
Thank you!
Thank you @dmytrostruk that makes sense.
Describe the bug
When defining `executionSettings` (specifically with `AzureChatExtensionsOptions` inside `OpenAIPromptExecutionSettings`) for creating a function from a prompt, then passing it into `CreatePlanAsync` and in turn into `InvokeAsync`, I am finding that no search results are returned and passed into the prompt that gets sent to the model, and the responses received are mostly 'I don't know'. This is not always the case; sometimes it works, but once it gets stuck in a cycle of 'I don't know' it just fails. At some points it even generates the following errors:
```
Unhandled exception. Microsoft.SemanticKernel.KernelException: [HallucinatedHelpers] The plan references hallucinated helpers: Helper 'join'
 ---> HandlebarsDotNet.HandlebarsRuntimeException: Template references a helper that cannot be resolved. Helper 'join'
   at HandlebarsDotNet.Helpers.MissingHelperDescriptor.Invoke(HelperOptions& options, Context& context, Arguments& arguments)
   at HandlebarsDotNet.Helpers.MissingHelperDescriptor.HandlebarsDotNet.Helpers.IHelperDescriptor<HandlebarsDotNet.HelperOptions>.Invoke(HelperOptions& options, Context& context, Arguments& arguments)
   at HandlebarsDotNet.Helpers.LateBindHelperDescriptor.Invoke(HelperOptions& options, Context& context, Arguments& arguments)
   at HandlebarsDotNet.Helpers.LateBindHelperDescriptor.HandlebarsDotNet.Helpers.IHelperDescriptor<HandlebarsDotNet.HelperOptions>.Invoke(HelperOptions& options, Context& context, Arguments& arguments)
   at lambda_method50(Closure, EncodedTextWriter&, BindingContext)
   at HandlebarsDotNet.HandlebarsEnvironment.<>c__DisplayClass19_0.<Compile>b__0(TextWriter writer, Object context, Object data)
   at HandlebarsDotNet.HandlebarsEnvironment.<>c__DisplayClass20_0.<Compile>b__0(Object context, Object data)
   at Microsoft.SemanticKernel.PromptTemplates.Handlebars.HandlebarsPromptTemplate.RenderAsync(Kernel kernel, KernelArguments arguments, CancellationToken cancellationToken)
   at Microsoft.SemanticKernel.Planning.Handlebars.HandlebarsPlan.InvokeCoreAsync(Kernel kernel, KernelArguments arguments, CancellationToken cancellationToken)
```
To Reproduce
Steps to reproduce the behavior:

```csharp
// Create Kernel
var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: config["YourSettings:OpenAIDeploymentId"]!,
        endpoint: config["YourSettings:OpenAIEndpoint"]!,
        apiKey: config["YourSettings:OpenAIKey"]!)
    .Build();
}
```
```
Example 1 of Social Value handled by X.\nClient: Client 1, Date: 2023-10-1
Example 2 of Social Value handled by X.\nClient: Client 2, Date: 2023-10-2
Example 3 of Social Value handled by X.\nClient: Client 3, Date: 2023-10-3
Example 4 of Social Value handled by X.\nClient: Client 4, Date: 2023-10-4
Example 5 of Social Value handled by X.\nClient: Client 5, Date: 2023-10-5
Example 6 of Social Value handled by X.\nClient: Client 6, Date: 2023-10-6
Example 7 of Social Value handled by X.\nClient: Client 7, Date: 2023-10-7
Example 8 of Social Value handled by X.\nClient: Client 8, Date: 2023-10-8
Example 9 of Social Value handled by X.\nClient: Client 9, Date: 2023-10-9
Example 10 of Social Value handled by X.\nClient: Client 10, Date: 2023-10-10
```