
.Net: [USER STORIES] Additional filters #6598

Open matthewbolanos opened 1 month ago

matthewbolanos commented 1 month ago

To address additional scenarios, the following filters may need to be added:

Invoke agent

Whenever an agent is invoked, the prompt and available plugins may need to be altered. For example, the agent may be in a "complete_sales_order" state. While in this state, the agent should have a tailored prompt with only the plugins necessary for completing an order. This would differ from a "browse_products" state, where the agent would be given plugins for searching for new products.
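
To make this concrete, here is a rough sketch of what such a filter could look like if it followed the existing `IFunctionInvocationFilter` pattern. `IAgentInvocationFilter`, `AgentInvocationContext`, and the two plugin classes are hypothetical names invented for illustration only, not part of the SDK:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

// Hypothetical contract, modeled on the existing IFunctionInvocationFilter pattern.
// Neither IAgentInvocationFilter nor AgentInvocationContext exists in the SDK today.
public interface IAgentInvocationFilter
{
    Task OnAgentInvocationAsync(AgentInvocationContext context, Func<AgentInvocationContext, Task> next);
}

public sealed class AgentInvocationContext
{
    public string State { get; set; } = string.Empty;        // e.g. "complete_sales_order"
    public string Instructions { get; set; } = string.Empty; // the agent's prompt for this turn
    public KernelPluginCollection Plugins { get; } = new();  // plugins exposed for this turn
}

// Placeholder plugins, defined only so the sketch is self-contained.
public sealed class OrderPlugin
{
    [KernelFunction]
    public string CompleteOrder(string orderId) => $"Order {orderId} completed.";
}

public sealed class ProductSearchPlugin
{
    [KernelFunction]
    public string SearchProducts(string query) => $"Results for '{query}'.";
}

// Example filter: tailor the prompt and plugin set to the agent's current state.
public sealed class AgentStateFilter : IAgentInvocationFilter
{
    public async Task OnAgentInvocationAsync(AgentInvocationContext context, Func<AgentInvocationContext, Task> next)
    {
        context.Plugins.Clear();

        if (context.State == "complete_sales_order")
        {
            context.Instructions = "Finalize the customer's order. Do not browse the catalog.";
            context.Plugins.AddFromType<OrderPlugin>();
        }
        else if (context.State == "browse_products")
        {
            context.Instructions = "Help the customer discover new products.";
            context.Plugins.AddFromType<ProductSearchPlugin>();
        }

        await next(context);
    }
}
```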

Serialize prompt to Chat History object

Today, we only have a filter for when a prompt is rendered. If a developer has prompts that are rendered using different parser engines (e.g., Prompt flow's User: syntax or Semantic Kernel's XML format), creating a single filter that handles all of these cases would be difficult. It would instead be easier if they could analyze the final chat history object that is sent to the LLM to do things like detect PII.
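
A sketch of what this could look like, again following the existing filter pattern. `IChatHistoryFilter` and `ChatHistoryFilterContext` are hypothetical names; `ChatHistory` is the real SK type:

```csharp
using System;
using System.Text.RegularExpressions;
using System.Threading.Tasks;
using Microsoft.SemanticKernel.ChatCompletion;

// Hypothetical contract; today only IPromptRenderFilter exists, and it sees the
// rendered prompt as raw text rather than as a parsed ChatHistory.
public interface IChatHistoryFilter
{
    Task OnChatHistoryCreatedAsync(ChatHistoryFilterContext context, Func<ChatHistoryFilterContext, Task> next);
}

public sealed class ChatHistoryFilterContext
{
    public ChatHistoryFilterContext(ChatHistory chatHistory) => this.ChatHistory = chatHistory;

    // The final history that will be sent to the LLM, regardless of which
    // parser (Prompt flow's User: syntax, SK's XML format, ...) produced it.
    public ChatHistory ChatHistory { get; }
}

// Example: redact e-mail-shaped PII from every message before it reaches the LLM.
public sealed class PiiRedactionFilter : IChatHistoryFilter
{
    private static readonly Regex Email = new(@"[\w.+-]+@[\w-]+\.[\w.]+");

    public async Task OnChatHistoryCreatedAsync(ChatHistoryFilterContext context, Func<ChatHistoryFilterContext, Task> next)
    {
        foreach (var message in context.ChatHistory)
        {
            if (message.Content is { Length: > 0 } content)
            {
                message.Content = Email.Replace(content, "[REDACTED]");
            }
        }

        await next(context);
    }
}
```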

Choose model and prepare execution settings for LLM

Today, we have an AI selection service. The challenge with this approach is that it differs from our other filters, and it lacks the context built up by previous filters (e.g., the state of the agent or previous analysis of the prompt).
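
One possible shape for this, assuming a hypothetical `IModelSelectionFilter` whose context carries state forward from earlier filters. The model ids are just examples; `OpenAIPromptExecutionSettings` is the real connector type:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Hypothetical contract; today model choice goes through the service selector,
// which runs outside the filter pipeline and cannot see earlier filters' context.
public interface IModelSelectionFilter
{
    Task OnModelSelectionAsync(ModelSelectionContext context, Func<ModelSelectionContext, Task> next);
}

public sealed class ModelSelectionContext
{
    public string AgentState { get; set; } = string.Empty;  // carried over from earlier filters
    public string ModelId { get; set; } = "gpt-4o";         // model to invoke (example id)
    public OpenAIPromptExecutionSettings ExecutionSettings { get; set; } = new();
}

// Example: deterministic settings while completing an order, a cheaper model otherwise.
public sealed class StateAwareModelFilter : IModelSelectionFilter
{
    public async Task OnModelSelectionAsync(ModelSelectionContext context, Func<ModelSelectionContext, Task> next)
    {
        if (context.AgentState == "complete_sales_order")
        {
            context.ModelId = "gpt-4o";
            context.ExecutionSettings.Temperature = 0; // no creativity while finalizing an order
        }
        else
        {
            context.ModelId = "gpt-4o-mini";
            context.ExecutionSettings.Temperature = 0.7;
        }

        await next(context);
    }
}
```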

Respond to new message

Today, we only have a filter to handle new function call(s). A developer may also want a filter to handle new messages so that they can perform checks before the messages get sent to the user. This could include...

If any of these checks is triggered, the developer would then want to alter the message before it gets appended to the chat history.

This is also important if a developer requests multiple completion choices. Within this filter, the developer could choose the option they actually want to propagate.
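
A sketch of one possible shape, assuming a hypothetical `IChatMessageFilter` whose context exposes all candidate completions and lets the filter pick or replace the message that gets appended (the content check itself is only a stand-in):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Hypothetical contract; today filters fire for function calls, not for plain
// assistant messages.
public interface IChatMessageFilter
{
    Task OnNewMessageAsync(ChatMessageFilterContext context, Func<ChatMessageFilterContext, Task> next);
}

public sealed class ChatMessageFilterContext
{
    public ChatMessageFilterContext(IReadOnlyList<ChatMessageContent> choices) => this.Choices = choices;

    // All candidate completions returned by the model (more than one when the
    // developer requested multiple choices).
    public IReadOnlyList<ChatMessageContent> Choices { get; }

    // The message that will actually be appended to the chat history; filters may replace it.
    public ChatMessageContent? SelectedMessage { get; set; }
}

// Example: pick the first candidate that passes a content check; substitute a
// safe message if none do.
public sealed class MessageScreeningFilter : IChatMessageFilter
{
    public async Task OnNewMessageAsync(ChatMessageFilterContext context, Func<ChatMessageFilterContext, Task> next)
    {
        context.SelectedMessage =
            context.Choices.FirstOrDefault(m => !ContainsBlockedContent(m.Content))
            ?? new ChatMessageContent(AuthorRole.Assistant, "I'm sorry, I can't share that.");

        await next(context);
    }

    private static bool ContainsBlockedContent(string? content) =>
        content?.Contains("internal-only", StringComparison.OrdinalIgnoreCase) == true;
}
```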

Respond to termination signal

Today, the function calling loop will end if we receive a termination signal (the AI gets to choose when it's "done"). The developer, however, may want to check whether the AI is actually done and, if not, perform a retry to force the AI to continue working. For example...
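
One way such a check could look, assuming a hypothetical `ITerminationFilter` that exposes a flag for vetoing the model's termination signal (the "SUMMARY:" convention below is invented for the sketch):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel.ChatCompletion;

// Hypothetical contract; today the loop simply stops when the model signals completion.
public interface ITerminationFilter
{
    Task OnTerminationAsync(TerminationContext context, Func<TerminationContext, Task> next);
}

public sealed class TerminationContext
{
    public TerminationContext(ChatHistory chatHistory) => this.ChatHistory = chatHistory;

    public ChatHistory ChatHistory { get; }

    // Set to false to veto the model's termination signal and force another iteration.
    public bool Terminate { get; set; } = true;
}

// Example: the model claimed it was done, but the required summary is missing,
// so push a corrective message and keep the function-calling loop running.
public sealed class RequireSummaryFilter : ITerminationFilter
{
    public async Task OnTerminationAsync(TerminationContext context, Func<TerminationContext, Task> next)
    {
        var last = context.ChatHistory.Count > 0 ? context.ChatHistory[^1] : null;

        if (last?.Content?.Contains("SUMMARY:") != true)
        {
            context.ChatHistory.AddUserMessage("You are not done yet. End with a 'SUMMARY:' section.");
            context.Terminate = false; // retry instead of ending the loop
        }

        await next(context);
    }
}
```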

dmytrostruk commented 1 month ago

> It would instead be easier if they could analyze the final chat history object that is sent to the LLM to do things like detect PII.

Just for context: the chat history object is already available in the auto function invocation filter.
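
For reference, a minimal sketch of that existing surface. `IAutoFunctionInvocationFilter` and `AutoFunctionInvocationContext.ChatHistory` are real APIs, though the filter only runs when the model requests a function call:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

// Existing API: the auto function invocation filter already exposes the chat history.
public sealed class ChatHistoryInspectionFilter : IAutoFunctionInvocationFilter
{
    public async Task OnAutoFunctionInvocationAsync(
        AutoFunctionInvocationContext context,
        Func<AutoFunctionInvocationContext, Task> next)
    {
        // context.ChatHistory is the full history for the current request, so
        // checks such as PII detection can run here today, but only on turns
        // where the model has asked to invoke a function.
        Console.WriteLine($"History contains {context.ChatHistory.Count} messages.");

        await next(context);
    }
}
```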