alkampfergit closed this issue 11 months ago
I'd like to have debug to output more information about prompt used, etc.
Anything that might contain sensitive information (like a prompt) is traced at Trace level, which is the intended level for such information: https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.logging.loglevel "Logs that contain the most detailed messages. These messages may contain sensitive application data. These messages are disabled by default and should never be enabled in a production environment."
Is there something in particular that's not getting logged that you'd hope would be logged, regardless of the level at which it's being logged? Or is there something getting logged at Trace level that you believe is more appropriate for a different level?
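As a point of reference, here is a minimal sketch of how an application can opt in to Trace-level logging with the SK 1.x builder API (this is an illustration, not code from this thread; it assumes the Microsoft.Extensions.Logging.Console package is referenced):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();

// Trace is where SK emits potentially sensitive data such as prompts;
// Debug or Information alone will not surface them.
builder.Services.AddLogging(logging => logging
    .AddConsole()
    .SetMinimumLevel(LogLevel.Trace));

builder.AddOpenAIChatCompletion(modelId: "gpt-4", apiKey: "...");

var kernel = builder.Build();
```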
If you look at LangSmith for LangChain, you will see that you have a complete log of all interactions with the language model.
I'd like to have a log of what is happening, like:
Called the LLM, now invoking plug-in X with parameter Y, now called the LLM again, then invoked plug-in Z with parameter K, and so on.
> I'd like to have a log of what is happening, like: called the LLM, now invoking plug-in X with parameter Y, now called the LLM again, then invoked plug-in Z with parameter K, and so on.
And that's what you get with LogLevel.Trace, no? What specifically is missing? We should certainly add any important information that's missing, but it's unlikely anything potentially sensitive will be added at a level higher than Trace.
I'd like to know at least what the orchestrator is doing, for example:
Basically, at the Debug log level I'm expecting a high-level understanding of what it is doing under the hood with my plugins. I'd also like to have the parameters; I can understand that these details are sensitive, but during development they are invaluable.
We currently log the function name together with the function arguments at Trace level (because function arguments are considered sensitive information). We could split the log message into two parts: one message with the function name at Information level, and another with the arguments at Trace level.
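A hypothetical sketch of that split (the `logger`, `function`, and `arguments` identifiers here are illustrative, not SK internals):

```csharp
// Function name alone is not sensitive: log it at Information level.
logger.LogInformation("Invoking function {FunctionName}.", function.Name);

// Arguments may contain user data: only log them at Trace level.
if (logger.IsEnabled(LogLevel.Trace))
{
    logger.LogTrace(
        "Function {FunctionName} arguments: {Arguments}",
        function.Name,
        JsonSerializer.Serialize(arguments));
}
```

With this split, a developer running at Information level would still see the orchestration flow (which functions were called, in what order), while the sensitive payloads remain gated behind Trace.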
Could you elaborate on the second bullet point? @alkampfergit
In the second point I'd like to know what happens when SK asks GPT what to do next.
As for the sample, I can use AutoInvokeKernelFunctions:

```csharp
OpenAIPromptExecutionSettings openAIPromptExecutionSettings = new()
{
    FunctionCallBehavior = FunctionCallBehavior.AutoInvokeKernelFunctions
};
```
I'd like to see in the log when SK asks the LLM what to do next, what the context variables are, and so on. I have an example with 3 plugins: one extracts the audio from a video, the second creates a transcription, and the third summarizes the transcription.
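For context, a sketch of what such a three-plugin setup might look like (the plugin names, descriptions, and bodies below are hypothetical stand-ins for the real implementation):

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

public sealed class AudioPlugin
{
    [KernelFunction, Description("Extracts the audio track from a video file.")]
    public string ExtractAudio(string videoPath) => "audio.wav"; // real extraction elided
}

public sealed class TranscriptionPlugin
{
    [KernelFunction, Description("Transcribes an audio file to text.")]
    public string Transcribe(string audioPath) => "transcript"; // real transcription elided
}

public sealed class SummaryPlugin
{
    [KernelFunction, Description("Summarizes a transcription.")]
    public string Summarize(string transcript) => "summary"; // real summarization elided
}

// In the setup code; with AutoInvokeKernelFunctions the model can chain
// these calls itself, which is exactly the flow one would want logged:
// kernel.ImportPluginFromType<AudioPlugin>();
// kernel.ImportPluginFromType<TranscriptionPlugin>();
// kernel.ImportPluginFromType<SummaryPlugin>();
```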
Actually, I'd like to look at the logs and understand the flow, something like:
And so on. Basically, I'd like to be able to follow the chain of reasoning used to solve the user's question. It would be super nice to have some form of parsable output, such as JSON.
With LangChain and LangSmith we have a super nice way to examine what happened, so I can troubleshoot my plugins (maybe my description is poor, etc.).
Semantic Kernel provides telemetry, including traces, logs, and metrics, for applications to consume. It is up to developers to decide which telemetry service they would like to use, as long as the service works with OpenTelemetry.
You can refer to this example on how to set up Application Insights for an application with SK: https://github.com/microsoft/semantic-kernel/tree/main/dotnet/samples/TelemetryExample
Here are some screenshots from a simple application that uses function calling. In the screenshots, you can see the function execution sequence and the time span of each function. You can also view the traces for more details, or even use log analytics to view detailed logs. For example:
Note that since we consider function inputs and outputs to be sensitive data (as they may contain user information), you will have to set the log level to Trace to see that information.
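A condensed sketch of the setup from the TelemetryExample linked above (exact details may differ between SK versions; this assumes the OpenTelemetry and Azure.Monitor.OpenTelemetry.Exporter packages):

```csharp
using Azure.Monitor.OpenTelemetry.Exporter;
using Microsoft.Extensions.Logging;
using OpenTelemetry;
using OpenTelemetry.Logs;
using OpenTelemetry.Trace;

var connectionString = "<application-insights-connection-string>";

// Export SK activities (spans) to Application Insights.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("Microsoft.SemanticKernel*")
    .AddAzureMonitorTraceExporter(o => o.ConnectionString = connectionString)
    .Build();

// Export logs too; Trace level is required to see function inputs/outputs.
using var loggerFactory = LoggerFactory.Create(builder =>
{
    builder.AddOpenTelemetry(o =>
        o.AddAzureMonitorLogExporter(e => e.ConnectionString = connectionString));
    builder.SetMinimumLevel(LogLevel.Trace);
});
```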
Thanks for the detail.
Closing, feel free to reopen if there are further questions.
I configured my kernel with this simple instruction and chose the Debug level.
The configuration is taken straight from this post: https://devblogs.microsoft.com/semantic-kernel/release-candidate-1-for-the-semantic-kernel-net-sdk-is-now-live/
But when I launch the program I get no debug-level output, just info. It is a simple C# console application, with no ASP.NET or other functionality: just a plain console app to demonstrate the basic capabilities of SK.
Here is the output, where I only have info level; I'm expecting debug output with the sample prompt sent to my language model.
If I set the level to Trace I'm able to see trace-level output, but no debug; it seems that in the RC, Semantic Kernel is doing less logging than before.
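Since prompts and function arguments are only emitted at Trace level, one way to see them without making the whole application noisy is a per-category filter. A sketch, assuming the standard Microsoft.Extensions.Logging filtering API (the category prefix is an assumption about SK's logger category names):

```csharp
builder.Services.AddLogging(logging => logging
    .AddConsole()
    // Keep the rest of the app at Information...
    .SetMinimumLevel(LogLevel.Information)
    // ...but raise the Semantic Kernel categories to Trace
    // so prompts and function arguments appear in the output.
    .AddFilter("Microsoft.SemanticKernel", LogLevel.Trace));
```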