dotnet / aspire

An opinionated, cloud ready stack for building observable, production ready, distributed applications in .NET
https://learn.microsoft.com/dotnet/aspire
MIT License

Improved rendering for Gen AI Telemetry signals in Aspire Dashboard #5902

Open sudivate opened 1 week ago

sudivate commented 1 week ago

Is there an existing issue for this?

Is your feature request related to a problem? Please describe the problem.

As an AI engineer developing generative AI applications locally in VS Code, I instrument my app with OpenTelemetry (OTel) and visualize the captured signals in the Aspire dashboard. I currently use the dashboard to monitor all telemetry signals, and in particular to analyze end-user interactions by tracking captured input and output tokens. However, the current version of the Aspire dashboard lacks a user-friendly experience when rendering large text, making analysis difficult. Improving this experience, especially for telemetry signals from AI interactions, would significantly enhance usability and analysis for Gen AI apps.

(screenshot)
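For context, here is a minimal sketch of how such token counts typically land on a span, assuming the OpenTelemetry .NET SDK and the draft Gen AI semantic-convention attribute names (`gen_ai.*`, still evolving upstream); the source name, model, and token counts below are illustrative, not from any real app:

```csharp
using System.Diagnostics;

static class GenAiTelemetry
{
    // Hypothetical source name; it would need to be registered with the
    // OpenTelemetry SDK (e.g. via AddSource("Sample.GenAI")) to be exported.
    private static readonly ActivitySource Source = new("Sample.GenAI");

    // Wraps an LLM call in a client span tagged with (draft) OTel Gen AI
    // semantic-convention attributes the dashboard could then render.
    public static string Chat(string userPrompt)
    {
        using var activity = Source.StartActivity("chat gpt-4o", ActivityKind.Client);
        activity?.SetTag("gen_ai.system", "openai");
        activity?.SetTag("gen_ai.request.model", "gpt-4o");

        // The model call itself is mocked for this sketch.
        string response = $"(model response to: {userPrompt})";

        // Mocked token counts; a real integration would read them
        // from the provider's response.
        activity?.SetTag("gen_ai.usage.input_tokens", 42);
        activity?.SetTag("gen_ai.usage.output_tokens", 12);
        return response;
    }
}
```

With no listener registered, `StartActivity` returns `null` and the tags are no-ops, which is why every call is null-conditional.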

Describe the solution you'd like

A solution to this problem would be a tree view that organizes all the LLM spans within the same trace or session (correlation ID) and presents them in a conversational format. This structure would make it easier to follow the flow of interactions, with each span displayed sequentially alongside its associated metadata, improving clarity and allowing for better analysis of the captured telemetry signals.
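The grouping this tree view implies can be sketched as follows; the `LlmSpan` record and its fields are illustrative placeholders, not the dashboard's actual telemetry model:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative span shape for the sketch only.
record LlmSpan(string TraceId, DateTime Start, string Input, string Output);

static class ConversationView
{
    // Groups LLM spans by trace (correlation) id and orders each group
    // chronologically, yielding the sequential conversational flow the
    // tree view would display.
    public static Dictionary<string, List<LlmSpan>> GroupByConversation(
        IEnumerable<LlmSpan> spans) =>
        spans.GroupBy(s => s.TraceId)
             .ToDictionary(g => g.Key,
                           g => g.OrderBy(s => s.Start).ToList());
}
```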

Below are representative samples of user-friendly rendering from market solutions.

Arize (screenshot)

Langfuse (screenshot)

@leslierichardson95 @truptiparkar7

Additional context

No response

LadyNaggaga commented 1 day ago

To make this work, here are the renderings we would like the dashboard to support:

  • [ ] A card that shows the number of tokens used, latency, and eval metrics rendered as badges showing relevancy, coherence, and correctness. (screenshot)
  • [ ] Latency viewable as a graph; this would be nice to have, but is not a major priority.


  • [ ] Streaming responses back from the LLM and showing the resources used. The screenshot from Arize shows a chat-history experience. I propose that we extend the current table developers get when they click a resource, so that they see the following in a meaningful output:
    • [ ] Input ( System prompt and user prompt)
    • [ ] Model used
    • [ ] Generated responses

For example (not real data; mocked for illustration): (screenshot)

  • [ ] An icon to easily identify your AI resources. Looking for resources in Aspire today can be overwhelming, especially when you have dozens of resources. Could we add a :sparkles: icon next to AI resources in the structured logs, similar to what you do for databases today?