Python SDK for an Agent AI observability, monitoring, and evaluation framework. Features include agent, LLM, and tool tracing; debugging for multi-agent systems; a self-hosted dashboard; and advanced analytics with timeline and execution-graph views.
## Description

This PR implements the response latency metric, which measures the time taken to complete LLM and tool calls. It provides insights into the efficiency and performance of response generation within the system.
## Related Issue
None (new feature implementation).
## Type of Change
- [x] New feature (non-breaking change which adds functionality)
## How Has This Been Tested?
- Created unit tests with pytest to validate the response latency calculations across different scenarios, including valid data, partial data, and edge cases.
- Tested manually with a sample trace JSON to ensure the metric accurately calculates average latency, median latency, and other statistical measures.
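As an illustration, the manual check against a sample trace can be sketched like this. The trace shape, field names, and helper below are hypothetical, not the SDK's actual schema:

```python
import json

# Hypothetical trace shape for illustration only; the SDK's real
# trace schema may differ.
SAMPLE_TRACE = json.loads("""
{
  "llm_calls": [
    {"name": "llm-1", "start_time": 0.0, "end_time": 1.2},
    {"name": "llm-1", "start_time": 2.0, "end_time": 2.8}
  ],
  "tool_calls": [
    {"name": "search", "start_time": 3.0, "end_time": 3.5}
  ]
}
""")

def call_durations(trace):
    """Collect per-call durations (seconds) from LLM and tool calls."""
    calls = trace.get("llm_calls", []) + trace.get("tool_calls", [])
    return [c["end_time"] - c["start_time"] for c in calls]

def test_average_latency():
    durations = call_durations(SAMPLE_TRACE)
    assert len(durations) == 3
    # Mean of 1.2s, 0.8s, and 0.5s is ~0.833s
    assert abs(sum(durations) / len(durations) - 2.5 / 3) < 1e-9
```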
## Checklist:
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my feature works
- [x] New and existing unit tests pass locally with my changes
- [x] Any dependent changes have been merged and published in downstream modules
## Additional Context
The response latency metric includes detailed statistics such as average latency, minimum latency, maximum latency, median latency, P90 latency, and standard deviation. This helps identify areas for optimization in LLM and tool call performance.
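The statistics listed above can all be derived from a list of call durations with the standard library. A minimal sketch, where the function name, dictionary keys, and nearest-rank P90 choice are illustrative rather than the PR's actual API:

```python
import statistics

def latency_stats(durations):
    """Summarize call latencies (seconds).

    Names and keys here are illustrative, not the PR's actual API.
    """
    if not durations:
        return None
    ordered = sorted(durations)
    # Nearest-rank P90: value below which ~90% of observations fall
    p90_index = max(0, int(round(0.9 * len(ordered))) - 1)
    return {
        "average": statistics.mean(ordered),
        "min": ordered[0],
        "max": ordered[-1],
        "median": statistics.median(ordered),
        "p90": ordered[p90_index],
        "std_dev": statistics.stdev(ordered) if len(ordered) > 1 else 0.0,
    }

# Example: ten calls with latencies 1..10 seconds
# average 5.5, median 5.5, p90 9.0
print(latency_stats([float(n) for n in range(1, 11)]))
```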
## Impact on Roadmap
This PR aligns with the project roadmap by enhancing the system's monitoring capabilities and providing valuable performance metrics for further optimization.