trallnag / prometheus-fastapi-instrumentator


Instrument latency without streaming duration #291

Closed: dosuken123 closed this issue 7 months ago

dosuken123 commented 7 months ago

What does this do?


We want to add an option to track HTTP response duration without the streaming duration, i.e. measure the time until the response starts rather than until the last chunk is sent.

Config Example:

    from prometheus_fastapi_instrumentator import Instrumentator, metrics

    instrumentator = Instrumentator()
    instrumentator.add(
        metrics.latency(
            should_include_handler=True,
            should_include_method=True,
            should_include_status=True,
            buckets=(0.5, 1, 2.5, 5, 10, 30, 60),
        ),
        metrics.latency(
            metric_name="http_request_duration_without_streaming_seconds",
            should_include_handler=True,
            should_include_method=True,
            should_include_status=True,
            buckets=(0.5, 1, 2.5, 5, 10, 30, 60),
            should_exclude_streaming_duration=True,               # <= New option
        )
    )

https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/main/ai_gateway/app.py?ref_type=heads#L51-58
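
For context, here is a rough sketch of how such an option could be measured at the ASGI layer. This is a hypothetical illustration, not the implementation from PR #290: the middleware name, metric wiring, and flag handling are all assumptions; only the ASGI message types are standard.

    import time

    from prometheus_client import Histogram

    # Hypothetical sketch of the proposed option, not the actual PR code.
    DURATION = Histogram(
        "http_request_duration_without_streaming_seconds",
        "Duration of HTTP requests in seconds, excluding streaming",
        buckets=(0.5, 1, 2.5, 5, 10, 30, 60),
    )


    class LatencySketchMiddleware:
        """Observes time to response start instead of time to last chunk."""

        def __init__(self, app, should_exclude_streaming_duration=True):
            self.app = app
            self.should_exclude_streaming_duration = should_exclude_streaming_duration

        async def __call__(self, scope, receive, send):
            if scope["type"] != "http":
                await self.app(scope, receive, send)
                return

            start = time.perf_counter()

            async def wrapped_send(message):
                if self.should_exclude_streaming_duration:
                    # "http.response.start" precedes all body chunks, so this
                    # captures latency up to the start of the response.
                    if message["type"] == "http.response.start":
                        DURATION.observe(time.perf_counter() - start)
                elif message["type"] == "http.response.body" and not message.get("more_body", False):
                    # The final body message marks the end of streaming.
                    DURATION.observe(time.perf_counter() - start)
                await send(message)

            await self.app(scope, receive, wrapped_send)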

Output example:

    # HELP http_request_response_start_duration_seconds Duration of HTTP requests in seconds
    # TYPE http_request_response_start_duration_seconds histogram
    http_request_response_start_duration_seconds_bucket{handler="/v2/code/generations",le="0.5",method="POST",status="2xx"} 0.0
    http_request_response_start_duration_seconds_bucket{handler="/v2/code/generations",le="1.0",method="POST",status="2xx"} 1.0
    http_request_response_start_duration_seconds_bucket{handler="/v2/code/generations",le="2.5",method="POST",status="2xx"} 1.0
    http_request_response_start_duration_seconds_bucket{handler="/v2/code/generations",le="5.0",method="POST",status="2xx"} 1.0
    http_request_response_start_duration_seconds_bucket{handler="/v2/code/generations",le="10.0",method="POST",status="2xx"} 1.0
    http_request_response_start_duration_seconds_bucket{handler="/v2/code/generations",le="30.0",method="POST",status="2xx"} 1.0
    http_request_response_start_duration_seconds_bucket{handler="/v2/code/generations",le="60.0",method="POST",status="2xx"} 1.0
    http_request_response_start_duration_seconds_bucket{handler="/v2/code/generations",le="+Inf",method="POST",status="2xx"} 1.0
    http_request_response_start_duration_seconds_count{handler="/v2/code/generations",method="POST",status="2xx"} 1.0
    http_request_response_start_duration_seconds_sum{handler="/v2/code/generations",method="POST",status="2xx"} 0.6706487989995367
    # HELP http_request_response_start_duration_seconds_created Duration of HTTP requests in seconds
    # TYPE http_request_response_start_duration_seconds_created gauge
    http_request_response_start_duration_seconds_created{handler="/v2/code/generations",method="POST",status="2xx"} 1.7095186511967359e+09
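
For reference, output like the above can be reproduced locally by exposing the metrics endpoint and scraping it with a test client. This is a minimal sketch using the library's documented instrument()/expose() API; the traffic is illustrative:

    from fastapi import FastAPI
    from fastapi.testclient import TestClient
    from prometheus_fastapi_instrumentator import Instrumentator

    app = FastAPI()

    # instrument() registers the default metrics; expose() serves them at /metrics.
    Instrumentator().instrument(app).expose(app)

    client = TestClient(app)
    client.get("/")  # any request generates samples for the default metrics
    print(client.get("/metrics").text)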

Why do we need it?

LLM inference APIs usually stream their HTTP responses to improve the user experience, so users perceive latency as the arrival of the first chunk rather than the last. We want to instrument that duration.
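
To make this concrete, here is a hypothetical streaming endpoint (route and payload invented for illustration): the client starts seeing output after the first yield, but a latency metric that observes at the end of the response only fires once the whole stream has finished.

    import asyncio

    from fastapi import FastAPI
    from fastapi.responses import StreamingResponse

    app = FastAPI()


    @app.post("/v2/code/generations")
    async def generate():
        async def stream():
            # The user perceives latency once this first chunk arrives ...
            yield "def hello():\n"
            # ... while a total-duration metric keeps timing until the stream ends.
            await asyncio.sleep(1)
            yield "    print('hello')\n"

        return StreamingResponse(stream(), media_type="text/plain")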

Who is this for?

GitLab, software developers, and anyone optimizing LLM applications.

Linked issues

Related to https://gitlab.com/gitlab-com/runbooks/-/merge_requests/6928#note_1796949998


dosuken123 commented 7 months ago

PR opened at https://github.com/trallnag/prometheus-fastapi-instrumentator/pull/290