rycus86 / prometheus_flask_exporter

Prometheus exporter for Flask applications
https://pypi.python.org/pypi/prometheus-flask-exporter
MIT License

Lambda functions on metrics page #157

Closed · Evolter closed this issue 1 year ago

Evolter commented 1 year ago

Hi, I'm trying to figure out why lambda functions are showing up as label values, and how to hide them.

When using custom metrics with lambda functions:

example flask app with metrics

```python
from flask import Flask, request
from prometheus_flask_exporter import PrometheusMetrics

app = Flask(__name__)
metrics = PrometheusMetrics(app, export_defaults=False)


@app.route('/')
def index():
    return 'Hello world!'


metrics.register_default(
    metrics.counter(
        'by_path_counter', 'Request count by request paths',
        labels={'path': lambda: request.path}
    ),
    metrics.histogram(
        'requests_by_status_and_path', 'Request latencies by status and path',
        labels={'status': lambda r: r.status_code, 'path': lambda: request.path}
    ),
)

app.run(host='0.0.0.0', port=8000)
```

I expect to see only actual metrics, e.g.:

expected values

```
requests_by_status_and_path_bucket{le="0.005",path="/",status="200"} 1.0
requests_by_status_and_path_bucket{le="0.01",path="/",status="200"} 1.0
requests_by_status_and_path_bucket{le="0.025",path="/",status="200"} 1.0
requests_by_status_and_path_bucket{le="0.05",path="/",status="200"} 1.0
requests_by_status_and_path_bucket{le="0.075",path="/",status="200"} 1.0
requests_by_status_and_path_bucket{le="0.1",path="/",status="200"} 1.0
requests_by_status_and_path_bucket{le="0.25",path="/",status="200"} 1.0
requests_by_status_and_path_bucket{le="0.5",path="/",status="200"} 1.0
requests_by_status_and_path_bucket{le="0.75",path="/",status="200"} 1.0
requests_by_status_and_path_bucket{le="1.0",path="/",status="200"} 1.0
requests_by_status_and_path_bucket{le="2.5",path="/",status="200"} 1.0
requests_by_status_and_path_bucket{le="5.0",path="/",status="200"} 1.0
requests_by_status_and_path_bucket{le="7.5",path="/",status="200"} 1.0
requests_by_status_and_path_bucket{le="10.0",path="/",status="200"} 1.0
requests_by_status_and_path_bucket{le="+Inf",path="/",status="200"} 1.0
```

but instead the metrics come back with lambda function references (`<function <lambda> at ...>`) as label values:

unexpected values

```
# HELP by_path_counter_total Request count by request paths
# TYPE by_path_counter_total counter
by_path_counter_total{path="<function <lambda> at 0x7fe772192cb0>"} 0.0
# HELP by_path_counter_created Request count by request paths
# TYPE by_path_counter_created gauge
by_path_counter_created{path="<function <lambda> at 0x7fe772192cb0>"} 1.6825010361914413e+09
# HELP requests_by_status_and_path Request latencies by status and path
# TYPE requests_by_status_and_path histogram
requests_by_status_and_path_bucket{le="0.005",path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_bucket{le="0.01",path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_bucket{le="0.025",path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_bucket{le="0.05",path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_bucket{le="0.075",path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_bucket{le="0.1",path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_bucket{le="0.25",path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_bucket{le="0.5",path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_bucket{le="0.75",path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_bucket{le="1.0",path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_bucket{le="2.5",path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_bucket{le="5.0",path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_bucket{le="7.5",path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_bucket{le="10.0",path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_bucket{le="+Inf",path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_count{path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
requests_by_status_and_path_sum{path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 0.0
# HELP requests_by_status_and_path_created Request latencies by status and path
# TYPE requests_by_status_and_path_created gauge
requests_by_status_and_path_created{path="<function <lambda> at 0x7fe7721937f0>",status="<function <lambda> at 0x7fe772193760>"} 1.6825010361915133e+09
```

Is this expected behaviour? Is there an easy way to hide it that I missed?

rycus86 commented 1 year ago

Hm, that's certainly unexpected, but I can't immediately see what the problem is. Thanks for the example, I'll try to have a look later this week or over the weekend to figure out what's going on.

Evolter commented 1 year ago

It looks like this was introduced in 0.21.0; before that (e.g. in 0.20.3) the lambdas are not present in the metrics, as expected.

rycus86 commented 1 year ago

Thanks for checking! That version had this notable change: https://github.com/rycus86/prometheus_flask_exporter/pull/145 That PR changed some things around label initialization, so it could be related. It also looks like it introduced a flag for labels; have you tried that by any chance? I still think it should work by default, and I'll have a look later this week when I get a chance.

Evolter commented 1 year ago

Thank you! Adding `initial_value_when_only_static_labels=False` to each metric did help (see the sketch below). Though, if possible, it would be more intuitive if it didn't try to set an initial value for lambda labels at all, and/or used a value like 0 instead.
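For reference, a minimal sketch of that workaround applied to the example app above; only the metric registration changes, and the flag is the one added in the PR linked earlier:

```python
metrics.register_default(
    metrics.counter(
        'by_path_counter', 'Request count by request paths',
        labels={'path': lambda: request.path},
        # workaround: skip pre-initializing the series, so the lambda
        # labels are only evaluated at request time
        initial_value_when_only_static_labels=False
    ),
    metrics.histogram(
        'requests_by_status_and_path', 'Request latencies by status and path',
        labels={'status': lambda r: r.status_code, 'path': lambda: request.path},
        initial_value_when_only_static_labels=False
    ),
)
```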

rycus86 commented 1 year ago

I think this change will fix it: https://github.com/rycus86/prometheus_flask_exporter/commit/bc5b4b2e2aa355ca5dcfb3c8c01684c0a03e9d48 I'll release it as version 0.22.4 in a bit.
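Once that's released, a rough smoke test to confirm the fix against the example app above (assuming the app is running locally on port 8000 and the `requests` package is installed):

```python
import requests

# scrape the metrics endpoint the exporter registers on the app
body = requests.get('http://localhost:8000/metrics').text

# affected versions leak the lambda repr into the label values
assert '<lambda>' not in body, 'lambda reprs still present in label values'
print('metrics look clean')
```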