maksim-paskal opened this issue 1 year ago
Hi @maksim-paskal, thanks for raising this issue.
Any chance you could export the timeseries that generated these results and share them here (assuming there is nothing confidential) so I can reproduce the issue on my machine?
Then, can you also confirm that the `filter_good` query has a double quote after `envoy_cluster_name="y`, just before `,kubernetes_namespace="z"`? Otherwise I am afraid the query might not give the expected results.
Finally, I am a bit surprised by the values returned by Prometheus. An SLI is usually computed by dividing the number of good events by the number of valid (= good + bad) events. These two numbers are usually integers. Here the logs show floating-point values. I am not a Prometheus expert, but is it possible a `count` or `sum` is missing?
@lvaylet, thanks for the quick response.
Sorry for the typo in `filter_good`, I changed it in the issue description. This is the real PromQL; it works sometimes, and sometimes returns this error.
My example data from Prometheus (we are actually using Thanos Query v0.26.0):
Query:
envoy_cluster_external_upstream_rq{app="x",envoy_cluster_name="y",kubernetes_namespace="z"}[1m]
Returns:
envoy_cluster_external_upstream_rq{app="x", envoy_cluster_name="y",kubernetes_namespace="z"}
253117 @1674638934.622
253125 @1674638940.809
253127 @1674638955.809
253162 @1674638970.809
253197 @1674638985.809
Query:
increase(envoy_cluster_external_upstream_rq{app="x",envoy_cluster_name="y",kubernetes_namespace="z"}[1m])
Returns:
195.75282786645047
It is sometimes an int, sometimes a float in different windows. `increase` and `rate` in PromQL calculate per-second metrics, so I think that may be why it is a float.
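For reference, this is expected from PromQL itself: `increase()` estimates the counter increase over the exact window boundaries by extrapolating the raw delta between the first and last sample, and `rate()` is that same estimate divided by the window length, so both routinely return non-integer values even for counters that only ever increment by whole requests. A minimal illustration reusing the query above:

```promql
# The raw samples above give an integer delta over ~51s of the window:
#   253197 - 253117 = 80
# increase() extrapolates that delta to cover the full 1m window, so the
# result can be a float even though the counter only counts whole requests:
increase(envoy_cluster_external_upstream_rq{app="x",envoy_cluster_name="y",kubernetes_namespace="z"}[1m])

# rate() is the same extrapolated estimate divided by the 60s window:
rate(envoy_cluster_external_upstream_rq{app="x",envoy_cluster_name="y",kubernetes_namespace="z"}[1m])
```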
Thanks @maksim-paskal. I need to investigate.
For the record, what type of SLI are you computing here? Availability? Also, what does `envoy_cluster_external_upstream_rq` represent? Requests with response class or response code, as hinted in this documentation?
`envoy_cluster_external_upstream_rq` is an upstream counter of specific HTTP response codes (e.g., 201, 302, etc.). We plan to use it to calculate the availability of our service. You can simulate this environment with these files; you need Docker:
# run prometheus and envoy
docker-compose up
# generate some records, for example with https://github.com/tsenart/vegeta
echo "GET http://localhost:10000/ready" | vegeta attack -duration=60s -output=/dev/null
Then open Prometheus at http://127.0.0.1:9090 and run:
increase(envoy_cluster_external_upstream_rq{envoy_response_code="200"}[1d])
I think the issue is that if a new data point is added to Prometheus's TSDB between the good and the valid query, you get an offset between them, leading to this kind of behaviour. You would need to send a finite timeframe to Prometheus and make sure that this timeframe is further from "now" than the scrape interval of these metrics.
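As a rough sketch of that idea (not the project's implementation, just an approximation from plain PromQL): the `offset` modifier can push the evaluation window back from "now" by more than the scrape interval, so that both the good and the valid query only see samples that were already fully ingested when either query ran. The 1m offset below is an assumption; pick something larger than your actual scrape interval:

```promql
# Sketch: evaluate the window [now-2m, now-1m] instead of [now-1m, now],
# so a sample scraped between the good and the valid query cannot land
# inside one window but not the other.
increase(envoy_cluster_external_upstream_rq{app="x",envoy_cluster_name="y",kubernetes_namespace="z"}[1m] offset 1m)
```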
The only alternative is to make Prometheus perform the division and only query an SLI from it, to ensure consistency (which may require development, depending on the backend's current implementation). The downside is that you cannot export good and bad event metrics anymore.
In my opinion, this issue is probably similar to #343 (although with different backends)
A workaround could be to use good/bad instead of good/valid.
@maksim-paskal I just discussed the issue with @bkamin29 and @mveroone.
We are pretty sure this behavior is caused by the tiny delay between the two requests (one for `good` and another one for `valid`). They are called and executed a few milliseconds apart, resulting in the same window length but slightly different start/end times. As a consequence, as we are looking at two different time horizons, the most recent one might have a slightly different number of good/valid events. Also note that the backend itself can be busy ingesting and persisting "old" data points between the two calls, and account for more data points during the second call.
Two options to mitigate this behavior:
- Use the `query_sli` method instead of the `good_bad_ratio` one, and delegate the computation of the good/valid ratio to Prometheus itself (see the sketch below). That would result in a single request and a single call. However, with this approach, you give up on the ability to export the number of good and bad events.
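As an illustration of that single-query approach, here is a hedged sketch of the kind of expression that could be delegated to Prometheus, assuming 2xx/3xx response codes count as good events (adjust the `envoy_response_code` matcher and the window to whatever "good" and the SLO period mean for your service):

```promql
# Sketch: compute the SLI as one ratio inside Prometheus, so good and valid
# events are counted over exactly the same window in a single evaluation.
sum(increase(envoy_cluster_external_upstream_rq{app="x", envoy_cluster_name="y", kubernetes_namespace="z", envoy_response_code=~"2..|3.."}[1h]))
/
sum(increase(envoy_cluster_external_upstream_rq{app="x", envoy_cluster_name="y", kubernetes_namespace="z"}[1h]))
```

With the division done server-side, both counts come from the same evaluation, so the race between two separate requests disappears, at the cost of no longer being able to export the raw good/bad counts.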
SLO Generator Version
v2.3.3
Python Version
3.9.13
What happened?
I am using `ServiceLevelObjective` in `sre.google.com/v2` with this spec. The calculation ends with:
SLI is not between 0 and 1 (value = 1.000091)
With `DEBUG=1`, it seems that for a 100% SLI, Prometheus sometimes returns `filter_good > filter_valid` (see the DEBUG logs).
What did you expect?
The calculation of the SLI should return 1.