Open tizki opened 4 months ago
Is it the same if you specify the timestamp? How could we reproduce this?
FWIW: since subqueries (even with the same step) are a very different construct, it is not unimaginable that they return different results. As for the dedup, I don't know what's happening here.
@GiedriusS I'm using a specific timestamp. For me it reproduces every time in my environment. It's a counter metric that comes from 2 Prometheus replicas.
@MichaHoffmann I tried to debug it a bit. I suspect it's somewhere in how the dedup chooses which timestamps to use from the samples, but I got a little lost there. If you can point me to where to look, I can continue debugging (or gather some relevant info to post here).
Found something interesting. If I query
my_metric{_label_0="service",_label_1="requests",_label_2="timer", server="server00102"}[5m]
without deduplication at a specific timestamp, I get the following:
server1:
2583033 @1714665427.092
2588869 @1714665487.092
2594666 @1714665547.092
2600461 @1714665607.092
2606298 @1714665667.092
server2:
2583033 @1714665450.205
2588869 @1714665510.205
2594666 @1714665570.205
2600461 @1714665630.205
2606298 @1714665690.205
but when I do the same with deduplication, I get one more sample; the interval between the first and second samples is much smaller (23 seconds) than the rest (60 seconds), and the diff in values is 0, since the first and second samples have the same value.
2583033 @1714665427.092
2583033 @1714665450.205 (23 seconds interval from previous, same value as previous)
2588869 @1714665510.205
2594666 @1714665570.205
2600461 @1714665630.205
2606298 @1714665690.205
It seems it takes the first sample from server1 and then the rest from server2. Now it's clear why adding a step solves it (it removes the duplicated sample).
any idea?
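To see how that duplicated sample skews the result, here is a minimal Go sketch (not Thanos code; `sample` and `simpleRate` are made-up names for illustration) that computes a naive per-second rate over the two outputs above, ignoring PromQL's window extrapolation:

```go
package main

import "fmt"

// sample and simpleRate are made up for this sketch; this is not Thanos code.
type sample struct {
	ts int64 // milliseconds
	v  float64
}

// simpleRate computes a naive per-second rate (last minus first value over
// last minus first timestamp), ignoring PromQL's window extrapolation.
func simpleRate(s []sample) float64 {
	first, last := s[0], s[len(s)-1]
	return (last.v - first.v) / (float64(last.ts-first.ts) / 1000.0)
}

// Samples from server1 only (dedup disabled).
var clean = []sample{
	{1714665427092, 2583033},
	{1714665487092, 2588869},
	{1714665547092, 2594666},
	{1714665607092, 2600461},
	{1714665667092, 2606298},
}

// Deduplicated output observed above: server1's first sample plus all of
// server2's samples, so the first value appears twice.
var deduped = []sample{
	{1714665427092, 2583033},
	{1714665450205, 2583033},
	{1714665510205, 2588869},
	{1714665570205, 2594666},
	{1714665630205, 2600461},
	{1714665690205, 2606298},
}

func main() {
	fmt.Printf("%.4f\n", simpleRate(clean))   // 96.9375 increases/s
	fmt.Printf("%.4f\n", simpleRate(deduped)) // ~88.42 increases/s: same total increase spread over 23s more
}
```

The total increase is identical in both cases, but the duplicate stretches the time range by 23 seconds, which lowers the computed rate.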
I managed to reproduce this in a unit test by adding the following test case to TestDedupSeriesSet in dedup/iter_test.go:
{
	name:      "My Regression test",
	isCounter: true,
	input: []series{
		{
			lset: labels.Labels{{Name: "a", Value: "1"}},
			samples: []sample{
				{1714665427092, 2583033},
				{1714665487092, 2588869},
				{1714665547092, 2594666},
				{1714665607092, 2600461},
				{1714665667092, 2606298},
			},
		}, {
			lset: labels.Labels{{Name: "a", Value: "1"}},
			samples: []sample{
				{1714665450205, 2583033},
				{1714665510205, 2588869},
				{1714665570205, 2594666},
				{1714665630205, 2600461},
				{1714665690205, 2606298},
			},
		},
	},
	exp: []series{
		{
			lset:    labels.Labels{{Name: "a", Value: "1"}},
			samples: []sample{{1714665427092, 2583033}, {1714665487092, 2588869}, {1714665547092, 2594666}, {1714665607092, 2600461}, {1714665667092, 2606298}},
		},
	},
},
The first sample doesn't get deduplicated, and it takes the first samples from both sets. I noticed that in dedup/iter.go, Next() uses an initial penalty for the first sample; the value of that penalty is 5000. If I increase that value to 23113, my test passes; anything lower and the test fails.
I'm not sure if this is the expected behavior.
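Here is a minimal sketch of why 23113 is the tipping point. It models only the selection of the second emitted sample, assuming the behavior described in this thread: after emitting the sample at lastT, both replica iterators are advanced to the first sample with ts >= lastT+1+penalty, and the earlier surviving sample is emitted next. This is not the actual dedup/iter.go code; `secondTS` and `seek` are made-up names.

```go
package main

import "fmt"

// secondTS models which timestamp gets emitted second: after emitting the
// sample at lastT, both replica iterators are advanced to the first sample
// with ts >= lastT+1+penalty, and the earlier surviving sample wins.
// This is a simplified model, not the actual dedup/iter.go code.
func secondTS(penalty int64) int64 {
	replica1 := []int64{1714665427092, 1714665487092, 1714665547092, 1714665607092, 1714665667092}
	replica2 := []int64{1714665450205, 1714665510205, 1714665570205, 1714665630205, 1714665690205}

	// seek returns the first timestamp >= min, or max int64 if exhausted.
	seek := func(ts []int64, min int64) int64 {
		for _, t := range ts {
			if t >= min {
				return t
			}
		}
		return 1<<63 - 1
	}

	lastT := replica1[0] // the first emitted sample came from replica 1
	a := seek(replica1, lastT+1+penalty)
	b := seek(replica2, lastT+1+penalty)
	if a <= b {
		return a
	}
	return b
}

func main() {
	// The replicas are offset by 23113ms (1714665450205 - 1714665427092).
	fmt.Println(secondTS(5000))  // replica 2's duplicate at 1714665450205 survives the 5s penalty
	fmt.Println(secondTS(23113)) // the seek threshold now passes the duplicate: 1714665487092
}
```

Under this model, any penalty below 23113 leaves the duplicate in range of the seek, which matches the observation that the test fails for anything lower.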
Thank you for the test and the debug work! I'll look into this over the weekend.
So, yeah; the scrape interval is large enough that the dedup algorithm thinks that the second sample of the first iterator is actually missing and that we need to fill from the second iterator from now on. This might be a bit hard to solve since we (right now) don't know the proper scrape interval a priori. I think we could maybe add a configurable flag like --deduplication.penalty-scrape-interval-hint=30s to prime the deduplication algorithm that an initial gap of 30s does not constitute a missing sample, so we can stay with the first iterator.
@GiedriusS @fpetkovski @yeya24 how does that sound?
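A sketch of how the proposed flag might be used; note that --deduplication.penalty-scrape-interval-hint does not exist yet, and the endpoint addresses are placeholders:

```shell
# Hypothetical invocation: prime the dedup penalty with the known scrape interval.
thanos query \
  --endpoint=prometheus-1:10901 \
  --endpoint=prometheus-2:10901 \
  --query.replica-label=fqdn \
  --deduplication.penalty-scrape-interval-hint=30s
```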
Perhaps I'm mixing things up, but shouldn't --query.default-step affect that penalty? Also, I might be able to help with a PR once there is a design for the fix.
Hi. This issue causes us to return wrong values, which is very dangerous in our case. It puts our entire Thanos-based infrastructure at risk, since we can't rely on the results and act on them (we have automatic mechanisms that react to the results). The only workaround we're aware of is always passing the step in range functions, but that's not optimal for range queries. Can someone advise?
Hi @MichaHoffmann @GiedriusS @fpetkovski @yeya24, I added a PR with a suggested fix. Can you please check and let me know what you think? Thanks!
Thanos, Prometheus and Golang version used: Thanos Query 0.35.0, Thanos Sidecar 0.35.0, Prometheus 2.45.0
What happened: when calculating the rate for a metric, there are cases where the result is different depending on whether dedup is enabled or disabled. In addition, the same difference appears when using a step ([5m:1m]) versus not using one ([5m]), although the default step is the same.
What you expected to happen: I expect the rate result to be the same (or very close).
How to reproduce it (as minimally and precisely as possible): My environment has 2 Prometheus servers which are replicas of each other (scraping the same targets). Sidecar external labels: fqdn=, monitor="master", tenant_id="default-tenant"
I start Thanos Query with those 2 servers as stores, with replica-label=fqdn.
Then, querying the rate in Thanos at a specific time:
rate(my_metric{_label_0="service",_label_1="requests",_label_2="timer", server="server00102"}[5m])
using dedup and without dedup, the results are completely different. The results are also different when querying with a step (same results as without dedup):
rate(my_metric{_label_0="service",_label_1="requests",_label_2="timer", server="server00102"}[5m:1m])