thanos-io / thanos

Highly available Prometheus setup with long term storage capabilities. A CNCF Incubating project.
https://thanos.io
Apache License 2.0

Thanos Query: gaps in deduplicated data #7656

Open ppietka-bp opened 3 months ago

ppietka-bp commented 3 months ago

Thanos, Prometheus and Golang version used:

Thanos: version 0.35.1 (branch: HEAD, revision: 086a698b2195adb6f3463ebbd032e780f39d2050), build user: root@be0f036fd8fa, build date: 20240528-13:54:20, go version: go1.21.10, platform: linux/amd64, tags: netgo

Prometheus: version 2.32.1 (branch: HEAD, revision: 41f1a8125e664985dd30674e5bdf6b683eff5d32), build user: root@54b6dbd48b97, build date: 20211217-22:08:06, go version: go1.17.5, platform: linux/amd64

Object Storage Provider: Ceph

What happened: Thanos Query shows gaps in deduplicated data. Two instances of Prometheus scrape data from the sources, along with other federated Prometheus instances on OpenShift. As long as we query the data without deduplication, the data is continuous.

Anything else we need to know: Screenshots attached (prometheus2, prometheus1, thanos, thanos_dedup).

What you expected to happen: Deduplication should properly combine the datasets.

ppietka-bp commented 3 months ago

Results of the investigation: deduplication only works one way. If we deduplicate metrics according to a label, e.g. replica, which takes the values 0 or 1, missing data with label replica="0" is filled in by data with replica="1", but data with label replica="1" is not filled in by data with label replica="0".

In our opinion, deduplication should work both ways and assemble the data according to the replica label, so that the metric shows one continuous set of data.

MichaHoffmann commented 3 months ago

Deduplicating time series data is surprisingly hard! I have no great idea how to do it properly. The approach that Thanos takes at query time is roughly that we start with some replica and then, if the gap gets too large, we switch over. But this has had numerous edge cases in the past. I wonder how we could improve it.
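For context, here is a minimal, self-contained sketch of the idea described above. This is not the actual Thanos iterator: the two-stream restriction, the fixed maxGap threshold and the lack of a switch-back penalty are all simplifications.

```go
package main

import "fmt"

// Sample is a simplified (timestamp, value) pair.
type Sample struct {
	T int64 // milliseconds
	V float64
}

// dedup merges two replica streams, preferring the currently active replica
// and only switching when its next sample is more than maxGap away while the
// other replica has a closer sample. This is a toy version of "start with
// some replica, switch if the gap gets too large".
func dedup(a, b []Sample, maxGap int64) []Sample {
	var out []Sample
	i, j := 0, 0
	useA := true // start with replica a
	lastT := int64(-1)

	for i < len(a) || j < len(b) {
		// Skip samples at or before the last emitted timestamp.
		for i < len(a) && a[i].T <= lastT {
			i++
		}
		for j < len(b) && b[j].T <= lastT {
			j++
		}
		if i >= len(a) && j >= len(b) {
			break
		}

		switch {
		case i >= len(a):
			useA = false
		case j >= len(b):
			useA = true
		case lastT >= 0:
			// Switch replicas only if the active one has a gap larger than
			// maxGap and the other one has an earlier sample available.
			if useA && a[i].T-lastT > maxGap && b[j].T < a[i].T {
				useA = false
			} else if !useA && b[j].T-lastT > maxGap && a[i].T < b[j].T {
				useA = true
			}
		}

		if useA {
			out = append(out, a[i])
			lastT = a[i].T
			i++
		} else {
			out = append(out, b[j])
			lastT = b[j].T
			j++
		}
	}
	return out
}

func main() {
	// replica "0" misses t=30, replica "1" misses t=10.
	r0 := []Sample{{10, 1}, {20, 1}, {40, 1}}
	r1 := []Sample{{20, 2}, {30, 2}, {40, 2}}
	fmt.Println(dedup(r0, r1, 15)) // [{10 1} {20 1} {30 2} {40 2}]
}
```

Run on two replicas that each have a different gap, this toy version produces one continuous series, which is the behaviour the reporter expects; the edge cases mentioned above come from deciding when switching is safe and when to switch back.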

ppietka-bp commented 2 months ago

Thanks for your reply. I'm looking forward to seeing this agonizing problem solved.

MichaHoffmann commented 2 months ago

Yeah I'm happy to brainstorm about this if you have an idea!

ppietka-bp commented 2 months ago

For the moment, I still have no idea where to start or what guidelines we should adopt, except, of course, that there should be one consistent data set after deduplication. I wonder whether applying the Compactor's deduplication algorithm, i.e. --deduplication.func=penalty, to the Querier would solve the problem. Of course, only if that's not the cause itself.
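For reference, a rough sketch of where these options live, as I understand the flags from the Thanos documentation (values are illustrative, and other required flags are omitted):

```
# Query-time deduplication in the Querier, driven by the replica label:
thanos query \
  --query.replica-label=replica \
  ...

# Offline vertical deduplication in the Compactor using the penalty function:
thanos compact \
  --deduplication.replica-label=replica \
  --deduplication.func=penalty \
  ...
```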

MichaHoffmann commented 2 months ago

Penalty is the same algorithm that the querier uses though.

lachruzam commented 2 months ago

@MichaHoffmann Wouldn't putting a configurable upper bound on the penalty solve this issue (or at least allow fixing it by configuration)?

MichaHoffmann commented 2 months ago

> @MichaHoffmann Wouldn't putting a configurable upper bound on the penalty solve this issue (or at least allow fixing it by configuration)?

In the sense that we always switch replica if the gap is at least this configured size?
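To make that concrete, here is a hypothetical sketch of what a capped penalty could look like. The gap-doubling rule is a simplification of how the penalty grows, and maxPenalty is an invented knob, not an existing Thanos flag.

```go
package main

import "fmt"

// nextPenalty computes the penalty (in ms) that must elapse before switching
// back to the replica we just left, after observing a gap of `gap` ms.
// Doubling the gap is a simplified stand-in for how the penalty grows;
// maxPenalty is a hypothetical, configurable upper bound.
func nextPenalty(gap, maxPenalty int64) int64 {
	p := 2 * gap
	if maxPenalty > 0 && p > maxPenalty {
		// With a cap, a single very large gap can no longer block switching
		// replicas for an unbounded amount of time.
		p = maxPenalty
	}
	return p
}

func main() {
	fmt.Println(nextPenalty(30_000, 0))      // uncapped: 60000
	fmt.Println(nextPenalty(30_000, 10_000)) // capped:   10000
}
```

Capping the penalty would bound how long a single large gap can prevent falling back to the other replica, which is roughly the configurable behaviour being proposed.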

lachruzam commented 1 month ago

Sorry, I missed the response. We've run into a similar issue. Looking at the penalty algorithm, I didn't find any special treatment of the second replica. However, the penalty for switching back to the first replica is 2x bigger, and there is no upper bound on it. Hence the question. Having an upper bound would allow us to: