thanos-io / thanos

Highly available Prometheus setup with long term storage capabilities. A CNCF Incubating project.
https://thanos.io

thanos-query: deduplication picks up time-series with missing data #981

Open Hashfyre opened 5 years ago

Hashfyre commented 5 years ago

Thanos, Prometheus and Golang version used:
thanos: v0.3.1
prometheus: v2.5.0
kubernetes: v1.12.6
Kubernetes Distro: KOPS
weave: weaveworks/weave-kube:2.5.0
Cloud Platform: AWS
EC2 Instance Type: R5.4XL

Architecture


      G1                               G2
      |                                |
      |                                |
      TQ1                              TQ2
      |                                |
 --------------                        |
 |------------|-------------------------                 
 |            |                        |
TSC1        TSC2                       TS
 |            |
P1           P2

G1: Grafana realtime
G2: Grafana Historical
TQ1: Thanos Query realtime (15d retention)
TQ2: Thanos Query historical
TSC: Thanos Sidecars
TS: Thanos store

Each sidecar and the store are fronted by a service with a *.svc.cluster.local DNS name, to which the --store flag points.

G2, TQ2 are not involved in this RCA.

What happened

Event timeline:

[Screenshot: 2019-03-25, 7:17:00 PM]

[Screenshots: 2019-03-24, 6:37:17 PM and 6:37:26 PM]

We can clearly see that one Prometheus has the data and the other is missing it.

[Screenshot: 2019-03-26, 10:48:26 PM]

What you expected to happen

We expected Thanos deduplication to trust the series with contiguous data over the one with missing data, and to produce a series with contiguous data. Missing a scrape in an HA Prometheus environment is expected at times; if one of the Prometheus replicas has the data, the final output should not show missing data.
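
To make the expectation concrete, here is a minimal, self-contained Go sketch. It is not Thanos code; the Sample type and the mergeReplicas helper are invented for illustration only. It shows the behaviour described above: a scrape missed by one replica is filled from the other, so the deduplicated series stays contiguous.

```go
// Minimal sketch (not Thanos code): Sample and mergeReplicas are invented
// to illustrate the expected deduplication behaviour only.
package main

import "fmt"

// Sample is a single scraped value at a millisecond timestamp.
type Sample struct {
	T int64
	V float64
}

// mergeReplicas unions two replica series by timestamp, taking each sample
// from whichever replica has it, so a scrape missed by one replica does not
// become a gap in the deduplicated output.
func mergeReplicas(a, b []Sample) []Sample {
	var out []Sample
	i, j := 0, 0
	for i < len(a) || j < len(b) {
		switch {
		case j >= len(b) || (i < len(a) && a[i].T < b[j].T):
			out = append(out, a[i])
			i++
		case i >= len(a) || b[j].T < a[i].T:
			out = append(out, b[j])
			j++
		default: // both replicas have this timestamp: keep one copy
			out = append(out, a[i])
			i++
			j++
		}
	}
	return out
}

func main() {
	p1 := []Sample{{0, 1}, {15, 1}, {30, 1}, {45, 1}} // contiguous replica
	p2 := []Sample{{0, 1}, {30, 1}, {45, 1}}          // missed the scrape at t=15
	// Expected: a contiguous series with samples at 0, 15, 30 and 45.
	fmt.Println(mergeReplicas(p1, p2))
}
```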

How to reproduce it (as minimally and precisely as possible):

Environment: Underlying K8S Worker Node:

bwplotka commented 5 years ago

Hi, thanks for the report :wave:

Yea, I think this is essentially some edge case for our penalty algorithm. The code is here: https://github.com/improbable-eng/thanos/blob/master/pkg/query/iter.go#L416

The problem is that this case is pretty rare (e.g. we cannot reproduce it). I would say adding more unit tests would be nice and would help narrow down what's wrong. Help wanted (:
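
For illustration, the sketch below shows a simplified, penalty-style switching strategy in plain Go. It is not the actual implementation in pkg/query/iter.go; Sample, dedupWithPenalty and the penalty values are invented for this example. The idea it tries to capture: the iterator sticks to the replica it is currently reading and only switches when that replica falls behind the other by more than a penalty, which is how a gap can survive deduplication when the switch does not happen.

```go
// Rough sketch of a penalty-style switching strategy, in plain Go. This is
// NOT the Thanos implementation from pkg/query/iter.go: Sample,
// dedupWithPenalty and the penalty values are invented for illustration.
package main

import "fmt"

type Sample struct {
	T int64
	V float64
}

// dedupWithPenalty emits samples from two replicas. It sticks to the replica
// it is currently reading ("use") and only switches to the other replica when
// "use" falls behind it by more than the penalty.
func dedupWithPenalty(a, b []Sample, penalty int64) []Sample {
	use, other := a, b
	var out []Sample
	i, j := 0, 0 // i indexes use, j indexes other
	for i < len(use) || j < len(other) {
		switch {
		case i >= len(use):
			out = append(out, other[j])
			j++
		case j >= len(other):
			out = append(out, use[i])
			i++
		case other[j].T+penalty < use[i].T:
			// The active replica has a gap larger than the penalty:
			// switch replicas and continue from the other one.
			use, other = other, use
			i, j = j, i
			out = append(out, use[i])
			i++
		default:
			out = append(out, use[i])
			i++
			// Skip samples the other replica has at or before this timestamp.
			for j < len(other) && other[j].T <= use[i-1].T {
				j++
			}
		}
	}
	return out
}

func main() {
	a := []Sample{{0, 1}, {15, 1}, {60, 1}, {75, 1}}                   // 45s gap between 15 and 60
	b := []Sample{{0, 1}, {15, 1}, {30, 1}, {45, 1}, {60, 1}, {75, 1}} // contiguous
	// With penalty = 30 the gap on replica a survives in the output;
	// with penalty = 15 the iterator switches to b and the gap is filled.
	fmt.Println(dedupWithPenalty(a, b, 30))
	fmt.Println(dedupWithPenalty(a, b, 15))
}
```

In this toy version, whether the gap is filled depends entirely on how the penalty compares to the gap, so unit tests around exactly that boundary are the kind of thing that would help pin down the edge case.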

MacroPower commented 4 years ago

I am having this same issue. I can actually reproduce it by having a couple of Prometheus instances scrape the same target and then rebooting (recreating the pod, in my case) a single node; it will miss one or two scrapes. You'll then start to see gaps in the data if Thanos happens to query the node that was rebooted.

stale[bot] commented 4 years ago

This issue/PR has been automatically marked as stale because it has not had recent activity. Please comment on status otherwise the issue will be closed in a week. Thank you for your contributions.

brancz commented 4 years ago

@bwplotka if we had a data dump from one of these, we should be able to extract the time series with the raw data that causes this, no? In that case, if someone could share a data dump like that, it would help us a lot. If you feel it's confidential data, I think we'd also be open to accepting the data privately and extracting the time series ourselves. That is, if you trust us of course :)

bwplotka commented 4 years ago

Yes! We only care about the samples as well, so you can mask the series for privacy reasons if you want! :+1: (:

stale[bot] commented 4 years ago

This issue/PR has been automatically marked as stale because it has not had recent activity. Please comment on status otherwise the issue will be closed in a week. Thank you for your contributions.

bwplotka commented 4 years ago

Looks like this is the last standing deduplication characteristic we could improve. I would not necessarily call it a bug; it is just not responsive enough by design. I plan to adjust it in the near future.

Looks like this is also the only remaining bug standing in the way of offline compaction working!

sepich commented 4 years ago

We have the same issue with v0.13.0-rc.1. Here the target has been unavailable from 4:30 to 7:00, and that gap is expected. But we also see gaps from 10:00 until now, even though the data actually exists; here I'm changing the zoom from 12h to 6h: [screenshot]

Then back to the 12h zoom, but this time with deduplication turned off (it is --query.replica-label=replica on the querier side): [screenshot]

I've tried changing different query params (resolution, partial response, etc.), but only deduplication combined with a time range that contains the initial gap leads to this result. So it seems that having a stale metric in the time range leads to gaps on each replica-label change. Here is the same 6h window, moved to the time of the initial gap: [screenshot]

And you can see that the gap after 10:00 appears on the 6h window too.

stale[bot] commented 4 years ago

Hello 👋 Looks like there was no activity on this issue for last 30 days. Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗 If there will be no activity for next week, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use remind command if you wish to be reminded at some point in future.

stale[bot] commented 4 years ago

Closing for now as promised, let us know if you need this to be reopened! 🤗

omron93 commented 3 years ago

> Looks like this is the last standing deduplication characteristic we could improve. I would not necessarily call it a bug; it is just not responsive enough by design. I plan to adjust it in the near future.

@bwplotka Was this already done? Or is there a config change to work around this issue? We see the same issue with Thanos 0.18.0.

omron93 commented 3 years ago

@onprem @kakkoyun Is there a way to reopen this issue, or is it better to create a new one?

kakkoyun commented 3 years ago

Hello 👋 Could you please try out a newer version of Thanos to see if it's still valid? Of course we could reopen this issue.

omron93 commented 3 years ago

@kakkoyun I've installed 0.21.1 and we're still seeing the same behaviour.

malejpavouk commented 3 years ago

We see the same behavior. We have two Prometheus instances scraping the same targets, and it seems like only one instance is taken into account while the other one is completely ignored (so dedup(A, B) == A).

thanos:v0.21.1
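
As a sketch of the unit tests requested earlier in this thread, the table-driven test below encodes two of the reported cases: the dedup(A, B) == A collapse described just above, and a restarted replica that missed a couple of scrapes. naiveUnionDedup is a made-up stand-in so the example is self-contained and passing; real tests would exercise the dedup iterator in the Thanos query packages instead.

```go
// Sketch of table-driven test cases for the scenarios reported in this issue.
// naiveUnionDedup is a made-up reference merge so the example is
// self-contained; real tests would target the Thanos dedup iterator instead.
package dedup

import (
	"sort"
	"testing"
)

type sample struct {
	t int64
	v float64
}

// naiveUnionDedup merges replicas by taking one sample per distinct timestamp.
func naiveUnionDedup(replicas ...[]sample) []sample {
	byTS := map[int64]float64{}
	for _, r := range replicas {
		for _, s := range r {
			if _, ok := byTS[s.t]; !ok {
				byTS[s.t] = s.v
			}
		}
	}
	out := make([]sample, 0, len(byTS))
	for t, v := range byTS {
		out = append(out, sample{t, v})
	}
	sort.Slice(out, func(i, j int) bool { return out[i].t < out[j].t })
	return out
}

func TestDedupFillsGapsFromHealthyReplica(t *testing.T) {
	for _, tc := range []struct {
		name string
		a, b []sample
		want int // expected number of distinct timestamps in the result
	}{
		{
			// The dedup(A, B) == A report: replica B must not be ignored.
			name: "replica A has a gap that replica B covers",
			a:    []sample{{0, 1}, {15, 1}, {60, 1}},
			b:    []sample{{0, 1}, {15, 1}, {30, 1}, {45, 1}, {60, 1}},
			want: 5,
		},
		{
			// The restart repro: one replica missed a couple of scrapes.
			name: "replica B missed two scrapes after a restart",
			a:    []sample{{0, 1}, {15, 1}, {30, 1}, {45, 1}},
			b:    []sample{{0, 1}, {45, 1}},
			want: 4,
		},
	} {
		t.Run(tc.name, func(t *testing.T) {
			got := naiveUnionDedup(tc.a, tc.b)
			if len(got) != tc.want {
				t.Fatalf("got %d samples, want %d: gaps should be filled from the other replica", len(got), tc.want)
			}
		})
	}
}
```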

stale[bot] commented 3 years ago

Hello 👋 Looks like there was no activity on this issue for the last two months. Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗 If there will be no activity in the next two weeks, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use remind command if you wish to be reminded at some point in future.

malejpavouk commented 3 years ago

/notstale

stale[bot] commented 2 years ago

Hello 👋 Looks like there was no activity on this issue for the last two months. Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗 If there will be no activity in the next two weeks, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use remind command if you wish to be reminded at some point in future.

omron93 commented 2 years ago

/notstale

stale[bot] commented 2 years ago

Hello 👋 Looks like there was no activity on this issue for the last two months. Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗 If there will be no activity in the next two weeks, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use remind command if you wish to be reminded at some point in future.

jmichalek132 commented 2 years ago

still relevant

stale[bot] commented 2 years ago

Hello 👋 Looks like there was no activity on this issue for the last two months. Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗 If there will be no activity in the next two weeks, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use remind command if you wish to be reminded at some point in future.

omron93 commented 2 years ago

Still relevant

aarontams commented 2 years ago

Adding myself here to watch this issue.

clalos2592 commented 2 years ago

Adding myself here too.

jamessewell commented 1 year ago

We are seeing this issue as well. Dedup ignores a series which has no breaks in favour of one which does.

Antiarchitect commented 1 year ago

Seems like we've faced this too, on 0.29.0. Thanos Query has multiple sources and selects the Prometheus sidecar with data gaps in recent data. It's very strange that this issue has been open for so long.

caoimheharvey commented 1 year ago

> Seems like we've faced this too, on 0.29.0. Thanos Query has multiple sources and selects the Prometheus sidecar with data gaps in recent data. It's very strange that this issue has been open for so long.

I've also had this issue on the same version. I have been able to verify that all of the metrics are being received correctly, so the issue appears to occur when the data is queried.

saikatg3 commented 7 months ago

Facing a similar issue with missing metrics in v0.32.3. The metrics are remotely written from two Prometheus replica instances, each with a unique external replica label, into the Receiver. The Receiver runs multiple replicas for a high-availability setup. However, with deduplication enabled in Thanos Query, metrics are intermittently missing in Grafana.

[Screenshot: 2024-02-09, 12:16:46 PM]