thanos-io / thanos

Highly available Prometheus setup with long term storage capabilities. A CNCF Incubating project.
https://thanos.io
Apache License 2.0

Ruler not evaluating any rules #4772

Open · jessicalins opened this issue 3 years ago

jessicalins commented 3 years ago

Thanos version used: Thanos v0.23.1, deployed as a sidecar

Object Storage Provider: S3

What happened:

What you expected to happen:

Anything else we need to know: Screenshots that may help debugging the issue:

[Six screenshots attached to the original issue.]

After restarting the pods: [screenshot]

bwplotka commented 3 years ago

This is an epic report - thank you, @jessicalins! A perfect pattern for providing all possible info 🤗

GiedriusS commented 2 years ago

What about ${HTTP_IP}:${HTTP_PORT}/debug/pprof/goroutine?debug=1 of the Rule component when this happens? Could you please upload it? I'm still not sure what happened here.
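For anyone who hasn't captured one of these before, here is a minimal sketch (in Go, though a plain curl against the same URL works just as well) of saving that goroutine dump to a file. The address and port are placeholders to replace with the affected Ruler's ${HTTP_IP}:${HTTP_PORT} (10902 is the usual Thanos default HTTP port, but verify yours).

```go
// Minimal sketch: fetch the goroutine dump from a running Ruler's HTTP port
// and save it to a file. The address below is a placeholder; substitute the
// actual ${HTTP_IP}:${HTTP_PORT} of the affected thanos rule instance.
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	resp, err := http.Get("http://10.0.0.1:10902/debug/pprof/goroutine?debug=1")
	if err != nil {
		log.Fatalf("fetching goroutine profile: %v", err)
	}
	defer resp.Body.Close()

	out, err := os.Create("thanos-rule.pprof.txt")
	if err != nil {
		log.Fatalf("creating output file: %v", err)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		log.Fatalf("writing profile: %v", err)
	}
	log.Println("goroutine dump written to thanos-rule.pprof.txt")
}
```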

bwplotka commented 2 years ago

Yup, too late ): Good point about goroutines - we forgot. Let's capture it next time it happens. We lost all the pprof data 🤗

jleloup commented 2 years ago

I'm seeing the same behaviour on my clusters since the v0.23.1 update.

Rolling back Thanos Ruler to 0.22.0 did the trick for us as a workaround for this issue.

Maybe I can help by providing those pprof dumps. I don't have experience with that right now, but I can try the request you provided, @GiedriusS.

Edit: I spoke too soon: rolling back to 0.22.0 didn't actually help that much. We got our recorded values for some time, but now they are missing again.

jleloup commented 2 years ago

thanos-rule.pprof.txt

This is from one of our Thanos Rulers currently failing to evaluate some recording rules (I haven't figured out yet whether this applies to all of them or not). Version 0.22.0.

I'm waiting for some failures from a 0.23.1 ruler.

jleloup commented 2 years ago

One lead we are testing right now for this issue: fine-tuning Thanos Query & Query Frontend. We have increased some concurrency parameters and the like to ensure that there is no bottleneck on the query path that would slow down Thanos Ruler queries.

It is still a bit too soon to draw any conclusions, though as of now our Thanos Ruler records are much more stable.

jleloup commented 2 years ago

Update: increasing Thanos Query performance helped for some time, but eventually our Thanos Rule instances end up evaluating no rules at all. The only thing I can add is that the number of goroutines increases a lot when Thanos Ruler stops evaluating.

[Screenshot, 2021-12-13 16:20: goroutine count over time.]

So I suppose something clogs Thanos Ruler at some point and those goroutines never end properly.
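A rough way to confirm that pattern without a dashboard is to poll the Ruler's own /metrics endpoint and watch the Go runtime gauge. Below is a minimal sketch, assuming the standard go_goroutines metric name (the default client_golang gauge) and a placeholder address.

```go
// Rough sketch: poll a Thanos Rule instance's /metrics endpoint every minute
// and print the go_goroutines gauge, to see whether the goroutine count keeps
// climbing. The address is a placeholder.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
	"time"
)

func main() {
	const metricsURL = "http://10.0.0.1:10902/metrics" // placeholder Ruler address
	for {
		resp, err := http.Get(metricsURL)
		if err != nil {
			fmt.Println("scrape failed:", err)
		} else {
			scanner := bufio.NewScanner(resp.Body)
			for scanner.Scan() {
				line := scanner.Text()
				if strings.HasPrefix(line, "go_goroutines ") {
					fmt.Printf("%s %s\n", time.Now().Format(time.RFC3339), line)
				}
			}
			resp.Body.Close()
		}
		time.Sleep(time.Minute)
	}
}
```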

jmichalek132 commented 2 years ago

We hit this too in one of our clusters with ruler version 0.23.1 and the same pattern (the number of goroutines increasing over time). Unfortunately I am not able to provide pprof data, because the priority when this was discovered was to mitigate, so we restarted all the pods. Could it however be possible that this is caused by a similar issue as https://github.com/thanos-io/thanos/pull/4795 ?

jmichalek132 commented 2 years ago

@jleloup We didn't encounter this kind of issue with v0.21.1, so I am going to roll back the ruler to that version.

GiedriusS commented 2 years ago

v0.23.2 contains the fix for https://github.com/thanos-io/thanos/pull/4795 so I'd suggest trying that out to see whether it helps (:

jmichalek132 commented 2 years ago

@GiedriusS thanks for the quick response. A quick question on that: is that code path executed in ruler mode?

GiedriusS commented 2 years ago

Ruler executes queries using the same /api/v1/query_range API and that API might not return any responses due to https://github.com/thanos-io/thanos/pull/4795. So, I think what happens in this case is that the Prometheus ruler manager continuously still tries to evaluate those alerting/recording rules but because no response is retrieved from Thanos, the memory usage stays more or less the same. :thinking:
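As a quick sanity check of that query path, one can issue the same kind of range query directly against the Querier and see whether a response comes back at all, and how long it takes. A minimal sketch follows; the Querier address and the test expression are placeholders.

```go
// Sketch: send a /api/v1/query_range request directly to Thanos Query - the
// same API the Ruler evaluates rules through - and report the status and
// latency of the response. Address and query expression are placeholders.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strconv"
	"time"
)

func main() {
	base := "http://thanos-query:10902/api/v1/query_range" // placeholder address
	now := time.Now()
	params := url.Values{
		"query": {"up"}, // placeholder expression; try one of your recording rules
		"start": {strconv.FormatInt(now.Add(-5*time.Minute).Unix(), 10)},
		"end":   {strconv.FormatInt(now.Unix(), 10)},
		"step":  {"30"},
	}

	started := time.Now()
	resp, err := http.Get(base + "?" + params.Encode())
	if err != nil {
		fmt.Println("query_range request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%s took=%s bytes=%d\n", resp.Status, time.Since(started), len(body))
}
```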

jmichalek132 commented 2 years ago

In our case:

Ruler executes queries using the same /api/v1/query_range API and that API might not return any responses due to #4795. So, I think what happens in this case is that the Prometheus ruler manager continuously still tries to evaluate those alerting/recording rules but because no response is retrieved from Thanos, the memory usage stays more or less the same. 🤔

That might be what happened in our case: we upgraded all Thanos components from v0.21.0 to v0.23.1. We noticed some query performance degradation (at the same time the ruler in one cluster got stuck this way), so we downgraded the Thanos Query instances, but not the ruler instances, and we didn't notice this ruler being stuck in this state until now.

ahurtaud commented 2 years ago

Hello, I think I have the same issue with 0.24. Can others confirm? I also commented on https://github.com/thanos-io/thanos/issues/4924, which may be a duplicate...

phoenixking25 commented 2 years ago

Facing this in 0.24 as well.

stale[bot] commented 2 years ago

Hello 👋 Looks like there was no activity on this issue for the last two months. Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗 If there will be no activity in the next two weeks, this issue will be closed (we can always reopen an issue if we need!). Alternatively, use remind command if you wish to be reminded at some point in future.

panchambaruahwise commented 2 years ago

This issue is still being observed in thanos:v0.24.0