cortexproject / cortex

A horizontally scalable, highly available, multi-tenant, long term Prometheus.
https://cortexmetrics.io/
Apache License 2.0

query-frontend: Response caching should work for subqueries and part queries. #2178

Open bwplotka opened 4 years ago

bwplotka commented 4 years ago

AC:

I think this should not be that hard to implement, but we would need to actually parse the query and understand what is nested.

cc @brancz, @pracucci @tomwilkie
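As a rough illustration of what "parse the query and understand what is nested" could look like, here is a minimal Go sketch that uses the Prometheus PromQL parser to detect subqueries. The import path matches recent Prometheus versions, and the helper name is made up for this example; it is not code from Cortex.

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/promql/parser"
)

// containsSubquery parses a PromQL expression and reports whether a
// subquery (e.g. count(up)[60m:1m]) is nested anywhere inside it.
func containsSubquery(query string) (bool, error) {
	expr, err := parser.ParseExpr(query)
	if err != nil {
		return false, err
	}
	found := false
	parser.Inspect(expr, func(node parser.Node, _ []parser.Node) error {
		if _, ok := node.(*parser.SubqueryExpr); ok {
			found = true
		}
		return nil
	})
	return found, nil
}

func main() {
	ok, err := containsSubquery("count(up)[60m:1m]")
	if err != nil {
		panic(err)
	}
	fmt.Println("contains subquery:", ok) // prints: contains subquery: true
}
```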

pracucci commented 4 years ago

@owen-d This is your area of expertise. What's your take, especially considering the recently merged #1878?

bwplotka commented 4 years ago

We are happy to help in our free time, but ideas/suggestions are very welcome.

Or let us know if any work has already happened on this (:

bwplotka commented 4 years ago

Any update on this? :hugs:

Any pointers for contributors, e.g. if we wanted to tackle it?

owen-d commented 4 years ago

I'm not sure if this will be straightforward to implement. We currently sequence a set of middlewares in the query frontend, of which a caching middleware is one. This makes it hard to pull out part of a query without dealing with the promql Engine, which is what we did in the sharding work (https://github.com/cortexproject/cortex/pull/1878). This could be another reason to support a refactoring of the query frontend though. There are a number of issues derived from sequencing middlewares this way, such as https://github.com/cortexproject/cortex/issues/1882

Caching middleware: https://github.com/cortexproject/cortex/blob/master/pkg/querier/queryrange/results_cache.go
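To make the constraint concrete, here is a simplified Go sketch of the pattern described above. The types are illustrative stand-ins loosely modelled on the middleware chaining in pkg/querier/queryrange, not the exact Cortex API; the point is that the caching middleware keys on the whole request and never looks inside the PromQL expression.

```go
package main

import (
	"context"
	"fmt"
)

// Request/Response stand in for a query-frontend request and its result.
type Request struct {
	Query            string
	Start, End, Step int64
}
type Response struct{ Data string }

// Handler executes a request end to end (ultimately hitting the queriers).
type Handler interface {
	Do(ctx context.Context, req Request) (Response, error)
}
type HandlerFunc func(ctx context.Context, req Request) (Response, error)

func (f HandlerFunc) Do(ctx context.Context, req Request) (Response, error) { return f(ctx, req) }

// Middleware wraps a Handler, e.g. to split by interval or consult a cache.
type Middleware interface {
	Wrap(next Handler) Handler
}

// cacheMiddleware shows why nested expressions are invisible at this layer:
// the cache key is derived from the whole request, and the middleware treats
// the PromQL expression as an opaque string.
type cacheMiddleware struct{ cache map[string]Response }

func (c *cacheMiddleware) Wrap(next Handler) Handler {
	return HandlerFunc(func(ctx context.Context, req Request) (Response, error) {
		key := fmt.Sprintf("%s|%d|%d|%d", req.Query, req.Start, req.End, req.Step)
		if resp, ok := c.cache[key]; ok {
			return resp, nil // cache hit: whole-request granularity only
		}
		resp, err := next.Do(ctx, req)
		if err == nil {
			c.cache[key] = resp
		}
		return resp, err
	})
}

func main() {
	downstream := HandlerFunc(func(_ context.Context, req Request) (Response, error) {
		return Response{Data: "evaluated " + req.Query}, nil
	})
	cached := (&cacheMiddleware{cache: map[string]Response{}}).Wrap(downstream)

	// An instant query containing a subquery reaches the cache as one opaque
	// string, so its inner range-query-like part cannot be cached separately.
	resp, _ := cached.Do(context.Background(), Request{Query: "count(up)[60m:1m]"})
	fmt.Println(resp.Data)
}
```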

Another thought - this is something we may want to solve via some sort of query planner.

bwplotka commented 4 years ago

Yes, definitely worth looking closer into a different design then. I would say an initial PromQL parse might be necessary...


bwplotka commented 4 years ago

Something to add as well: https://github.com/thanos-io/thanos/issues/2569

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had any activity in the past 60 days. It will be closed in 15 days if no further activity occurs. Thank you for your contributions.

pracucci commented 4 years ago

Still valid

pracucci commented 4 years ago

Still valid

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had any activity in the past 60 days. It will be closed in 15 days if no further activity occurs. Thank you for your contributions.

pracucci commented 3 years ago

Still valid

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had any activity in the past 60 days. It will be closed in 15 days if no further activity occurs. Thank you for your contributions.

jtlisi commented 3 years ago

still valid

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had any activity in the past 60 days. It will be closed in 15 days if no further activity occurs. Thank you for your contributions.

bboreham commented 3 years ago

Definitely still valid.

Example: you can write an instant query like count(up)[60m:1m] which is basically the same thing as a range-query of count(up) over 60m with step 1m. Query-frontend will cache the latter but not the former.
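For illustration, here is a hedged Go sketch of the two equivalent requests using the Prometheus client_golang API; the endpoint address is a placeholder that would normally point at the query-frontend. Only the range-query form benefits from response caching today.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	// Placeholder address; in practice this would be the query-frontend.
	client, err := api.NewClient(api.Config{Address: "http://localhost:9090"})
	if err != nil {
		panic(err)
	}
	v1api := promv1.NewAPI(client)
	ctx := context.Background()
	now := time.Now()

	// Instant query with a subquery: evaluated as one opaque expression,
	// so the query-frontend does not cache its results today.
	instant, _, err := v1api.Query(ctx, "count(up)[60m:1m]", now)
	if err != nil {
		panic(err)
	}

	// Equivalent range query: split by interval and cached by the query-frontend.
	ranged, _, err := v1api.QueryRange(ctx, "count(up)", promv1.Range{
		Start: now.Add(-60 * time.Minute),
		End:   now,
		Step:  time.Minute,
	})
	if err != nil {
		panic(err)
	}

	fmt.Println(instant, ranged)
}
```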

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had any activity in the past 60 days. It will be closed in 15 days if no further activity occurs. Thank you for your contributions.

midnightconman commented 3 years ago

Still valid


stale[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had any activity in the past 60 days. It will be closed in 15 days if no further activity occurs. Thank you for your contributions.