thanos-io / thanos

Highly available Prometheus setup with long term storage capabilities. A CNCF Incubating project.
https://thanos.io
Apache License 2.0

0.32.0 caused spike in network traffic #7213

Open ben-nelson-nbcuni opened 3 months ago

ben-nelson-nbcuni commented 3 months ago

Thanos, Prometheus and Golang version used:

All Thanos components are using 0.32.4, but we've also tested 0.34.1 and the issue persisted. Prometheus is on version v0.69.1.

Object Storage Provider: AWS S3 bucket

What happened: Upgrading from 0.31.0 to 0.32.0 causes a large spike in network traffic between chained Thanos Query components.

What you expected to happen: Network traffic to be consistent with previous versions.

How to reproduce it (as minimally and precisely as possible):

1. Set up Prometheus with a 0.31.0 Thanos sidecar. The issue scales with higher cardinality in Prometheus metrics, so you may need to add mock data.
2. Set up a 0.31.0 Thanos store gateway. Again, high cardinality and a long time range (1 year+) scale the network traffic spike. The only flag we use is --store.enable-index-header-lazy-reader.
3. Set up a 0.31.0 Thanos query with --endpoint=dnssrv+_grpc._tcp.thanos-store-gateway.monitoring.svc behind an ALB configured for gRPC traffic.
4. Set up a central 0.31.0 Thanos query with --endpoint=$GRPC_HOST pointing at the child Thanos query ALB.
5. View network traffic from the child Thanos query to the central Thanos query.
6. Upgrade all components to 0.32.0.
7. View network traffic again. We've seen it spike 100x on large clusters. When this traffic crosses regions and goes over the public internet, the cost increase can be substantial. The cost occurs without any active queries and appears to be caused solely by the 5s interval refreshes of endpoints from the central Thanos query.

Turning on --grpc-compression=snappy helped reduce the spike, but it is still clearly present. Removing --store.enable-index-header-lazy-reader did not noticeably reduce the spike. If either the child Thanos query or the store gateway is rolled back to 0.31.0, the network traffic returns to pre-upgrade levels.
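To confirm where the background traffic is coming from, one option is to measure the size of a single Info response from a store endpoint, since the querier's endpoint refresh loop issues these calls continuously. This is only a rough sketch and not part of the report above: it assumes a plaintext gRPC endpoint and that the generated Info client is importable from pkg/info/infopb as shown.

```go
// Rough sketch: print the size of one Info response from a StoreAPI endpoint.
// The querier's endpoint refresh loop issues calls like this periodically, so
// a large response here multiplies into constant background traffic.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/thanos-io/thanos/pkg/info/infopb"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Address of a store gateway or sidecar gRPC endpoint (plaintext assumed).
	conn, err := grpc.DialContext(ctx, "thanos-store-gateway.monitoring.svc:10901",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	resp, err := infopb.NewInfoClient(conn).Info(ctx, &infopb.InfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	// Size() is generated for the gogo-based protobuf messages Thanos uses.
	fmt.Printf("Info response size: %d bytes\n", resp.Size())
}
```

Comparing this number before and after the upgrade (or against the number of blocks loaded) should indicate whether per-block metadata is what gets refreshed every 5s.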

Full logs to relevant components:

No relevant logs. Only screenshots of prometheus.

We occasionally get a warning on one of the Thanos query pods saying "detecting store that does not support without replica label setting. Falling back to eager retrieval with additional sort. Make sure your storeAPI supports it to speed up your queries", but it's not frequent enough and doesn't seem to explain an increase in network traffic.

Anything else we need to know:

Graph of network transmitted out from the child thanos query instances when upgrading from 0.31.0 to 0.32.0.

[screenshot attached]
fpetkovski commented 3 months ago

This could be related to https://github.com/thanos-io/thanos/pull/6329. Do you know approximately how many blocks you have in object storage?

ben-nelson-nbcuni commented 3 months ago

That looks like the right addition. The metric thanos_bucket_store_blocks_loaded peaks at 35,433. That value and the others near it are from dev clusters that have had instability in their Prometheus / Thanos components during large performance tests. I'm not sure if that's contributing to the high block count. Does each interruption in Prometheus service result in a new block?

Is there a way to cache these lookups for older blocks that are unlikely to change? Or can you add a mechanism to turn off this information either on a particular store or query component?

jtb-sre commented 2 months ago

I can reproduce Ben's findings -- I have a development environment on Thanos 0.34.1 and was experiencing the high network traffic noted above. The 100x factor also holds in my environment -- running an intensive query on 0.34.1 generates peak network activity of 40 MB/s. I downgraded to 0.31.0 and the same query peaked at about 480 KB/s.

My Thanos queriers have three gRPC endpoints (two TLS/gRPC ingresses for Thanos sidecars, and a TLS/gRPC ingress for a Thanos store service). The development environment I reproduced this on has a small number of blocks in object storage due to limited retention time (230 blocks, each containing 1-4 chunks; 924 objects in total dating back to 03/12), but relatively high series cardinality (prometheus_tsdb_head_series totals 500,000 across 2 K8S clusters).

jtb-sre commented 2 months ago

I was able to do a little bit more digging and think I found the cause!

I think the cause is actually #6317 -- as Douglas notes, this change causes the store/sidecar instances to send labels in their response for filtering purposes, which seems a likely cause for the extra traffic we're seeing. Digging through the PR a bit further, I noticed that the newFlushableServer function skips label flushing if --query.replica-label isn't specified. I verified that I could return to the pre-0.32 traffic volume by removing --query.replica-label.

In my case, the development environment is not using HA Prometheus and I do not need to use dedup. It may be worth calling out the network impacts of dedup because they were significant enough to be the cause of some instability in my development clusters. It's also not clear to me why removing --query.replica-label works in light of the changes made in #6706 -- I guess the label check ultimately moved from flushable.go to proxy_heap.go?
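As a picture of the gating being described, the sketch below is a hypothetical stand-in (none of these names are the actual flushable.go or proxy_heap.go code): when no replica labels are configured there is nothing to strip, so the cheaper passthrough path is taken.

```go
// Illustrative only: a minimal stand-in for the replica-label gating described
// above. Type and function names are hypothetical, not Thanos source.
package main

import "fmt"

type seriesServer interface {
	Name() string
}

// passthroughServer streams series through untouched.
type passthroughServer struct{}

func (passthroughServer) Name() string { return "passthrough" }

// resortingServer buffers series, drops replica labels, and re-sorts them.
type resortingServer struct{ replicaLabels []string }

func (resortingServer) Name() string { return "resorting" }

// newSeriesServer mirrors the behaviour attributed to newFlushableServer:
// without --query.replica-label there is nothing to strip, so the cheaper
// passthrough path is chosen.
func newSeriesServer(replicaLabels []string) seriesServer {
	if len(replicaLabels) == 0 {
		return passthroughServer{}
	}
	return resortingServer{replicaLabels: replicaLabels}
}

func main() {
	fmt.Println(newSeriesServer(nil).Name())                 // passthrough
	fmt.Println(newSeriesServer([]string{"replica"}).Name()) // resorting
}
```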

@ben-nelson-nbcuni Would you be willing to test whether removing dedup improves matters for your development cluster?

@fpetkovski Am I right in understanding that a feature flag to disable the cuckoo filter would be duplicative, because without it you can't rely on --query.replica-label for deduplication? Also, that it should be sufficient to remove --query.replica-labels from our deployments as long as our pods are uniquely identified including external labels?

Thanks! jtb

ben-nelson-nbcuni commented 2 months ago

We have two Thanos query components in the chain: one local and one central. Removing --query.replica-label from both the local and the central did not have any effect on the traffic spike. For this round of testing, I've included all of our settings below.

Local:

  - args:
    - query
    - --log.level=info
    - --log.format=json
    - --grpc-address=0.0.0.0:10901
    - --http-address=0.0.0.0:10902
    - --query.auto-downsampling
    - --endpoint=dnssrv+_grpc._tcp.thanos-store-gateway.monitoring.svc
    - --endpoint=dnssrv+_grpc._tcp.prometheus-operated.monitoring.svc

Central:

  - args:
    - query
    - --log.level=info
    - --log.format=logfmt
    - --grpc-address=0.0.0.0:10901
    - --http-address=0.0.0.0:10902
    - --query.auto-downsampling
    - --grpc-client-tls-secure
    - --grpc-compression=snappy
    - --endpoint=...
ben-nelson-nbcuni commented 2 months ago

Here is a screenshot of prometheus metrics.

  1. At 12:44, I upgraded the local thanos-query to 0.32.4 from 0.28.0 and removed the --query.replica-label.
  2. At 12:50 (once it was clear network was still spiking), I updated the central thanos-query to remove --query.replica-label (the central is always on version 0.32.4).
  3. At 12:59, I downgraded the local thanos-query and re-added --query.replica-label.
  4. As of 13:04, the central thanos still doesn't have --query.replica-label.
[screenshot attached]
fpetkovski commented 2 months ago

I suggest we group all blocks by labels here https://github.com/thanos-io/thanos/blob/main/pkg/store/bucket.go#L873-L889 and return one TSDBInfo per stream rather than per block. @MichaHoffmann has noticed that network usage goes down as the number of blocks is reduced.
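A minimal sketch of that grouping, using simplified stand-in types rather than the actual infopb/bucket.go structures: blocks sharing an external label set collapse into one entry whose time range is the union of the blocks' ranges.

```go
// Sketch: group per-block metadata by external label set so one TSDBInfo-like
// entry is returned per stream instead of per block. Types are simplified
// stand-ins, not the actual infopb definitions.
package main

import "fmt"

type blockMeta struct {
	labels  string // canonical form of the block's external labels
	minTime int64
	maxTime int64
}

type tsdbInfo struct {
	labels  string
	minTime int64
	maxTime int64
}

// groupByLabels returns one entry per distinct label set, spanning the union
// of the time ranges of all blocks carrying those labels.
func groupByLabels(blocks []blockMeta) []tsdbInfo {
	byLabels := map[string]*tsdbInfo{}
	var order []string
	for _, b := range blocks {
		info, ok := byLabels[b.labels]
		if !ok {
			byLabels[b.labels] = &tsdbInfo{labels: b.labels, minTime: b.minTime, maxTime: b.maxTime}
			order = append(order, b.labels)
			continue
		}
		if b.minTime < info.minTime {
			info.minTime = b.minTime
		}
		if b.maxTime > info.maxTime {
			info.maxTime = b.maxTime
		}
	}
	out := make([]tsdbInfo, 0, len(order))
	for _, ls := range order {
		out = append(out, *byLabels[ls])
	}
	return out
}

func main() {
	blocks := []blockMeta{
		{labels: `{cluster="a"}`, minTime: 0, maxTime: 100},
		{labels: `{cluster="a"}`, minTime: 100, maxTime: 200},
		{labels: `{cluster="b"}`, minTime: 0, maxTime: 200},
	}
	// With ~35k blocks but only a handful of distinct label sets, the response
	// shrinks from one entry per block to one entry per label set.
	fmt.Println(groupByLabels(blocks))
}
```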

MichaHoffmann commented 2 months ago

@ben-nelson-nbcuni are you able to test #7308 by any chance?