cockroachdb / cockroach


kvserver: "auto create stats" job should use lower priority for IO #82508

Open tbg opened 2 years ago

tbg commented 2 years ago

https://github.com/cockroachdb/cockroach/pull/81516 adds the admission/follower-overload/presplit-control roachtest. In this roachtest, a three-node cluster is set up so that two nodes hold all leases for a kv0 workload. At the time of writing, kv0 runs with 4 MB/s of goodput (400 ops/s rate limit × 10 KB per write). On AWS (where this run took place), on a default EBS volume with a 125 MB/s throughput limit and 3000 IOPS (aggregate read+write), this is right at the limit. As a result, n1 and n2 get into mild IO overload territory.
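For reference, the goodput arithmetic works out as follows. A minimal Go sketch; the write-amplification factor at the end is a purely hypothetical illustration (not a measured value), included only to show how a few MB/s of logical goodput can saturate the device once replication, WAL writes, and compactions multiply it into physical I/O:

```go
package main

import "fmt"

func main() {
	// Numbers from the issue: kv0 runs at a 400 ops/s rate limit with
	// ~10 KB per write, on a default EBS volume with a 125 MB/s
	// aggregate read+write throughput limit.
	const (
		opsPerSec     = 400
		bytesPerWrite = 10_000 // ~10 KB per kv0 write
		ebsLimitMBps  = 125
	)
	goodputMBps := float64(opsPerSec*bytesPerWrite) / 1e6
	fmt.Printf("goodput: %.0f MB/s against a %d MB/s device limit\n",
		goodputMBps, ebsLimitMBps)

	// The device still saturates because replication, WAL writes, and
	// LSM compactions amplify logical writes into far more physical
	// I/O; this factor is illustrative, not measured.
	const hypotheticalAmplification = 30
	fmt.Printf("physical traffic at %dx amplification: ~%.0f MB/s\n",
		hypotheticalAmplification, goodputMBps*hypotheticalAmplification)
}
```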

It was observed that the nodes with leases consistently read more data from disk (green and orange are n1 and n2).

[screenshot]

read MB/s:

[screenshot]

Zooming out, we see this pattern:

[screenshot]

No splits are occurring at the time. However, the bumps match up well with these bumps in raft log:

[screenshot]

The raft log queue processes replicas at a fixed rate throughout these spikes, so it's unclear whether it is merely contending with other read activity or is itself the cause of that read activity.

Overlaying rate(rocksdb_compacted_bytes_read[$__rate_interval]) onto the bytes read shows that compactions are not the driver of the spiky reads on n1 and n2. Quite the opposite, whenever these spikes occur, compactions can't read as quickly as they would like to.
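The `rate()` expression above just turns a monotonically increasing counter into a per-second delta over the sample window. A minimal sketch of that computation (the sample values below are hypothetical, not taken from this cluster):

```go
package main

import "fmt"

// ratePerSec mirrors what Prometheus's rate() does in the simplest case:
// the per-second delta between two samples of a monotonic counter such
// as rocksdb_compacted_bytes_read.
func ratePerSec(prev, cur uint64, intervalSec float64) float64 {
	return float64(cur-prev) / intervalSec
}

func main() {
	// Hypothetical counter samples taken 30s apart.
	fmt.Printf("%.0f bytes/s\n", ratePerSec(1_000_000_000, 1_150_000_000, 30))
}
```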

Jira issue: CRDB-16492

Epic CRDB-37479

nicktrav commented 2 years ago

The spikes in read throughput correlate strongly with periods during which the auto create stats job is running. Note that the job consumes more read bandwidth, and since the device is maxed out, it starts "stealing" throughput from writes. This is a look at the same time period you posted:

[screenshot]

Zooming out further, we see the same thing, though this time we have much more throughput to consume (we bumped up to 250 MB/s). That said, we still see increased reads stealing some write throughput (note the dips in the green line on the write throughput chart at the bottom when the read throughput increases):

[screenshot]
irfansharif commented 2 years ago

What's left to do in this issue? Downgrade the admission priority level of requests originating from the "auto create stats" job?

nvanbenschoten commented 2 years ago

This seems like a small, targeted change with minimal risk. Should we try to get it in for v22.2?

sumeerbhola commented 10 months ago

We should lower the priority and ensure that the job is subject to elastic CPU admission control. We don't currently have a way to share read bandwidth in AC.

aadityasondhi commented 6 months ago

Update here for posterity:

We ran a few internal experiments for this, and the cause of the overload is saturation of the provisioned disk bandwidth. Until we enable disk bandwidth AC (https://github.com/cockroachdb/cockroach/issues/86857), making changes here will not actually subject the AUTO CREATE STATS (background) job to throttling, since we do not throttle reads at the moment.
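For context on what "lower the priority" would mean mechanically: a priority-aware admission queue grants slots to foreground traffic before background work like the stats job. A minimal, illustrative Go sketch; the priority names and types here are made up for illustration and are not the actual `admissionpb` constants or the real admission queue implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// Priority loosely models an admission priority level. These names are
// illustrative only, not CockroachDB's actual admissionpb constants.
type Priority int

const (
	BulkLowPri Priority = iota // background work, e.g. auto stats collection
	NormalPri                  // foreground SQL traffic
)

type WorkItem struct {
	Name string
	Pri  Priority
}

// admitOrder returns items in the order a priority-aware admission queue
// would grant them slots: higher priority first, FIFO within a priority.
func admitOrder(items []WorkItem) []WorkItem {
	out := append([]WorkItem(nil), items...)
	sort.SliceStable(out, func(i, j int) bool { return out[i].Pri > out[j].Pri })
	return out
}

func main() {
	queue := []WorkItem{
		{"auto-create-stats scan", BulkLowPri},
		{"kv0 write", NormalPri},
		{"kv0 read", NormalPri},
	}
	for _, w := range admitOrder(queue) {
		fmt.Println(w.Name)
	}
}
```

The catch this comment describes: ordering only helps if the queue actually throttles the low-priority work, and reads are not currently throttled, so the disk-bandwidth AC issue linked above is the prerequisite.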

Full internal discussion is available here.