kv: clients without retries backoffs can cause metastable failure #123304


andrewbaptist commented 4 months ago

Describe the problem

In situations where clients set low SQL timeouts and retry without backoff, we can enter a state of metastable failure where the only way out is to completely stop the workload and then gradually restart it.
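
For contrast, here is a minimal sketch of the client-side pattern that avoids this failure mode: the same retry loop, but with capped exponential backoff and full jitter so that timed-out statements shed load instead of amplifying it. This is not from the issue or the workload tool; the connection string, table, and limits are illustrative assumptions. The pathological variant is the same loop with the sleep removed.

package main

import (
	"database/sql"
	"fmt"
	"math/rand"
	"time"

	_ "github.com/lib/pq" // pgwire driver; CockroachDB speaks the PostgreSQL wire protocol
)

// writeWithBackoff retries a single write with capped exponential backoff and
// full jitter, so a fleet of clients does not retry in lockstep after a
// correlated statement timeout.
func writeWithBackoff(db *sql.DB, k int64, v []byte) error {
	backoff := 50 * time.Millisecond
	const maxBackoff = 5 * time.Second
	const maxAttempts = 10
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if _, err = db.Exec(`UPSERT INTO kv (k, v) VALUES ($1, $2)`, k, v); err == nil {
			return nil
		}
		// Sleep a random duration up to the current cap, then double the cap.
		time.Sleep(time.Duration(rand.Int63n(int64(backoff))))
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
}

func main() {
	// Assumed connection string; point it at any gateway node.
	db, err := sql.Open("postgres", "postgresql://testuser@localhost:26257/kv?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	if err := writeWithBackoff(db, 1, []byte("value")); err != nil {
		fmt.Println(err)
	}
}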

To Reproduce

Use a modified version of the workload tool that retries errors when --tolerate-errors is set, rather than just ignoring them. Note the different binary put on node 13 below, which has this behavior.

Create a 13-node cluster (12 nodes plus one workload node)

roachprod create -n 13 --gce-machine-type n2-standard-16 $CLUSTER
roachprod stage $CLUSTER:1-12 release v23.1.17
roachprod put $CLUSTER:13 artifacts/cockroach
roachprod start $CLUSTER:1-12
roachprod ssh $CLUSTER:1 "./cockroach workload init kv $(roachprod pgurl $CLUSTER:1) --splits 1000"

Set up the SQL user and permissions correctly

USE kv;
CREATE USER testuser;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA PUBLIC TO testuser;
ALTER USER testuser SET statement_timeout='250ms';
ALTER RANGE default CONFIGURE ZONE USING gc.ttlseconds = 600;

Run this command twice; note that the cluster runs at ~15% CPU usage (ideally running it once would be enough, but sometimes it fails to start).

roachprod ssh $CLUSTER:13 "./cockroach-short workload run kv $(roachprod pgurl $CLUSTER:1-6 | sed 's/root/testuser/g') --concurrency=50000 --max-rate=40000 --retry-errors=0ns --ramp=10s"

Let it run for ~1 minute to generate some data, then add a write-heavy workload against a different database for a few seconds to create LSM inversion.

roachprod ssh $CLUSTER:13 "./cockroach workload run kv $(roachprod pgurl $CLUSTER:1-12) --tolerate-errors --concurrency=1000 --max-block-bytes=1000000 --db=kv2 --drop --init --splits=100 --max-ops=5000"

Notice that the system enters a failure state where the CPU is pegged and it is only processing a fraction of the QPS it was handling before.

Stop the workload jobs, wait 10 seconds, and restart them. Notice that the cluster is now stable again and handles the workload without issue.

Expected behavior

Ideally no errors would occur during this test. Given that older versions of the software hit errors due to overload during index creation, the errors are not surprising, but the system's failure to recover is.

Additional data / screenshots

Timeline: [screenshot]

Environment: CRDB v23.1.17; see the commands above for the exact configuration.

Additional context

We have seen customers with similar configurations and setups hit this issue.

Jira issue: CRDB-38280

andrewbaptist commented 4 months ago

Running on v24.1-beta3, the system experiences similar behavior; however, it is harder to tip it into the unstable regime. Letting the system fill for ~10 minutes first will do it.

lyang24 commented 4 months ago

nit: the title 'kv: clients without retries backoffs can cause metastable failure' should probably read 'kv: clients without retry backoff can cause metastable failure'.

andrewbaptist commented 4 months ago

On v24.1-beta3, with admission.kv.enabled=false or admission.kv.bulk_only.enabled=true, the workload does not enter the unstable regime.

Setting server.max_open_transactions_per_gateway = 100 also prevents it from becoming unstable.
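
For concreteness, here is a sketch of how these mitigations are applied as cluster settings; the values are simply the ones used in this experiment, not recommended defaults:

SET CLUSTER SETTING server.max_open_transactions_per_gateway = 100;
-- or, alternatively, either of the admission control settings mentioned above:
SET CLUSTER SETTING admission.kv.enabled = false;
SET CLUSTER SETTING admission.kv.bulk_only.enabled = true;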

sumeerbhola commented 4 months ago

I am inclined to close this as a combination of (1) a configuration problem and (2) known issues tracked elsewhere.

andrewbaptist commented 4 months ago

I don't think we should close this issue until the default "out of the box" configuration no longer enters the unstable mode. I also agree that we should not treat disabling AC as the fix. I think we should consider configuring server.max_open_transactions_per_gateway by default, though ideally this could be tied into AC in the future as well. I also agree this is the approach most other production systems take.

We don't need to schedule this for an upcoming release, but we have seen this exact behavior at a customer and will likely see similar reports from other customers. Keeping this open allows us to attach other customer cases to it and decide whether fixing this is something we want to do.

It is also worth automating this test as a (currently failing) roachtest, so that if we do come up with a solution we can verify it addresses the problem.