andrewbaptist opened 4 months ago
Running on v24.1-beta3, the system experiences similar behavior; however, it is harder to tip it into the unstable regime. Letting the system fill for ~10 minutes first will do it.
nit: title should be 'kv: clients that retry without backoff can cause metastable failure'
On v24.1-beta3, with `admission.kv.enabled=false` or `admission.kv.bulk_only.enabled=true`, the workload does not enter the unstable regime. Setting `server.max_open_transactions_per_gateway = 100` also prevents it from becoming unstable.
Disabling parts of AC: setting `admission.kv.enabled=false` or `admission.kv.bulk_only.enabled=true` is not a desirable tradeoff, since it bypasses necessary AC queueing and further inverts the store. With https://github.com/cockroachdb/cockroach/issues/123509, there will be fairness across leaseholder and follower writes, and we should embrace that.
Throughput decrease: we have a known problem that without `server.max_open_transactions_per_gateway = 100`, throughput drops to about 50% -- this is documented in https://github.com/cockroachdb/cockroach/issues/91536#issuecomment-2079947960. We need some improvements in txn heartbeating to improve this 50% number; I think that should be tracked elsewhere.
Latency increase and massive throughput decrease: if the number of open txns grows without bound, which is what this scenario with timeouts and immediate retry causes, a system with FIFO queueing will eventually have latency close to the timeout (or, with no timeout, unbounded latency, since eventually very old txns will get their turn to execute) and effective throughput close to 0 (since every txn will start executing close to its timeout and reach the timeout before it completes). The default queueing order in AC for txns from the same tenant and same QoS is the txn start time, so FIFO. We have implemented epoch-LIFO, but I don't think it is "ready" as a solution. The practical systems I have seen do FIFO scheduling and manage to get good throughput with a combination of (a) limiting admission into the system (a rate limit or concurrency limit, like `server.max_open_transactions_per_gateway`) -- the queueing delay outside the system does not count towards the system's latency -- and (b) having a large enough timeout that admitted work can complete before the timeout. For example, if the desired latency is < 100ms, the timeout may be 10s, with (a) set to a high enough value that admitted txns complete within 2s (assuming we can live with 2s under periods of high load until provisioning is changed). We have knobs to control both (a) and (b), so it is unclear what more to do here.
I am inclined to close this as a combination of (1) a configuration problem and (2) known issues tracked elsewhere.
I don't think we should close this issue until the default "out of the box" configuration no longer enters the unstable mode. I also agree that we should not say this is addressed by disabling AC. I think we should consider making `server.max_open_transactions_per_gateway` a default configured setting, though ideally this could be tied into AC in the future as well. I also agree this is the approach most other production systems take.
We don't need to schedule this for an upcoming release, but we have seen this exact behavior at a customer and will likely see future customers submit similar issues. Keeping this open allows us to attach other customer cases to it and decide whether fixing this is something we want to do.
It is also worth automating this test as a failing roachtest, so that if we do come up with a solution we can verify it addresses the problem.
Describe the problem
In situations where clients set low SQL timeouts and retry without backoff, we can enter a state of metastable failure where the only way out is to completely stop the workload and then gradually restart it.
To Reproduce
Use a modified version of the workload tool which retries errors when `--tolerate-errors` is set rather than just ignoring them. Note the different binary that is put on node 13, which has this behavior.
1. Create a 13-node cluster (12 nodes plus workload).
2. Set up the SQL user and permissions correctly.
3. 2x - run this command - note that the cluster runs at ~15% CPU usage (ideally we could run this once, but sometimes it fails to start).
4. Let it run for ~1 minute to generate some data. Add a write-heavy workload to a different DB for a few seconds to create LSM inversion.
5. Notice that the system enters a failure state where the CPU is pegged and it is only processing a fraction of the number of QPS it was before.
6. Stop the workload jobs, wait 10 seconds, and restart it. Notice that now the cluster is stable again and handling the workload without issue.
Expected behavior
Ideally there would be no errors during this test. Given that older versions of the software hit errors due to overload during index creation, the errors are not surprising, but the non-recovery of the system is.
Additional data / screenshots
Timeline
Environment: CRDB 23.1.17, see commands above for exact configuration.
Additional context
We have seen customers with similar configurations and setups that have hit this issue.
Jira issue: CRDB-38280