cockroachdb / cockroach


ui: RPC errors chart can be misleading #108585

Open erikgrinaker opened 1 year ago

erikgrinaker commented 1 year ago

The Distributed → RPC Errors chart in the DB Console can be misleading, since it only graphs two kinds of errors.

Both of these errors are benign, will be retried, and are expected during normal operation -- but they can still be of some interest in certain debugging scenarios.

However, it strikes me as odd that we chart these benign, retryable errors, but don't chart the actual RPC errors that surface to clients at all. Furthermore, spikes in this chart (e.g. following a node restart) can cause undue alarm for customers -- see e.g. https://github.com/cockroachlabs/support/issues/2527, where a customer saw spikes during and after an upgrade and took them to indicate problems with the upgrade, even though they were entirely normal and expected during a rolling restart.

We should do two things here:

  1. The chart shouldn't be named "RPC Errors", since that name isn't entirely accurate. We should downplay these internal retryable errors, which are expected during normal operation and have negligible workload impact in the common case. We can still graph them, but make it clear that they are typically normal and expected, and don't result in client errors.

  2. Chart actual RPC errors. We have metrics for individual error types under distsender.rpc.err.%s, but unfortunately no counter aggregated across all error types -- we should consider graphing a few notable ones, plus the total count (which requires a new metric). Note that these are counted on the DistSender client node, not on the server node. We also have exec.error, which counts KV batch request failures on a server node; some of those errors are benign (e.g. ConditionFailedError) and some aren't, but that metric doesn't differentiate between error types.

It would be worthwhile for someone to take a holistic view of which error metrics we have, which metrics we want, and how to communicate them to users in a way that's meaningful and understandable.

Jira issue: CRDB-30531

kevin-v-ngo commented 1 year ago

> we should consider graphing a few notable ones, and also the total count (which requires a new metric)

> It would be worthwhile for someone to take a holistic view of which error metrics we have, which metrics we want, and how to communicate them to users in a way that's meaningful and understandable.

@erikgrinaker do you know which are the notable errors we should count? That would help us make quicker progress (prioritize) on this issue.

erikgrinaker commented 1 year ago

Afraid I don't have time to go over this now, maybe someone on KV can.