mozilla-services / services-engineering

Services engineering core repo - Used for issues/docs/etc that don't obviously belong in another repo.

syncstorage-rs latency spikes #61

Closed: erkolson closed this issue 3 years ago

erkolson commented 3 years ago

Spanner latency spikes have been eliminated by dropping the BatchExpiry index, but the application still occasionally experiences latency spikes that break the latency SLA targets.

One example from overnight 7/18-7/19: Spanner shows normal performance, while syncstorage shows outlier latency:

[Screenshot: Screen Shot 2020-07-20 at 10 05 46 AM]

This incident caused 2 of the 9 running pods to max out their active connections and get stuck. When pods get into this state, manual intervention is required to kill them.

The result: increased latency for request handling until the "stuck" pods are deleted:

[Screenshots: Screen Shot 2020-07-20 at 10 25 59 AM, Screen Shot 2020-07-20 at 10 27 33 AM]
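To illustrate the failure mode (this is not syncstorage-rs's actual pool code): once every connection in a bounded pool is checked out and checkouts wait with no deadline, every later request queues behind the stuck ones and the pod wedges until someone kills it. A hypothetical sketch, using a plain tokio semaphore as a stand-in for the real Spanner session pool, of what a bounded wait would look like:

```rust
use std::time::Duration;
use tokio::sync::Semaphore;
use tokio::time::timeout;

// `pool` stands in for the session pool, e.g. Semaphore::new(100) for 100 max active connections.
async fn with_connection(pool: &Semaphore) -> Result<(), &'static str> {
    // Bounding the wait turns a wedged pod into fast 5xxs that monitoring and
    // load balancing can react to, instead of requiring a manual pod delete.
    let _permit = timeout(Duration::from_secs(5), pool.acquire())
        .await
        .map_err(|_| "timed out waiting for a spanner session")?
        .map_err(|_| "pool closed")?;
    // ... do the Spanner work while holding the permit ...
    Ok(())
}
```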

I'm seeing a number of errors like these at around the same time, but I cannot tell whether they are the cause of the problem:

6-ALREADY_EXISTS Row [<fxa_uid>,<fxa_kid>,3,<another-id>,<yet-another-id>] in table batch_bsos already exists, status: 500 }"}}

(I removed the identifying information)

A database error occurred: RpcFailure: 8-RESOURCE_EXHAUSTED AnyAggregator ran out of memory during aggregation., status: 500 }"}}
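For reference on the first error: gRPC code 6 (ALREADY_EXISTS) is Spanner rejecting an INSERT mutation whose primary key already exists in batch_bsos. Whether a duplicate append is actually harmless here is part of the open question, but as a hypothetical sketch (the trait and types below are stand-ins, not syncstorage-rs's real data layer), isolating that status code is what would let it be treated as an idempotent no-op, or retried as an update, rather than bubbling up as a 500:

```rust
// Stand-ins for the real data layer; only the error-mapping idea matters here.
#[derive(Debug, PartialEq)]
enum DbError {
    AlreadyExists, // gRPC code 6 from Spanner
    Other(String),
}

// Field names inferred from the row key in the error above; purely illustrative.
struct BatchBso {
    fxa_uid: String,
    fxa_kid: String,
    collection_id: i32,
    batch_id: String,
    bso_id: String,
}

trait BatchDb {
    fn insert_batch_bso(&self, bso: &BatchBso) -> Result<(), DbError>;
}

// Assuming a duplicate append is safe to ignore (which is the thing to verify),
// swallow ALREADY_EXISTS instead of converting it into a 500 for the client.
fn append_to_batch(db: &dyn BatchDb, bso: &BatchBso) -> Result<(), DbError> {
    match db.insert_batch_bso(bso) {
        Err(DbError::AlreadyExists) => Ok(()),
        other => other,
    }
}
```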
pjenvey commented 3 years ago

I haven't seen "stuck" pods during the 0.5.x load tests on stage, but I am seeing something somewhat similar.

Part of the challenge here is that the stage cluster scales up and down significantly for load testing, e.g. from 1-2 nodes when idle to 5-6 under load, then back down when the test finishes.

However, the canary node tends to stick around throughout, and a couple of different load tests against 0.5.x show the following:

[Screenshots: Screen Shot 2020-08-05 at 6 20 08 PM, Screen Shot 2020-08-05 at 6 20 11 PM]

The canary isn't "stuck" here, but it's taking seconds to respond to do-nothing health checks.

Zooming out a bit, you can see the pattern reflected in the Uptime Check:

[Screenshots: Screen Shot 2020-08-05 at 6 51 16 PM, Screen Shot 2020-08-05 at 6 48 51 PM, Screen Shot 2020-08-05 at 6 44 12 PM]

The four days of lengthy health checks (the 23rd through the 27th) are especially easy to see; the cluster was mostly idle during that stretch, in between a few load tests.
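For context on what "do nothing" means here: the health check is essentially a no-op handler, so multi-second responses point at the actix-web workers (or whatever they're blocked on) rather than at the handler's own work. A minimal sketch of such an endpoint in actix-web, the framework syncstorage-rs is built on (the route name follows the Dockerflow convention; the real handler may differ):

```rust
use actix_web::{web, App, HttpResponse, HttpServer};

// A "do nothing" health check: no database, no Spanner, no work to speak of.
// If this takes seconds to answer, the server's workers are starved or blocked.
async fn lbheartbeat() -> HttpResponse {
    HttpResponse::Ok().finish()
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().route("/__lbheartbeat__", web::get().to(lbheartbeat)))
        .bind(("0.0.0.0", 8000))?
        .run()
        .await
}
```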

tublitzed commented 3 years ago

Thank you, Phil, for the details here!

@erkolson:

  1. In terms of the referenced latency SLA targets: what are they? :) I.e., are you referring to the doc I put together a while back with rough targets (which I need to revisit), or something else?
  2. Are you still seeing this issue in production? (I.e., it's marked high priority for us now, and I want to confirm that's still accurate.)
  3. Do you have any suggestions on how we might continue to debug here?