neondatabase / autoscaling

Postgres vertical autoscaling in k8s
Apache License 2.0

Remove scheduler plugin "buffer" resources #840

Open sharnoff opened 6 months ago

sharnoff commented 6 months ago

Problem description / Motivation

In order to prevent accidental overcommitting, on startup the scheduler plugin has a measure of "uncertainty" for each VM's usage that is resolved only when the autoscaler-agent makes a request to the scheduler plugin to inform it of its intentions.
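As a minimal sketch of the startup behavior described above (hypothetical names and types; the real plugin's state tracking is more involved), the plugin effectively reserves a VM's maximum until the agent's first request resolves the uncertainty:

```go
package main

import "fmt"

// vmState is a hypothetical sketch of per-VM accounting in the scheduler plugin.
type vmState struct {
	Reserved uint64 // resources currently counted against the node
	Buffer   uint64 // extra headroom held until the agent confirms its intent
}

// onStartup registers a VM pessimistically: with no word from the agent yet,
// reserve the VM's maximum and record the uncertainty as "buffer".
func onStartup(current, max uint64) vmState {
	return vmState{Reserved: max, Buffer: max - current}
}

// onAgentRequest resolves the uncertainty: once the agent states what it
// wants, drop the buffer and track only the requested amount.
func onAgentRequest(s vmState, want uint64) vmState {
	return vmState{Reserved: want, Buffer: 0}
}

func main() {
	s := onStartup(2, 8) // VM using 2 units, may scale up to 8
	fmt.Println(s.Reserved, s.Buffer) // 8 6
	s = onAgentRequest(s, 3)
	fmt.Println(s.Reserved, s.Buffer) // 3 0
}
```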

This has two issues:

  1. Forced to choose between "unavailable" and "inaccurate", we have chosen "unavailable". In practice this is much worse / higher risk than inaccuracies.
  2. There's a short (~5s) period of unavailability immediately after startup. When we add replicas & leader election for the scheduler, this unavailability could be frequent enough to cause liveness issues for the scheduler (i.e. it may be unable to schedule for extended periods of time)

Feature idea(s) / DoD

Scheduler plugin scheduling uncertainty should not cause unavailability

Implementation ideas

Instead of keeping "buffer" resources, we should remove the concept entirely and be willing to make inaccurate scheduling decisions. The worst-case scenario is that we accidentally overcommit by a little bit — in practice, real resource usage in our clusters is much lower than reserved resources, so we have wiggle room.
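Concretely, dropping the buffer means admitting based on whatever state is currently known, even if it is slightly stale. A toy sketch (hypothetical function, illustrative only):

```go
package main

import "fmt"

// canReserve is a hypothetical sketch of a buffer-free admission check:
// it uses only the known reservations, accepting that stale state may
// cause a small amount of overcommit.
func canReserve(nodeCapacity, knownReserved, request uint64) bool {
	return knownReserved+request <= nodeCapacity
}

func main() {
	// Known state says 6 of 8 units are reserved; if the true figure is 7,
	// granting a 2-unit request overcommits by 1. Per the issue, that is
	// acceptable because real usage runs well below reservations.
	fmt.Println(canReserve(8, 6, 2)) // true
}
```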

Omrigan commented 5 months ago

Do we have an estimate of how long state rebuilding takes? E.g., if it takes 20s to build 80% of the state, maybe we should delay scheduling until we have reports from 80% of nodes, and then accept the 20% inaccuracy?

sharnoff commented 4 months ago

It's an interesting idea... I think in theory we could do this, but in practice:

For this issue, I'm currently thinking the easiest way forward is to move the autoscaler-agent <-> scheduler plugin protocol into annotations on the VMs, which would:

  1. Guarantee that all information is available just by reading the cluster state (so: no waiting for comms — basically, if we do it right, "buffer" would always be zero because everything is known)
  2. Make replicas / leader election simple, because the autoscaler-agents wouldn't be communicating with any particular scheduler instance (see #841)
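The annotation-based protocol above could be sketched roughly as follows. The annotation key, payload shape, and function names here are assumptions for illustration; the actual protocol is what the RFC defines:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// agentAnnotation is a hypothetical annotation key for this sketch.
const agentAnnotation = "autoscaling.neon.tech/agent-requested"

// requested is a hypothetical payload: the resources the agent wants.
type requested struct {
	CPU uint64 `json:"cpu"`
	Mem uint64 `json:"mem"`
}

// agentWrite records the agent's intent on the VM object's annotations,
// so full state is recoverable just by reading the cluster.
func agentWrite(annotations map[string]string, r requested) error {
	b, err := json.Marshal(r)
	if err != nil {
		return err
	}
	annotations[agentAnnotation] = string(b)
	return nil
}

// schedulerRead rebuilds the reservation from annotations alone: any
// scheduler replica can do this, with no direct agent communication.
func schedulerRead(annotations map[string]string) (requested, bool) {
	raw, ok := annotations[agentAnnotation]
	if !ok {
		return requested{}, false
	}
	var r requested
	if err := json.Unmarshal([]byte(raw), &r); err != nil {
		return requested{}, false
	}
	return r, true
}

func main() {
	ann := map[string]string{}
	_ = agentWrite(ann, requested{CPU: 4, Mem: 16})
	r, ok := schedulerRead(ann)
	fmt.Println(ok, r.CPU, r.Mem) // true 4 16
}
```

Because the annotation lives on the VM object itself, a freshly started (or newly elected) scheduler sees it immediately, which is what makes the "buffer" unnecessary.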

What do you think?

Omrigan commented 4 months ago

> For this issue, I'm currently thinking the easiest way forward is to move the autoscaler-agent <-> scheduler plugin protocol into annotations on the VMs, which would:
>
> 1. Guarantee that all information is available just by reading the cluster state (so: no waiting for comms — basically, if we do it right, "buffer" would always be zero because everything is known)
> 2. Make replicas / leader election simple, because the autoscaler-agents wouldn't be communicating with any particular scheduler instance (see [Scheduler leader election #841](https://github.com/neondatabase/autoscaling/issues/841))
>
> What do you think?

Sounds good!

sharnoff commented 4 months ago

RFC for the stuff above: https://www.notion.so/neondatabase/e9d3253836124f9f98e681f89c146a19