sharnoff opened 8 months ago
Do we have an estimate of how long state rebuilding takes? E.g. if it takes 20s to build 80% of the state, maybe we should delay scheduling until we have reports from 80% of nodes, and then accept the 20% inaccuracy?
It's an interesting idea... I think in theory we could do this, but in practice there are complications.
For this issue, I'm currently thinking the easiest way forward is to move the autoscaler-agent <-> scheduler plugin protocol into annotations on the VMs, which would:
1. Guarantee that all information is available just by reading the cluster state (so: no waiting for comms — basically, if we do it right, "buffer" would always be zero because everything is known)
2. Make replicas / leader election simple, because the autoscaler-agents wouldn't be communicating with any particular scheduler instance (see [Scheduler leader election #841](https://github.com/neondatabase/autoscaling/issues/841))
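For a concrete picture of (1), here's a minimal sketch of how an autoscaler-agent could publish its intent as an annotation on the VM object, using client-go's dynamic client. The annotation key, the payload shape, and the `vm.neon.tech/v1` `virtualmachines` GVR here are illustrative assumptions, not the actual protocol:

```go
package main

import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// Hypothetical annotation key; the real protocol would define its own.
const agentIntentAnnotation = "autoscaling.neon.tech/agent-intent"

// agentIntent is an illustrative payload: the resources the agent wants.
type agentIntent struct {
	CPU      float64 `json:"cpu"`
	MemSlots int     `json:"memSlots"`
}

// publishIntent records the agent's desired resources on the VM object
// itself. Any scheduler replica can then rebuild full state just by
// listing VMs, so there's no per-instance communication to wait on.
func publishIntent(ctx context.Context, c dynamic.Interface, namespace, name string, intent agentIntent) error {
	payload, err := json.Marshal(intent)
	if err != nil {
		return err
	}
	patch, err := json.Marshal(map[string]any{
		"metadata": map[string]any{
			"annotations": map[string]string{
				agentIntentAnnotation: string(payload),
			},
		},
	})
	if err != nil {
		return err
	}
	gvr := schema.GroupVersionResource{
		Group:    "vm.neon.tech", // assumed NeonVM API group, for illustration
		Version:  "v1",
		Resource: "virtualmachines",
	}
	_, err = c.Resource(gvr).Namespace(namespace).Patch(
		ctx, name, types.MergePatchType, patch, metav1.PatchOptions{},
	)
	return err
}
```

A merge patch only touches the agent's own annotation key, so concurrent updates to other parts of the VM object aren't clobbered.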
What do you think?
Sounds good!
RFC for the stuff above: https://www.notion.so/neondatabase/e9d3253836124f9f98e681f89c146a19
Problem description / Motivation
In order to prevent accidental overcommitting, the scheduler plugin starts up with a measure of "uncertainty" for each VM's resource usage, which is resolved only once the autoscaler-agent makes a request to the scheduler plugin informing it of its intentions.
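Roughly, the startup accounting looks like this (a sketch with made-up names, not the plugin's actual types):

```go
// vmUsage tracks what the scheduler plugin has reserved for one VM.
type vmUsage struct {
	Reserved uint32 // resources currently counted against the node
	Buffer   uint32 // the portion of Reserved we're unsure about after a restart
}

// On startup, a VM's last-known usage is all buffer: we must assume it
// may still be in use, so we reserve it, but we can't confidently make
// scheduling decisions against it yet.
func onRestart(lastKnown uint32) vmUsage {
	return vmUsage{Reserved: lastKnown, Buffer: lastKnown}
}

// When the autoscaler-agent's first request arrives, the uncertainty is
// resolved: we learn the VM's real reservation and drop the buffer.
func onAgentRequest(u *vmUsage, requested uint32) {
	u.Reserved = requested
	u.Buffer = 0
}
```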
This has two issues:
Feature idea(s) / DoD
Scheduler plugin scheduling uncertainty should not cause unavailability
Implementation ideas
Instead of keeping "buffer" resources, we should just remove them entirely, and be willing to make inaccurate scheduling decisions. The worst-case scenario is that we accidentally overcommit by a little bit — in practice, real resource usage in our clusters is much lower than reserved resources, so we have wiggle room.
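As a sketch of the difference (illustrative names again): a freshly-restarted scheduler would reserve whatever usage it can read back from cluster state (e.g. the VM annotations above) and admit pods against that immediately, rather than holding back until a buffer is resolved:

```go
// Sketch of the proposed admission check, with no buffer term.
func canSchedule(nodeCapacity, nodeReserved, podRequest uint32) bool {
	// If a VM scaled up between our last read of its state and now,
	// nodeReserved is slightly stale and we may overcommit a little,
	// which is acceptable since real usage runs well below reservations.
	return nodeReserved+podRequest <= nodeCapacity
}
```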