
=================================
Kubernetes AWS Cluster Autoscaler
=================================

.. image:: https://travis-ci.org/hjacobs/kube-aws-autoscaler.svg?branch=master
   :target: https://travis-ci.org/hjacobs/kube-aws-autoscaler
   :alt: Travis CI Build Status

.. image:: https://coveralls.io/repos/github/hjacobs/kube-aws-autoscaler/badge.svg?branch=master;_=1
   :target: https://coveralls.io/github/hjacobs/kube-aws-autoscaler?branch=master
   :alt: Code Coverage

THIS PROJECT IS NO LONGER MAINTAINED, PLEASE USE THE OFFICIAL `CLUSTER AUTOSCALER <https://github.com/kubernetes/autoscaler>`_ INSTEAD

Simple cluster autoscaler for AWS Auto Scaling Groups which sets the ``DesiredCapacity`` of one or more ASGs to the calculated number of nodes.
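
Concretely, adjusting a group comes down to one AWS API call per ASG. Below is a minimal sketch of that call using boto3; ``set_node_count`` is a hypothetical helper (not part of the project's API), the ASG name is a placeholder, and credentials and region are assumed to come from the environment:

.. code-block:: python

    import boto3

    def set_node_count(asg_name: str, desired: int) -> None:
        """Hypothetical helper: set the DesiredCapacity of one Auto Scaling Group."""
        autoscaling = boto3.client("autoscaling")  # region/credentials from environment
        autoscaling.set_desired_capacity(
            AutoScalingGroupName=asg_name,
            DesiredCapacity=desired,
            HonorCooldown=False,  # apply the new capacity immediately
        )

    set_node_count("my-worker-asg", 4)  # placeholder ASG name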

Goals
=====

This autoscaler was initially created as a proof of concept and was born out of frustration with the `"official" cluster-autoscaler`_.

Disclaimer
==========

Use at your own risk! This autoscaler was only tested with Kubernetes versions 1.5.2 to 1.7.7. There is no guarantee that it works with other Kubernetes versions.

Is it production ready? Yes, the kube-aws-autoscaler has been running in production at Zalando for months; see https://github.com/zalando-incubator/kubernetes-on-aws for more information and deployment configuration.

How it works
============

The autoscaler consists of a simple main loop which calls the ``autoscale`` function every 60 seconds (configurable via the ``--interval`` option). The main loop keeps no state (such as history); all input for the ``autoscale`` function comes either from static configuration or from the Kubernetes API server. On each run, ``autoscale`` sums the resource requests of all pods, adds the configured buffers, compares the result against the nodes' `allocatable capacity`_, and sets each ASG's ``DesiredCapacity`` to the resulting node count.
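
A compressed sketch of that control flow is shown below. It is not the project's actual code; the function signatures and the numbered steps are assumptions pieced together from the description above and the options under Configuration:

.. code-block:: python

    import time

    def autoscale() -> None:
        """One stateless pass: everything is derived from the current cluster state."""
        # 1. Fetch all nodes and pods from the Kubernetes API server.
        # 2. Sum the pods' resource requests (CPU, memory, pod count),
        #    grouped by the ASG/AZ their node belongs to.
        # 3. Add the configured buffers and spare nodes (see "Configuration").
        # 4. Divide by the nodes' allocatable capacity to get the required
        #    node count, scaling down by at most one step per run.
        # 5. Set each ASG's DesiredCapacity via the AWS Auto Scaling API.

    def main(interval: int = 60) -> None:
        # No state survives between iterations, so a restart loses nothing.
        while True:
            autoscale()
            time.sleep(interval)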

The whole process relies on having properly configured resource requests for all pods.

Usage
=====

Create the necessary IAM role (to be used by kube2iam if you have it deployed); a sketch of the required permissions follows below.
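
What the role must allow follows from what the autoscaler does: it describes Auto Scaling Groups and sets their ``DesiredCapacity``. The boto3 sketch below is one way to create such a role; the role name, account ID, and worker-node role are placeholders, the trust policy assumes the usual kube2iam pattern of letting worker nodes assume the role, and your setup may need additional ``Describe*`` permissions:

.. code-block:: python

    import json
    import boto3

    iam = boto3.client("iam")

    # Trust policy: let the worker-node role assume this role (kube2iam pattern).
    # Account ID and worker role name are placeholders.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/worker-node-role"},
            "Action": "sts:AssumeRole",
        }],
    }

    # Permissions: read ASGs and set their DesiredCapacity.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:SetDesiredCapacity",
            ],
            "Resource": "*",
        }],
    }

    iam.create_role(RoleName="kube-aws-autoscaler",
                    AssumeRolePolicyDocument=json.dumps(trust_policy))
    iam.put_role_policy(RoleName="kube-aws-autoscaler",
                        PolicyName="kube-aws-autoscaler",
                        PolicyDocument=json.dumps(policy))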

Deploy the autoscaler to your running cluster:

.. code-block:: bash

    $ kubectl apply -f deploy/deployment.yaml

See below for optional configuration parameters.

Configuration
=============

The following command line options are supported:

``--buffer-cpu-percentage``
    Extra CPU requests % to add to calculation, defaults to 10%.
``--buffer-memory-percentage``
    Extra memory requests % to add to calculation, defaults to 10%.
``--buffer-pods-percentage``
    Extra pod requests % to add to calculation, defaults to 10%.
``--buffer-cpu-fixed``
    Extra CPU requests to add to calculation, defaults to 200m.
``--buffer-memory-fixed``
    Extra memory requests to add to calculation, defaults to 200Mi.
``--buffer-pods-fixed``
    Extra number of pods to overprovision for, defaults to 10.
``--buffer-spare-nodes``
    Number of extra "spare" nodes to provision per ASG/AZ, defaults to 1.
``--include-master-nodes``
    Do not ignore auto scaling groups with master nodes.
``--interval``
    Time to sleep between runs in seconds, defaults to 60 seconds.
``--once``
    Only run once and exit (useful for debugging).
``--scale-down-step-fixed``
    Scale down step in terms of node count, defaults to 1.
``--scale-down-step-percentage``
    Scale down step in terms of node percentage (1.0 is 100%), defaults to 0%.
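
To make the buffer options concrete: a plausible reading is that, per resource, the percentage buffer is applied to the summed requests, the fixed buffer is added on top, and the result is divided by the per-node `allocatable capacity`_; the most constrained resource then determines the node count, plus the spare nodes. The worked example below illustrates that reading with the defaults above; it is not the project's code:

.. code-block:: python

    import math

    def required_nodes(requests, allocatable, pct, fixed, spare_nodes=1):
        """Illustrative: nodes one ASG needs; the most constrained resource wins."""
        needed = 0
        for r in ("cpu", "memory", "pods"):
            buffered = requests[r] * (1 + pct[r] / 100) + fixed[r]
            needed = max(needed, math.ceil(buffered / allocatable[r]))
        return needed + spare_nodes

    GiB = 1024 ** 3
    print(required_nodes(
        requests={"cpu": 3.5, "memory": 6 * GiB, "pods": 40},        # summed pod requests
        allocatable={"cpu": 2.0, "memory": 7.5 * GiB, "pods": 110},  # per node
        pct={"cpu": 10, "memory": 10, "pods": 10},                   # default 10% buffers
        fixed={"cpu": 0.2, "memory": 0.2 * GiB, "pods": 10},         # 200m, ~200Mi, 10 pods
    ))  # CPU is the bottleneck: ceil(4.05 / 2.0) + 1 spare = 4 nodes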

.. _"official" cluster-autoscaler: https://github.com/kubernetes/autoscaler .. _allocatable capacity: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/node-allocatable.md