uswitch / kiam

Integrate AWS IAM with Kubernetes
Apache License 2.0

Document / demo Kops deployment #25

Open pingles opened 6 years ago

pingles commented 6 years ago

Kops applies some taints to masters that change the default deployment manifests in ./deploy. It'd be nice to have an easier/simpler kops example that people could run to demo kiam working.

pingles commented 6 years ago

#15 has some related background.

flypenguin commented 6 years ago

For me the missing piece is how to change/add the required sts:AssumeRole permission to the host role, because the host role is created and managed by kops and, AFAIK, can't be changed. According to the docs it seems I need to add this permission to it.

Hints welcome, btw ;)

coryodaniel commented 5 years ago

@flypenguin

Try kops edit cluster then add the entries below:

spec:
  additionalPolicies:
    master: |
      ADDL_POLICY_JSON_HERE
    node: |
      ADDL_POLICY_JSON_HERE
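
To make the sts:AssumeRole piece concrete, a minimal sketch of what could go in place of ADDL_POLICY_JSON_HERE for the node role might look like the following. Note that kops expects additionalPolicies to be a JSON array of IAM statement objects (not a full policy document), and in practice you'd want to scope Resource down to your kiam role ARNs rather than "*":

```yaml
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["sts:AssumeRole"],
          "Resource": "*"
        }
      ]
```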

I use kops toolbox template so I don't have to manually inline the policies.

spec:
  additionalPolicies:
    master: |
      {{ include "iam-masters.json" . | indent 6 }}
    node: |
      {{ include "iam-nodes.json" . | indent 6 }}

kops toolbox template --template ./templates/my.tmpl.yaml --snippets ./snippets/

I'd have those JSON files in the snippets dir.
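
As a sketch, a snippets file such as ./snippets/iam-nodes.json (the filename matches the include above; the account ID and role name pattern below are placeholders) could contain just the statement array:

```json
[
  {
    "Effect": "Allow",
    "Action": ["sts:AssumeRole"],
    "Resource": "arn:aws:iam::123456789012:role/kiam-*"
  }
]
```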

roffe commented 5 years ago

I use a Terraform module to create the kiam roles. The masters in a kops cluster can assume roles by default, so all I had to do was specify in my kiam roles that the masters can assume them:

The module

# Support data
data "aws_caller_identity" "current" {}

resource "aws_iam_role" "kiam_role" {
  name = "${var.clusterName}_${var.application_name}"
  path = "/"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "AWS": [
          "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/masters.${var.clusterName}"
        ]
      },
      "Effect": "Allow"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "kiam_policy" {
  name = "${var.clusterName}_${var.application_name}_policy"
  role = "${aws_iam_role.kiam_role.id}"

  policy = "${var.policy}"
}
variable "application_name" {
  description = "The kiam role name"
  type        = "string"
}

variable "clusterName" {
  description = "Cluster name"
  type        = "string"
}

variable "policy" {
  description = "The application policy"
  type        = "string"
}

Then to use the module (clusterName should be the name of your kops-created cluster):

variable "clusterName" {
  default = "staging.domain.com"
}

provider "aws" {
  region  = "eu-west-1"
  version = "1.14"
}

# -----

module "cluster_autoscaler_kiam" {
  source           = "../modules/kiam"
  application_name = "cluster_autoscaler"
  clusterName      = "${var.clusterName}"

  policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*"
        }
    ]
}
EOF
}

Then give the cluster autoscaler (CA) the annotation iam.amazonaws.com/role: staging.domain.com_cluster_autoscaler
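
For reference, kiam reads the role annotation from the pod, and the pod's namespace must whitelist the role via an iam.amazonaws.com/permitted regex annotation. A hedged sketch (the namespace and the permissive ".*" regex are illustrative; you'd normally use a tighter pattern):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
  annotations:
    iam.amazonaws.com/permitted: ".*"
---
# Pod template fragment for the cluster-autoscaler deployment
spec:
  template:
    metadata:
      annotations:
        iam.amazonaws.com/role: staging.domain.com_cluster_autoscaler
```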


roffe commented 5 years ago

Oh, the server.yml needs the following tolerations:

      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
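
For context, the toleration is needed because kops taints masters with node-role.kubernetes.io/master:NoSchedule, and the kiam server is typically pinned to the masters. A sketch of the relevant pod spec fragment, pairing the toleration with a nodeSelector (the kubernetes.io/role: master label is the kops default):

```yaml
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/role: master
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
```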

jordiclariana commented 5 years ago

Thanks @roffe, your module worked perfectly! I also hit the same problem with tolerations and fixed it like you did.

sylvain-rouquette commented 5 years ago

Is there anything else needed to run kiam on a kops cluster?

I'm getting this error on the kiam-server:

transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:443->127.0.0.1:45322: read: connection reset by peer