coreos / tectonic-installer

Install a Kubernetes cluster the CoreOS Tectonic Way: HA, self-hosted, RBAC, etcd Operator, and more
Apache License 2.0

Resizing etcd disks causes etcd node resource to be recreated instead of resizing disk dynamically #567

Open · chancez opened this issue 7 years ago

chancez commented 7 years ago

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html indicates that EBS volumes can be resized on live instances. A few Terraform GitHub issues suggest this should be supported as of 0.8.8 (https://github.com/hashicorp/terraform/issues/11931), but https://github.com/hashicorp/terraform/issues/12697 reports someone hitting the same problem on 0.8.8.
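
For context, a minimal sketch of the kind of configuration that reproduces this; the resource name and values are hypothetical stand-ins mirroring the plan output below, not the actual tectonic-installer etcd module:

resource "aws_instance" "etcd_node" {
  ami           = "ami-102f0875"
  instance_type = "t2.medium"

  root_block_device {
    volume_type = "gp2"
    volume_size = 45 # bumped from 30; on affected Terraform versions this forces a new instance
  }
}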

Here's the output of my terraform plan platforms/aws:


-/+ module.etcd.aws_instance.etcd_node
    ami:                                       "ami-102f0875" => "ami-102f0875"
    associate_public_ip_address:               "false" => "<computed>"
    availability_zone:                         "us-east-2a" => "<computed>"
    ebs_block_device.#:                        "0" => "<computed>"
    ephemeral_block_device.#:                  "0" => "<computed>"
    instance_state:                            "running" => "<computed>"
    instance_type:                             "t2.medium" => "t2.medium"
    ipv6_addresses.#:                          "0" => "<computed>"
    key_name:                                  "chance" => "chance"
    network_interface.#:                       "0" => "<computed>"
    network_interface_id:                      "eni-11a83179" => "<computed>"
    placement_group:                           "" => "<computed>"
    primary_network_interface_id:              "eni-11a83179" => "<computed>"
    private_dns:                               "ip-10-0-48-55.us-east-2.compute.internal" => "<computed>"
    private_ip:                                "10.0.48.55" => "<computed>"
    public_dns:                                "" => "<computed>"
    public_ip:                                 "" => "<computed>"
    root_block_device.#:                       "1" => "1"
    root_block_device.0.delete_on_termination: "true" => "true"
    root_block_device.0.iops:                  "100" => "100"
    root_block_device.0.volume_size:           "30" => "45" (forces new resource)
    root_block_device.0.volume_type:           "gp2" => "gp2"
    security_groups.#:                         "0" => "<computed>"
    source_dest_check:                         "true" => "true"
    subnet_id:                                 "subnet-a68119cf" => "subnet-a68119cf"
    tags.%:                                    "5" => "5"
    tags.Name:                                 "chancez-tec-4-etcd-0" => "chancez-tec-4-etcd-0"
    tags.kubernetes.io/cluster/chancez-tec-4:  "owned" => "owned"
    tags.owner:                                "chance zibolski" => "chance zibolski"
    tags.purpose:                              "logging-tests" => "logging-tests"
    tags.team:                                 "tectonic" => "tectonic"
    tenancy:                                   "default" => "<computed>"
    user_data:                                 "413a61017a30e3772419297a63495f156a9702f0" => "413a61017a30e3772419297a63495f156a9702f0"
    vpc_security_group_ids.#:                  "1" => "1"
    vpc_security_group_ids.3052451669:         "sg-f795dd9e" => "sg-f795dd9e"

~ module.etcd.aws_route53_record.etc_a_nodes
    records.#: "" => "<computed>"

Plan: 1 to add, 1 to change, 1 to destroy.
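
One possible way to sidestep this, sketched below with illustrative names that are not part of the current modules, would be to keep etcd data on a separate aws_ebs_volume rather than the root disk; newer AWS provider releases can grow a standalone volume in place via the EBS ModifyVolume API, though whether a given Terraform version does so without replacement would need verifying:

resource "aws_ebs_volume" "etcd_data" {
  availability_zone = "us-east-2a"
  type              = "gp2"
  size              = 45 # growing this should not require replacing the instance
}

resource "aws_volume_attachment" "etcd_data" {
  device_name = "/dev/xvdf" # hypothetical device; mounting it at the etcd data dir is handled on the node
  volume_id   = "${aws_ebs_volume.etcd_data.id}"
  instance_id = "${aws_instance.etcd_node.id}"
}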
s-urbaniak commented 7 years ago

Just for completeness: this is an upstream Terraform issue.

t-readyroc commented 7 years ago

Getting the same thing on Terraform 0.9.4:

root_block_device.#:                       "1" => "1"
root_block_device.0.delete_on_termination: "false" => "false"
root_block_device.0.iops:                  "120" => "<computed>"
root_block_device.0.volume_size:           "40" => "60" (forces new resource)
root_block_device.0.volume_type:           "gp2" => "gp2"
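
Until the upstream fix lands, a possible stop-gap, a rough sketch assuming 0.9-era syntax and not something the installer does today, is to have Terraform ignore root volume drift and grow the volume out of band:

resource "aws_instance" "etcd_node" {
  # ... existing instance configuration ...

  lifecycle {
    # Ignore root_block_device diffs so an out-of-band resize
    # (EBS ModifyVolume plus a filesystem grow on the node)
    # no longer forces a new resource.
    ignore_changes = ["root_block_device"]
  }
}

The trade-off is that Terraform stops converging volume_size for these nodes, so the desired size has to be tracked outside the plan.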