dwmkerr / terraform-aws-openshift

Create infrastructure with Terraform and AWS, install OpenShift. Party!
http://www.dwmkerr.com/get-up-and-running-with-openshift-on-aws
MIT License

Think of a good way to deal w/ undestroyed clusters. #20

Closed: jayunit100 closed this issue 7 years ago

jayunit100 commented 7 years ago

I'm working on making ephemeral infrastructure out of this, and to do that it seems like some manual deletion needs to happen:


* module.openshift.aws_vpc.openshift: 1 error(s) occurred:

* aws_vpc.openshift: Error creating VPC: VpcLimitExceeded: The maximum number of VPCs has been reached.
        status code: 400, request id: 9eefbbe2-1609-4ae9-a059-4fecdbdf4d6e
* module.openshift.aws_iam_policy.openshift-policy-forward-logs: 1 error(s) occurred:

* aws_iam_policy.openshift-policy-forward-logs: Error creating IAM policy openshift-instance-forward-logs: EntityAlreadyExists: A policy called openshift-instance-forward-logs already exists. Duplicate names are not allowed.
        status code: 409, request id: 70c8c246-b28c-11e7-a4db-f701a4b20913
* module.openshift.aws_iam_role.openshift-instance-role: 1 error(s) occurred:

* aws_iam_role.openshift-instance-role: Error creating IAM Role openshift-instance-role: EntityAlreadyExists: Role with name openshift-instance-role already exists.
        status code: 409, request id: 70c9857f-b28c-11e7-9db0-3532c5b1a3f0
* module.openshift.aws_key_pair.keypair: 1 error(s) occurred:

* aws_key_pair.keypair: Error import KeyPair: InvalidKeyPair.Duplicate: The keypair 'openshift' already exists.
        status code: 400, request id: 41bfcbea-5147-4585-a75b-cbaa8deac27a

It would be nice if there were a concept of AWS namespaces we could use for cleaner global deletion.
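
For anyone hitting the same thing: the collisions above are all on resources with fixed, account-wide names (the IAM role and policy, the EC2 key pair), plus the per-region VPC limit, so a half-destroyed cluster blocks the next apply. A minimal shell sketch for spotting leftovers before re-running, assuming the default names from this repo and a configured AWS CLI:

# look for leftovers from a previous cluster before `terraform apply`
aws iam get-role --role-name openshift-instance-role
aws iam list-policies --scope Local --query "Policies[?PolicyName=='openshift-instance-forward-logs']"
aws ec2 describe-key-pairs --key-names openshift
aws ec2 describe-vpcs --query 'length(Vpcs)'    # compare against the VPC quota (typically 5 per region by default)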

sheppduck commented 7 years ago

Can you create a new project (namespace?) every time you spin something up?

jayunit100 commented 7 years ago

That would be a cool solution: 'AWS namespaces', if they existed.
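
The closest built-in thing is probably Terraform workspaces (available in recent Terraform releases), which give each ephemeral cluster its own state file; they don't rename the AWS resources, though, so the fixed IAM and key pair names above would still need a per-cluster suffix in the config. A rough sketch of the workflow:

# one state per ephemeral cluster
terraform workspace new ephemeral-1
terraform apply

# tear it down and drop the "namespace"
terraform destroy
terraform workspace select default
terraform workspace delete ephemeral-1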

jayunit100 commented 7 years ago

Diving deeper:

I can see all the resources in the state:

[root@shared-dev terraform-aws-openshift]# terraform state list
module.openshift.aws_ami.amazonlinux
module.openshift.aws_ami.rhel7_2
module.openshift.aws_iam_role.openshift-instance-role
module.openshift.aws_internet_gateway.openshift
module.openshift.aws_route53_zone.internal
module.openshift.aws_route_table.public
module.openshift.aws_route_table_association.public-subnet
module.openshift.aws_security_group.openshift-public-egress
module.openshift.aws_security_group.openshift-public-ingress
module.openshift.aws_security_group.openshift-ssh
module.openshift.aws_security_group.openshift-vpc
module.openshift.aws_subnet.public-subnet
module.openshift.aws_vpc.openshift
module.openshift.template_file.setup-master
module.openshift.template_file.setup-node

But the resources that are failing here, like:

openshift-policy-forward-logs

are not actually shown in the state list, so Terraform is getting into an intermediate state with respect to:

module.openshift.aws_vpc.openshift
aws_vpc.openshift
module.openshift.aws_iam_policy.openshift-policy-forward-logs
aws_iam_policy.openshift-policy-forward-logs
module.openshift.aws_iam_role.openshift-instance-role
module.openshift.aws_key_pair.keypair
aws_key_pair.keypair

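One way to get out of that intermediate state without deleting anything is to import the already-existing resources back into it, using the addresses and names from the errors above. A sketch (the policy import needs your account id in the ARN; the VpcLimitExceeded error is a quota problem rather than a leftover, so import doesn't help there):

terraform import module.openshift.aws_iam_role.openshift-instance-role openshift-instance-role
terraform import module.openshift.aws_key_pair.keypair openshift
terraform import module.openshift.aws_iam_policy.openshift-policy-forward-logs \
    arn:aws:iam::<account-id>:policy/openshift-instance-forward-logs
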
sheppduck commented 7 years ago

My other idea is using tags.
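
Tags would work well for finding leftovers: if every resource carried a common tag, the resource groups tagging API could list everything from a dead cluster in one call, and a script could then delete it. A sketch, assuming a hypothetical Project=openshift tag on the resources (IAM resources and key pairs may not show up in tag-based lookup, so those might still need name-based cleanup):

# list every tagged resource from the cluster
aws resourcegroupstaggingapi get-resources \
    --tag-filters Key=Project,Values=openshift \
    --query 'ResourceTagMappingList[].ResourceARN'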

jayunit100 commented 7 years ago

I guess the only resource that is undestroyed is the instance profile stuff; the way to deal with this is:

1) if Terraform exited in a weird state, manually delete the resources that won't get found via destroy

2) the one other resource you need to delete manually is the instance_profile resource

So closing, since there is a workaround and it isn't really a bug in this repo; rather, it's just an issue with Terraform.
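
For reference, the manual cleanup for that workaround looks roughly like the sketch below; the instance profile name here is a guess (aws iam list-instance-profiles shows the real one), and the policy ARN needs your account id:

# 1) the instance profile / role / policy that destroy leaves behind
#    ('openshift-instance-profile' is an assumed name)
aws iam remove-role-from-instance-profile \
    --instance-profile-name openshift-instance-profile --role-name openshift-instance-role
aws iam delete-instance-profile --instance-profile-name openshift-instance-profile
aws iam detach-role-policy --role-name openshift-instance-role \
    --policy-arn arn:aws:iam::<account-id>:policy/openshift-instance-forward-logs
aws iam delete-role --role-name openshift-instance-role
aws iam delete-policy --policy-arn arn:aws:iam::<account-id>:policy/openshift-instance-forward-logs

# 2) the key pair, if it was also left behind
aws ec2 delete-key-pair --key-name openshift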