vancluever / terraform-provider-acme

Terraform ACME provider
https://registry.terraform.io/providers/vancluever/acme/latest
Mozilla Public License 2.0

If an error happens, the next plan/apply returns no error; the certificate is not updated at AWS #441

Closed: EugenKon closed this issue 4 weeks ago

EugenKon commented 4 weeks ago

After I removed the state (see https://github.com/vancluever/terraform-provider-acme/issues/440), I ran plan/apply and got the following error:

╷
│ Error: importing ACM Certificate (arn:aws:acm:us-west-2:315400321086:certificate/5a9a682f-fcb1-48e4-8d09-72a68323397a): operation error ACM: ImportCertificate, https response error StatusCode: 400, RequestID: 352e65db-bca9-4c7a-9315-0e44c2f13abb, api error ValidationException: New certificate has a key of EC_prime256v1 which is different from EC_secp384r1 in the current certificate.
│
│   with module.private-cloud.aws_acm_certificate.ssl,
│   on modules/private-cloud/ssl.tf line 47, in resource "aws_acm_certificate" "ssl":
│   47: resource "aws_acm_certificate" "ssl" {
│
╵

When I rerun plan/apply, the error is gone, but the certificate stored in AWS stays unchanged.

Expected result

If an error happens, the resource should be left in a failed state. The next plan/apply should not behave as if the error had magically gone away.
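
For context, the certificate is produced roughly along these lines. This is a simplified sketch, not the exact configuration; the domain, resource names, and key_type value are illustrative:

```hcl
# Simplified sketch (illustrative names/values), assuming the usual
# acme_certificate -> aws_acm_certificate import pattern.
resource "acme_certificate" "ssl" {
  account_key_pem = acme_registration.reg.account_key_pem
  common_name     = "app.example.com" # illustrative domain
  key_type        = "P256"            # new cert key type; the cert already imported in ACM was issued as P-384

  dns_challenge {
    provider = "route53"
  }
}

resource "aws_acm_certificate" "ssl" {
  # Importing an externally issued certificate into ACM.
  private_key       = acme_certificate.ssl.private_key_pem
  certificate_body  = acme_certificate.ssl.certificate_pem
  certificate_chain = acme_certificate.ssl.issuer_pem
}
```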

vancluever commented 4 weeks ago

@EugenKon can you try again with a completely fresh state as mentioned in #440?

EugenKon commented 4 weeks ago

Sure, if I start from a clean state it works. The issue is that my Terraform state and the actual infrastructure differ, and Terraform does not report that.

vancluever commented 4 weeks ago

@EugenKon just judging from the commentary in #440, I would not expect the state to work consistently without a full destroy/recreation of the state from the staging infrastructure. If this is not manifesting when the directory URL is not being changed, then things are working as expected.

My recommendation, as mentioned in the acme_registration resource docs, is to make sure you separate your staging and production environments using workspaces (see the sketch below). I'd recommend this over multiple provider instances (as also mentioned in the docs), since I don't think you need multiple directories in the same configuration in your case.
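As a rough sketch (the URLs here are the standard Let's Encrypt production and staging directory endpoints; adjust the workspace names and URLs to your setup):

```hcl
provider "acme" {
  # Pick the ACME directory based on the active workspace so staging and
  # production certificates never share the same state.
  server_url = (
    terraform.workspace == "production"
    ? "https://acme-v02.api.letsencrypt.org/directory"
    : "https://acme-staging-v02.api.letsencrypt.org/directory"
  )
}
```

Then use `terraform workspace new`/`terraform workspace select` to switch environments, so each directory URL gets its own state.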

Thanks!