Open azrdev opened 1 year ago
I'm trying to implement a fix for this one.

SetId(""): the SDK docs for Resource.Read state "Managed resources can signal to Terraform that the managed resource instance no longer exists and potentially should be recreated by calling the SetId method with an empty string ("") parameter and without returning an error." So next to the d.Set("lease_expire_at", ...) call I added the code

```go
if time.Time(deployment.LeaseExpireAt).Before(time.Now()) {
	d.SetId("")
}
```
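Fleshed out a bit (a simplified sketch only -- function name and the ReadContext-style signature are assumed for illustration, and error handling plus the remaining d.Set calls are omitted), the pattern the SDK docs describe looks like this:

```go
func resourceDeploymentRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	// ... fetch `deployment` from the vRA API as before ...

	if time.Time(deployment.LeaseExpireAt).Before(time.Now()) {
		// Signal that the instance effectively no longer exists:
		// clear the ID and return without an error, so the next
		// plan schedules the deployment for creation.
		d.SetId("")
		return nil
	}

	// ... the usual d.Set("lease_expire_at", ...) etc. ...
	return nil
}
```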
With that, terraform plan shows that an expired vra_deployment already in tfstate is planned to be created, but there's no reference that the old deployment is removed. This solves the problem, but IMHO it could dangerously hide a situation one would want to recognize.

As proposed in the stackoverflow answer, I implemented CustomizeDiff and call ForceNew conditionally: in vra/resource_deployment.go#resourceDeployment() I added
```go
CustomizeDiff: customdiff.ForceNewIf(
	"lease_expire_at",
	func(ctx context.Context, d *schema.ResourceDiff, meta interface{}) bool {
		expiration, err := strfmt.ParseDateTime(d.Get("lease_expire_at").(string))
		log.Printf("checking expiration: %s %s vs %s", expiration, err, time.Now())
		return err == nil && time.Time(expiration).Before(time.Now())
	}),
```
but that results in an empty plan on an expired deployment. TF_LOG=1 reveals:

```
provider.terraform-provider-vra: unable to require attribute replacement: error="ForceNew: No changes for lease_expire_at" tf_attribute_path=lease_expire_at tf_req_id=<uuid> @caller=/home/x/go/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/v2@v2.25.0/helper/customdiff/force_new.go:29 @module=sdk.helper_schema tf_provider_addr=provider tf_resource_type=vra_deployment tf_rpc=PlanResourceChange
```
So ForceNewIf only works as documented when the named attribute itself has a change in the plan; requiring replacement of an attribute without a diff fails with the "No changes" error above, and ForceNew is never set to true. Just adding a boolean attribute and setting it with d.Set("is_expired", time.Time(deployment.LeaseExpireAt).Before(time.Now())) doesn't suffice either, because ForceNew triggers on changes of that attribute: if at planning time it's already expired (and was already expired in state), nothing changes and the deployment remains (false negative), and if a resource is resurrected/un-expired, ForceNew will recreate it (false positive).

So the approach in my PR is: the user sets a new attribute recreate_if_expired_at to plantimestamp(), and ForceNew is set to true when expired_at < plantimestamp. This works as expected, but when the attribute is used, it shows a diff on every plan: see #514.
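For reference, the relevant CustomizeDiff in the PR looks roughly like this (a simplified sketch with approximate details, not the exact PR code; the Before() comparison referenced below is the final line):

```go
CustomizeDiff: customdiff.ForceNewIf(
	"recreate_if_expired_at",
	func(ctx context.Context, d *schema.ResourceDiff, meta interface{}) bool {
		raw, ok := d.GetOk("recreate_if_expired_at")
		if !ok {
			// Attribute not used: never force recreation.
			return false
		}
		planTime, err := strfmt.ParseDateTime(raw.(string))
		if err != nil {
			return false
		}
		expiration, err := strfmt.ParseDateTime(d.Get("lease_expire_at").(string))
		// Force recreation when the lease expired before the plan timestamp.
		return err == nil && time.Time(expiration).Before(time.Time(planTime))
	}),
```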
I tried to hide/suppress that diff with the following addition to the PR (in the CustomizeDiff func, just above the Before() comparison), but that suppresses the recreation as well:
```go
old, new := d.GetChange("recreate_if_expired_at")
log.Printf("Hiding recreate_if_expired_at from diff: overwriting new %#v with old %#v", new, old)
if err := d.SetNew("recreate_if_expired_at", old); err != nil {
	log.Printf("[DEBUG] Failed to hide recreate_if_expired_at from diff: %v", err)
}
```
So given what I've tried above, I see two solutions:

- implement the third bulletpoint, d.SetId("") (maybe behind a boolean user-input flag such as recreate_if_expired -- TODO: implement; a sketch of what that could look like follows after the tables): an expired vra_deployment already in tfstate is planned to be created, but there's no reference that the old deployment is removed. Plan output:

  | deployment state | recreate_if_expired = false | recreate_if_expired = true |
  |---|---|---|
  | is expired | nothing | create (expired deployment is ignored) |
  | is not expired | nothing | nothing |
- merge my PR #514, which will yield these plans:

  | deployment state | recreate_if_expired_at = null | recreate_if_expired_at = plantimestamp() |
  |---|---|---|
  | is expired | nothing | destroy+create |
  | is not expired | nothing | diff on the attribute in every plan |
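For the first option, a hypothetical sketch of what the opt-in flag could look like on the provider side (attribute name, description and default are suggestions only -- nothing like this is implemented yet):

```go
// Hypothetical, not implemented: opt-in flag for dropping expired
// deployments from state.

// 1. In the vra_deployment schema:
"recreate_if_expired": {
	Type:        schema.TypeBool,
	Optional:    true,
	Default:     false,
	Description: "If true, an expired deployment is removed from state during Read so the next plan recreates it.",
},

// 2. In the Read function, gating the SetId("") pattern shown above:
if d.Get("recreate_if_expired").(bool) &&
	time.Time(deployment.LeaseExpireAt).Before(time.Now()) {
	d.SetId("")
	return nil
}
```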
Description
Our vRA imposes a lifetime on all deployments, i.e. they expire after 30 days. Afterwards the deployed VMs are powered off and not usable anymore until being deleted some time later. For terraform, however, they still exist "in good state" -- attributes like last_request and lease_expire_at tell that the deployment has expired and allow failing such a VM early using a postcondition, but I'm looking for a way to re-create such expired deployments. Context is a scheduled CI pipeline which runs terraform followed by config management to ensure certain infrastructure (VM + application) is always present. Currently it fails in config management because the expired VM is powered off, requiring manual destruction of the VM to recreate the setup.

This is a follow-up to #436, since even with it solved in #462 (thank you!) a workaround does not seem to be possible (see https://github.com/hashicorp/terraform/issues/31702).
Possible solutions
Adding a parameter to the resource (or provider?) recreate_expired: bool is probably the best way to go.
Describe alternatives you've considered
- postcondition with self.last_request.action_id != "Deployment.Expire" -- works, but only fails plan/apply and requires manual action to trigger recreation
- replace_triggered_by: cannot be applied to self, see linked terraform issue above
- replace_triggered_by → data "vra_resource" { (same ID as the resource) } (full example: see linked tf issue above): the data source fails when the resource does not yet exist