DJAlPee opened this issue 1 year ago
This is definitely the more common use case for us. We reuse a lot of our Terraform code. Having something like `fail_on_missing` would be great for us.
Also possibly related to https://github.com/hashicorp/terraform/issues/33624, where we want to be able to switch off the `import` blocks.
This would definitely help by allowing us to predefine imports in the modules we provide to our devs. We're mostly migrating existing infrastructure, and handling Terraform imports outside the CI pipeline is a time waster right now.
I thought I'd be able to work around this by using `terraform import` to determine which import blocks I could generate, but there's no dry-run feature available, per this closed issue: https://github.com/hashicorp/terraform/issues/25713
This patch https://github.com/hashicorp/terraform/commit/350f9b86349531c2687d771b9e26e922cdfd2d19 works, though it assumes any import error should be ignored:
```diff
diff --git a/internal/terraform/node_resource_plan_instance.go b/internal/terraform/node_resource_plan_instance.go
index a4bbaa5359..561faa7371 100644
--- a/internal/terraform/node_resource_plan_instance.go
+++ b/internal/terraform/node_resource_plan_instance.go
@@ -165,7 +165,11 @@ func (n *NodePlannableResourceInstance) managedResourceExecute(ctx EvalContext)
 	// If the resource is to be imported, we now ask the provider for an Import
 	// and a Refresh, and save the resulting state to instanceRefreshState.
 	if importing {
-		instanceRefreshState, diags = n.importState(ctx, addr, importId, provider, providerSchema)
+		var importDiags tfdiags.Diagnostics
+		instanceRefreshState, importDiags = n.importState(ctx, addr, importId, provider, providerSchema)
+		if importDiags.HasErrors() {
+			importing = false
+		}
 	} else {
 		var readDiags tfdiags.Diagnostics
 		instanceRefreshState, readDiags = n.readResourceInstanceState(ctx, addr)
```
Looks like ignoring `Cannot import non-existent remote object` errors might not be sufficient, as some providers just error out on some resources when attempting an import (e.g. GCP project IAM bindings):

```
│ Error: Cannot find binding for "project \"gcp-project\"" with role "roles/...", member "serviceAccount:...@gcp-project.iam.gserviceaccount.com", and condition title ""
```
We have a very similar use-case: our AWS account configuration is managed using Terraform. `import` blocks provide a good way to handle our existing accounts, but that won't work for a new account unless you run the configuration as of an old config version first and then let it upgrade, or do something kludgy like deleting the imports first. I think that `fail_on_missing` proposal would be great, but I'm wondering whether the name should be something more along the lines of `create_when_missing` to focus on the desired outcome.
> We have a very similar use-case: our AWS account configuration is managed using Terraform. `import` blocks provide a good way to handle our existing accounts, but that won't work for a new account unless you run the configuration as of an old config version first and then let it upgrade, or do something kludgy like deleting the imports first. I think that `fail_on_missing` proposal would be great, but I'm wondering whether the name should be something more along the lines of `create_when_missing` to focus on the desired outcome.
I disagree that `create_when_missing` is needed. Remember that you're attaching the `import` block to a resource. The `import` block just needs to be ignored if the `fail_on_missing` flag is false, allowing the Terraform run to gracefully continue and plan to create a new resource instead of failing entirely.
Something like this would be hugely helpful. We are forced to use local backends for our Terraform and have had cases where our state file was lost. Being able to have something equivalent to "import if exists, else create" would be amazing. Given @antoinedeschenes' comment about only catching the missing-resource error being insufficient, I would suggest something along the lines of `allow_failure` or `ignore_error`. Either that, or ignore the fact that not all providers return proper errors and encourage them to align with standards.
> I disagree that `create_when_missing` is needed. Remember that you're attaching the `import` block to a resource. The `import` block just needs to be ignored if the `fail_on_missing` flag is false, allowing the Terraform run to gracefully continue and plan to create a new resource instead of failing entirely.
The property name doesn't have to describe the behavior in the code. Something like `create_when_missing` feels more "natural" regarding the "Developer Experience". In my opinion (!), it might make sense to have this as the default behavior after some time... But that would be a breaking change 😄
> [...] I would suggest something along the lines of `allow_failure` or `ignore_error`. Either that, or ignore the fact that not all providers return proper errors and encourage them to align with standards.

This sounds too generic... What about errors regarding permissions, for example?
Would be great if there were already something in place that is able to search for the given ID using the provider's implementation. Catching an import error (and deciding whether or not to throw it) doesn't feel right to me and could lead to unexpected behavior with some providers. The workflow should look like this (see the sketch below):

- `create_when_missing = true` → search whether the given ID exists (new behavior)
- `create_when_missing = false` → import (current behavior)

This would cause one additional API call during the import, but only once!
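A minimal sketch of what that per-import flag might look like, purely as an illustration; `create_when_missing` is not an existing Terraform argument, and the bucket name is made up:

```hcl
# Hypothetical syntax: "create_when_missing" does not exist in Terraform today.
import {
  to = aws_s3_bucket.assets
  id = "legacy-assets-bucket"

  # true  -> look up the ID first; if nothing is found, plan a normal create
  # false -> current behavior: the plan fails with
  #          "Cannot import non-existent remote object"
  create_when_missing = true
}

resource "aws_s3_bucket" "assets" {
  bucket = "legacy-assets-bucket"
}
```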
@jbardin are you able to share a timeframe for when this might be included? And in the meantime, are there any recommended patterns for workarounds?

I'm also hitting the issue with CloudWatch log groups that are automatically created by other AWS services. I need to:
@kevinkuszyk, No, there is no timeline yet, but it does not sound like it would fit your use case. The import action and planning decisions all need to happen during the plan, so if there are log groups created implicitly by other resources, you're still going to end up creating conflicting log groups during apply. What you're describing is a much more complex feature which requires support from providers and a new protocol to handle. It's something we're aware of, but not directly related to import.
This issue is critical to what I'm trying to build. I have an immutable Kubernetes secret which I want to import if it exists, but if not I wish to have it created. I'm relying on the `ignore_changes` lifecycle option to maintain the immutability of the secret, which works great for my use case; however, if the secret doesn't exist I get the `Cannot import non-existent remote object` error.
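For illustration, a minimal sketch of that pattern, assuming the Kubernetes provider and made-up secret/namespace names; today the `import` block below fails outright when the secret is absent, which is exactly the gap described above:

```hcl
variable "bootstrap_token" {
  type      = string
  sensitive = true
}

import {
  to = kubernetes_secret.bootstrap_token
  id = "platform/bootstrap-token" # import ID is "namespace/name"
}

resource "kubernetes_secret" "bootstrap_token" {
  metadata {
    name      = "bootstrap-token"
    namespace = "platform"
  }

  data = {
    token = var.bootstrap_token
  }

  immutable = true

  lifecycle {
    # keep whatever value was imported (or first created) untouched
    ignore_changes = [data]
  }
}
```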
I would also love to have a `create_if_not_exists` option on the `import` block, or something similar. We manage some resource types that are persisted in external DBs and therefore survive when we rebuild our environments. However, we often create them with TF directly: having to deploy the config twice, first with the `import` block commented out and then the final one, is annoying.
Now that `for_each` is part of `import`, it is easier than ever to have dynamic import blocks. A `fail_on_missing = false` flag would allow us to quickly migrate larger sets of infrastructure without manual intervention.
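As a rough sketch of that kind of dynamic import (variable and resource names are illustrative; `for_each` in `import` blocks requires Terraform 1.7+):

```hcl
variable "existing_buckets" {
  # logical key => real bucket name to import
  type    = map(string)
  default = {}
}

import {
  for_each = var.existing_buckets
  to       = aws_s3_bucket.managed[each.key]
  id       = each.value
}

resource "aws_s3_bucket" "managed" {
  for_each = var.existing_buckets
  bucket   = each.value
}
```

Today a single `id` that no longer exists fails the whole plan; with a hypothetical `fail_on_missing = false` on the `import` block, such entries would simply fall back to a normal create.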
Bump. This would be exceptionally helpful for our org, where we are migrating tens of thousands of cloud resources and are continuing to deploy infrastructure.

Side note: our workaround is to place all of our `import` blocks in an `import.tf` configuration file and temporarily rename the file if the plan fails. Simple implementation in Bash below:
```bash
#!/bin/bash

# constants
blue='\e[0;34m'
red='\e[0;31m'
nocolor='\e[0m'
indent="  "

##### OPTARG PARSING #####

# terminal help function, exits script after execution
helpFunction()
{
  echo ""
  echo "Usage: $0 -c ./terraform/configurations/admin -e dev"
  echo -e "\t-c Path to the configuration to initialize"
  echo -e "\t-e Name of the configuration environment. Must contain only alpha characters."
  echo ""
  exit 1
}

# acquiring opts, prints helpFunction in case a parameter is non-existent
while getopts "c:e:" opt
do
  case "$opt" in
    c ) configurationPath="$OPTARG" ;;
    e ) environment="$OPTARG" ;;
    ? ) helpFunction ;;
  esac
done

# print helpFunction in case parameters are empty
if [ -z "$configurationPath" ] || [ -z "$environment" ]
then
  echo -e "${indent}${red}Some or all of the parameters are empty${nocolor}"
  helpFunction
fi

importFile=import.tf
importFileTmpCache=./$configurationPath/import.cache
stderrLocation=/tmp/tfplan.stderr
stdoutLocation=/tmp/tfplan.stdout

{
  # execute plan with all tf files including imports
  terraform -chdir=$configurationPath plan -var-file=./env/$environment.tfvars 2>$stderrLocation 1>$stdoutLocation
} || {
  # execute without imports if the plan fails - rename import file temporarily to skip imports
  # workaround for: https://github.com/hashicorp/terraform/issues/33633
  echo -e "\t\tAttempting Plan without Import"
  mv ./$configurationPath/$importFile $importFileTmpCache
  terraform -chdir=$configurationPath plan -var-file=./env/$environment.tfvars 2>$stderrLocation 1>$stdoutLocation
  mv $importFileTmpCache ./$configurationPath/$importFile
}

# output last plan stdout and stderr
cat $stdoutLocation
cat $stderrLocation
```
> Now that `for_each` is part of `import`, it is easier than ever to have dynamic import blocks. A `fail_on_missing = false` flag would allow us to quickly migrate larger sets of infrastructure without manual intervention.
This is exactly what we are running into. I was so excited about `for_each` in `import`, but quickly ran into issues where I won't be able to use the same code for import and create. Please add this feature. Thanks.
Allowing a `create = true` option would be very helpful for all sorts of advanced Terraform usage in production flows. Take, for example, a scenario where a namespace that was auto-generated by a Flux Helm install is now moving to being managed in Terraform state. Rolling this out for (n) environments would require scripts to align state, whereas `create = true` would keep the Terraform code robust for both existing environments and new deployments.
@mlucic I see that you have already suggested a fix with PR https://github.com/hashicorp/terraform/pull/34647. Do you still plan on signing the CLA so that it gets a chance to be merged? That would be so helpful!
> @mlucic I see that you have already suggested a fix with PR #34647. Do you still plan on signing the CLA so that it gets a chance to be merged?

Looks like it was closed but not merged?
I don't understand how `create = true` is expected to work differently from `fail_on_missing = false`, since "not failing" means the target resource of the import would presumably be created as usual. So those arguments should be equivalent. But either way, what's the blocker here? It feels like "optionally pretend the directive wasn't there if the import failed" isn't a year's worth of work, and the ability to import a resource if it exists and create it if it doesn't seems pretty valuable in a lot of cases.
### Terraform Version

### Use Cases
In our CI/CD system, I want to import resources that may already have been created. If they do not exist, Terraform could safely create them using the existing configuration.

My concrete example: I have some AWS Lambda functions that were created by Terraform. When executed for the first time, each function creates a LogGroup with the function name in CloudWatch. Unfortunately, the default config for these LogGroups doesn't fit our needs (e.g. no log retention being set). When I add the LogGroup to the Terraform configuration, applying fails in most (but not all!) cases, because it tries to create a LogGroup with the existing name.
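For illustration, a minimal sketch of that situation (function and log group names are made up): the `import` block works once the function has been invoked at least once, but fails with `Cannot import non-existent remote object` for a brand-new function.

```hcl
import {
  to = aws_cloudwatch_log_group.lambda
  id = "/aws/lambda/my-function" # log group auto-created on first invocation
}

resource "aws_cloudwatch_log_group" "lambda" {
  name              = "/aws/lambda/my-function"
  retention_in_days = 30 # the setting the auto-created group lacks
}
```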
### Attempted Solutions
In similar situations, we added some commands before doing the apply and imported the resources using the CLI `import` command, or just deleted the resource. The new `import` block would be a game changer for us...

Thanks to CDKTF, as a workaround we can make the `import` block optional and check the existence of the resource using the AWS API.
### Proposal

I see two possible ways to tackle this:

Option 2 feels best for me, because the behavior can be configured individually for each resource/import. The config could look like:

A more "positive"-sounding proposal by @acdha: `create_when_missing`.
### References

No response