Open MartinEmrich opened 1 year ago
Hi @MartinEmrich! Thanks for reporting this.
Unfortunately I think we have some conflicting requirements here. We made this change specifically because it is apparently insufficient to work with just the target of the import in isolation, as discussed in #27934. Others expect Terraform to evaluate the full configuration as if planning so that they can e.g. have a provider configuration which relies on the result of a data resource.
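For context, a configuration in the style that #27934 describes might look like this (a sketch; the provider and parameter names are illustrative, not taken from that issue):

```hcl
# The provider configuration depends on the result of a data read,
# so Terraform cannot configure the provider -- even for an import --
# without first evaluating the data resource.
data "aws_ssm_parameter" "cluster_host" {
  name = "/platform/k8s/host"
}

provider "kubernetes" {
  host = data.aws_ssm_parameter.cluster_host.value
}
```

With a configuration like this, importing "just the target in isolation" is impossible whenever the target belongs to that provider.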
I think this leaves us at an impasse where we have to choose between one of two "buggy" behaviors, as long as `terraform import` remains a separate command.
My sense is that the only complete fix would be for importing to be an additional planning option during a normal plan and apply, so that Terraform isn't forced into this weird edge-case state where it both needs to evaluate everything and evaluate nothing at the same time. The refactoring we did here was intended as a first step in that direction -- with deprecation of the separate `terraform import` command being the ultimate end goal -- but there's still more design and implementation work to do before importing can be just another modifier to a normal plan, in a similar vein to the existing `-replace` option, which tells Terraform to modify its treatment of a particular resource instance while still doing a normal plan elsewhere.
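For reference, the existing `-replace` option mentioned here is used like this (a sketch; the resource address is a placeholder):

```shell
# Plan as normal, but force one resource instance to be replaced;
# the rest of the configuration is still planned with the usual logic.
terraform plan -replace="aws_instance.example"
```

The idea is that importing would eventually become a similar per-resource modifier on an otherwise ordinary plan.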
In the meantime, though, I think it's required to treat `terraform import` as if it were a `terraform apply` command with an extra modifier. In particular, that means setting `-var` or `-var-file` to set all input variables that the configuration relies on.
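Concretely, that means invoking import with the same variable arguments you would pass to apply (a sketch; the var file and variable names are placeholders):

```shell
# Set every input variable the configuration relies on,
# exactly as you would for terraform apply:
terraform import -var-file="prod.tfvars" -var="region=eu-west-1" \
  aws_route53_zone.platformdomain AHOSTEDZONEID
```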
Import doesn't support the `-target` planning option to make Terraform only consider a subset of objects, although since importing is now internally performing a plan and apply mostly as normal, perhaps we could add support for `-target` in the interim and rely on the existing mechanisms for handling that. (I expect we'd want an additional rule that you can't use `-target` to exclude the same thing you're trying to import into, but hopefully that's not a hard situation to detect and report on. I've not actually checked the code to confirm, though.)
Hi @apparentlymart
I don't know if I understand correctly.... #27934 seems to relate to provider configuration, but I have no issues regarding that. (Could it be that the issue there is that a Terraform provider itself would only be usable once an actual resource is created and ready to provide a piece of data for that very provider?)
For me, the provider configurations are pre-defined, and `terraform plan` works fine even with an empty state. (`terraform apply` would also work, but would of course create a second Route53 zone resource if I don't import the existing one beforehand.)
You say "...as long as `terraform import` remains a separate command."... Separate from what? Is there another, alternative command I overlooked?
Also note that I actually run terraform through a wrapper script setting all the `TF_VAR_` environment variables, so regardless of `plan`, `import` or `apply`, the variables are always set the same way.
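For illustration, such a wrapper might look roughly like this (a sketch; the variable names and values are placeholders):

```shell
#!/bin/sh
# Export the same input variables for every terraform subcommand,
# so plan, import and apply all see identical values.
export TF_VAR_region="eu-west-1"
export TF_VAR_environment="prod"
exec terraform "$@"
```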
I hope I can find time to strip the project down to a bare-minimum example that reproduces it in the next few days...
I built a stripped-down test example: https://github.com/MartinEmrich/tf-import-test-example
For anyone, like me, who encountered this problem and doesn't have months to wait for the fix: you can just downgrade to 1.2.9 and run your `terraform import` with that version.
Incidentally, that's a good diagnostic to see if this bug is the problem that you're encountering.
@apparentlymart well that commitment to semver didn't last very long 👎 No deprecation warning, no breaking change version, nothing to tell us you were removing a well-known, oft-used pattern.
When refactoring terraform without affecting actual resources, we need the ability restored to import resources to specific addresses without a full dependency calculation blocking the import.
Please revert the change, or add a flag to the import command, `-import-only` or something. This legacy behaviour is depended on by Terraform users. Downgrading won't always be an option, and nobody wants to be hand-editing terraform states.
It seems the only time that import needs to read data sources, or even look at the configuration at all, is for provider-specific configuration. I guess that during import, terraform uses the provider to look up more info about the resource by calling its API, so that specific provider needs to be configured enough to allow terraform to read info about the resource given its ID. I think terraform should lazily try to read provider config: only if it needs to read a data source in order to use a provider should it do so. And it shouldn't attempt to load config for all providers; it should just infer the provider from the address (e.g. for `google_storagebucket.default`, terraform already infers the provider by looking at the `google` prefix). If there is a kubernetes provider or something in the configuration, terraform shouldn't try to make sure it's configured. I think this would solve the "conflicting requirements" issue, without breaking either workflow.
While I agree with the argument, just note that at least my issue does not relate to provider configuration (which is purely static and fully available here).
I am uncertain whether I am reporting the same or a related bug...
With this Terraform configuration:
```hcl
variable "public_subnet_cidr" {
  type = map(string)
  default = {
    "eu-west-1a" = "10.0.3.0/24",
    "eu-west-1b" = "10.0.4.0/24",
    "eu-west-1c" = "10.0.5.0/24",
  }
}

resource "aws_subnet" "public" {
  for_each          = var.public_subnet_cidr
  vpc_id            = "placeholder"
  availability_zone = each.key
  cidr_block        = each.value
}

resource "aws_eip" "nat" {
  for_each = aws_subnet.public
}

resource "aws_vpc" "importtest" {}
```
executing:

```shell
terraform import aws_vpc.importtest foo
```

produces (besides the "Cannot import non-existent remote object" error, since clearly there is no VPC named `foo`):
```
│ Error: Invalid for_each argument
│
│   on /home/maxb/code/tf-sandbox/applytime/main.tf line 19, in resource "aws_eip" "nat":
│   19:   for_each = aws_subnet.public
│     ├────────────────
│     │ aws_subnet.public will be known only after apply
```
whereas `terraform plan` is happily capable of understanding that the keys of `aws_subnet.public` are known.
This is potentially helpful for reproducing this issue, as whilst it does require valid AWS credentials, it doesn't require any existing Terraform state, nor actually creating any AWS resources.
> Unfortunately I think we have some conflicting requirements here. We made this change specifically because it is apparently insufficient to work with just the target of the import in isolation, as discussed in #27934. Others expect Terraform to evaluate the full configuration as if planning so that they can e.g. have a provider configuration which relies on the result of a data resource.
To me, there are two issues now whose solutions would still be compatible with #27934:
The first point is that you need to consider the entire graph, but only up to the resource you import, like a `terraform plan -target` would do. Every time I encountered this error, it was further down the graph, on an unrelated branch, exactly like in @maxb's example.
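For comparison, `terraform plan` already supports restricting the graph this way (using the resource address from @maxb's example above):

```shell
# Plan only the targeted resource and its dependencies,
# skipping unrelated branches of the graph.
terraform plan -target="aws_vpc.importtest"
```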
The second point is about information propagation, and how it differs in import. Basically, the map as a whole might be unknown, but the keys of the map are well known, even when the map itself comes from a `for_each`ed resource. Indeed, the keys are the same, so if they were known in the first resource block, they are still known in the second resource block. Again, this information propagation clearly works in plan.
> The refactoring we did here was intended as a first step in that direction -- with deprecation of the separate `terraform import` command being the ultimate end goal
Why would this be a goal at all? The ability to import is a critical capability for refactoring, and for ingesting existing infrastructure into terraform for management. In the real world, one can't just destroy and recreate resources to bring them into a terraform state.
Deprecating the `terraform import` command is not the same as deprecating the entire concept of importing.
Rather, the suggestion is that importing needs to become part of the full plan-and-apply process, because some Terraform configurations contain dependencies on values being known, which are very tricky to get right whilst trying to treat import as a special one-off operation outside of the usual logic.
> Deprecating the terraform import command is not the same as deprecating the entire concept of importing.
thank you for the clarification. is the team envisioning something like the `moved` blocks -- but for `import`?
if this development is distant, @lemaitre's suggestion above would be beneficial in the current transitional state.
Import blocks are already present in 1.5. https://developer.hashicorp.com/terraform/language/import
@jabberwik thanks! Indeed I was able to replace my "terraform import" command with such import blocks in a .tf file.
I now have to generate the files in my script (and keep them in sync with where the values come from, add them to .gitignore), so it's clearly a bit more "clunky" than before, but it works!
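For anyone scripting this as well, generating the file can be as simple as (a sketch; the filename, resource address and zone ID are placeholders):

```shell
#!/bin/sh
# Write an imports.tf containing one import block per resource,
# mirroring what 'terraform import ADDRESS ID' did on the CLI.
cat > imports.tf <<'EOF'
import {
  to = aws_route53_zone.platformdomain
  id = "AHOSTEDZONEID"
}
EOF
```

with `imports.tf` added to `.gitignore` and picked up by the next `terraform plan`.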
It would have been nice for terraform to keep the old import command beyond 1.2.9 (at least until after 1.5), and print a warning like:
"terraform import is deprecated, and will be removed with 1.6. Please replace it with an import block in your terraform templates"
Thanks for the feedback. The `terraform import` command is not deprecated, and will not be removed in v1.6. We welcome suggestions as to how to improve our documentation and changelog.
Even with import blocks, the `terraform import` tool was great for making surgical edits to state without needing completely valid terraform code. I would really like a "dumb" state manipulation command, like maybe `terraform state add`.
Right now we have two types of terraform modules: one that was rewritten from scratch with new infra created along with it, and one that we consider legacy. We can't afford to recreate our legacy environment, but we still want to use our new modules in it. So we're trying to retrofit this legacy env into our new modules so that we have one consistent set of modules. However, with the lack of a "dumb" import command, it's extremely difficult to import the legacy env's resources into our new modules, because terraform keeps trying to read data blocks that don't exist, or validate `for_each` expressions that rely on resources that don't exist. The resources all exist; we just need to import them.
Terraform Version
Terraform Configuration Files
main.tf
managmenthost/main.tf
Test example: https://github.com/MartinEmrich/tf-import-test-example
Debug Output
https://gist.github.com/MartinEmrich/6ff2c5dc77ea1cbc6dd02fb0345c080e
Expected Behavior
I try to import a resource (`aws_route53_zone.platformdomain`) before issuing `terraform apply` etc. With terraform up to 1.2.9, this worked. Starting with terraform 1.3.0, it no longer works.
Actual Behavior
As of terraform 1.3.0, I receive this error message:
Steps to Reproduce
```shell
./terraform init
./terraform import aws_route53_zone.platformdomain AHOSTEDZONEID
```
Additional Context
With any of terraform 1.2.9, 1.3.0 or 1.3.3, I can issue `terraform plan` or `terraform apply` with no issues. Indeed, as that `managementhost` module/subdirectory depends on the `vpc` module anyway, applying this in one go is no problem, so I see no dependency issues.

Apparently starting with 1.3.0, terraform tries to "calculate" more information during import, and without that `vpc` module already rolled out, that `data` element cannot be calculated. But as these resources are unrelated to the resource I try to import, I consider this a bug; I expect `terraform import` to just write the ID into the state store.

I hope the debug log and the pseudocode are helpful. If absolutely necessary, I can still try to condense the project into a publishable git repository for reproduction.
References